Article

A Framework on Division of Work Tasks between Humans and Robots in the Home

1 Design of Information Systems Research Group, Department of Informatics, Faculty of Mathematics and Natural Sciences, University of Oslo, 0373 Oslo, Norway
2 Research Group for Robotics and Intelligent Systems, Department of Informatics, Faculty of Mathematics and Natural Sciences, University of Oslo, 0373 Oslo, Norway
3 Faculty of Health Studies, VID Specialized University, 0370 Oslo, Norway
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2020, 4(3), 44; https://doi.org/10.3390/mti4030044
Submission received: 26 May 2020 / Revised: 13 July 2020 / Accepted: 23 July 2020 / Published: 27 July 2020

Abstract

This paper analyzes a work activity in the home, cleaning, performed by two actors: a human and a robot. There are now many attempts to automate this activity through the use of robots. However, the activity of cleaning is not important in and of itself; it is used instrumentally to understand if and how robots can be integrated into current and future homes. The theoretical framework of the paper is based on empirical work collected as part of the Multimodal Elderly Care Systems (MECS) project. The study proposes a framework for the division of work tasks between humans and robots, anchored in existing research and our empirical findings. Swim-lane diagrams are used to visualize the tasks performed by each of the two actors (WHAT), to ascertain the tasks’ temporality (WHEN), and to trace their distribution and transitioning from one actor to the other (WHERE). The framework covers several dimensions of work tasks, such as the types of work tasks, but also their temporality and spatiality, illustrating linear, parallel, sequential, and distributed tasks in a shared or non-shared space. The study’s contribution lies in laying a foundation for analyzing the work tasks that robots integrated into or used in the home may generate for humans, along with their multimodal interactions. Finally, the framework can be used to visualize, plan, and design work tasks for the human and for the robot, respectively, and the division of work between them.

1. Introduction

This study is an empirical study conducted as part of the Multimodal Elderly Care Systems (MECS) project, which aims to develop a robot for use in home-care services for the elderly. Within the framework of the MECS project, this paper investigates semi-autonomous robots as moving entities in the home that change the tasks and routines of the people living there. To this end, we studied the current literature on the types of robots employed in the homes of independently living elderly [1,2] and non-elderly people. Several previous studies have shown that the instrumental use of vacuum cleaner robots is useful for understanding the design implications of introducing robots into homes (see [3,4,5]). Many of the elderly consider cleaning their homes a work activity that requires a significant amount of physical effort and concentration, and the literature shows that they often need support with this type of work activity. Similarly, the majority of the elderly in the MECS project, during an initial phase of the study, talked about robots and often wished for personal domestic service robots that could help them with work tasks in the home [6,7]. Moreover, the activity of cleaning was demonstrated to be of high importance for the elderly, and many of them usually received help with home cleaning every other week. The researchers in the project therefore decided to offer personal service robots, such as semi-autonomous vacuum cleaner robots (e.g., iRobot Roomba and Neato BotVac), so that both parties could benefit: we were interested in what kind of work tasks are generated by the introduction of a moving object, a semi-autonomous robot, in the home, while the elderly were interested in receiving help with the cleaning task. In this way, we could instrumentally use the data gathered to explore the work tasks that come along with the introduction of a robot in a home setting.
Thus, this paper investigates and abstracts the types of work tasks generated by introducing a semi-autonomous robot in the home. The research question addressed in this paper is: what types of work tasks are generated by a robot, a semi-autonomous moving entity, in the home? To answer this question, we introduced a robot vacuum cleaner into the homes of several participants and based our theoretical framework on a model developed by Verne [7] and Verne and Bratteteig [8]. We use swim-lane diagrams to visualize the tasks performed by each of the two actors, i.e., the human and the robot (WHAT), to ascertain the tasks’ temporality (WHEN), and their spatial distribution and transitioning from one actor to the other (WHERE). In this way, the paper adds new dimensions to the existing theoretical framework, concerning the temporality and spatiality of tasks, illustrating linear, parallel, sequential, and distributed tasks. The contribution of this paper is the further development of the theoretical framework of Verne [7] and Verne and Bratteteig [8] through these new dimensions. The framework lays the foundation for analyzing work tasks that robots integrated into or used in home settings may generate for humans. This is useful within the MECS project and beyond it, when analyzing new, redundant, or temporally or spatially distributed work tasks. The framework can also be used in other settings, as we show later in the paper. Specifically, it helps us identify, understand, visualize, plan, and design the automated work tasks carried out by the robot, and those carried out by the human, when introducing a robot into a home setting, a cluttered and dynamic environment.
This paper continues in Section 2 by defining concepts such as work and tasks, as defined in the computer-supported cooperative work (CSCW) literature. Section 3, Literature Review, gives an overview of similar studies that have been undertaken and are relevant to this work; in this section, we also present how this study differs and why it is important. In Section 4, Theoretical Framework, the theoretical model from Verne [7] and Verne and Bratteteig [8] is described. Section 5 presents the Methodology and Methods in detail. Section 6 thereafter presents the empirical data, visualizes different types of tasks and their dimensions, and finally presents the resulting framework. Finally, Section 7, Discussion, reflects on the method and setting of the study and discusses the proposed framework. A Summary and Conclusion section, followed by Future Work, follows thereafter.

2. Definitions of Concepts

2.1. Division of Work

Strauss [9] first wrote about work and the division of labor in order to understand work in complex projects. He referred to all the tasks that make up the work as ‘the arc of work’ (p. 2). The arc of work ‘for any given trajectory’ is defined as ‘consist[ing] of the totality of tasks arrayed both sequentially and simultaneously along the course of the trajectory or of the project’. According to the author, some of the tasks are foreseen, while others are unplanned and may occur unexpectedly during the trajectory. An arc of work may include arc phases, types of work, clusters of tasks, and the articulation of tasks. Sometimes the division of work that makes up the arc of work is based on the particular skills of the actors. To understand the tasks as part of the work activity is to understand the division of work.

2.2. Actors’ Part in the Division of Work

Within coordinative work, several objects of coordination are identified as being part of the division of work, such as actors, roles, responsibilities and obligations, tasks with an operational intention, activities, conceptual structures, and other types of resources [10]. According to the Oxford English Dictionary, an actor is a participant who takes part in a process or action [11]. The two actors considered in this study were involved in the work activity of cleaning: the human and the robot. Discussing if and how the robot is both an actor and the main tool of the work activity is outside the scope of this paper.

2.3. Framing the Concept of Work Tasks

Work tasks are related to questions such as: ‘what, where, when, how, for how long, how complex, how well defined are their boundaries, how attainable are they under current working conditions, how precisely are they defined […], and what is the expected level of the performance’ [9]. According to Gasser [12], a work situation is translated into a work task and its context (p. 211). In this study, the work tasks were those involved in performing all the necessary steps of the work activity of cleaning; the context was that of cleaning, a shared activity between the two actors, the human and the robot. According to the author, a work task involves an agenda, a place where it is executed, and an interval during which it is executed; it requires several resources and has to be carried out by one or several people. In this study, however, a robot was also considered a type of actor, as previously mentioned. Each task is part of a division of labor, a system of tasks, referred to in this study as the work activity, and it is related to other tasks [12]. A task chain is made up of two or more work tasks that follow one another sequentially. In complex structures, where the division of labor involves many tasks that may intersect with each other, the tasks form a production lattice. The work tasks in the production lattice need to be aligned according to the resources available, both material and human.
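Gasser’s notions of a task chain and a production lattice can be read as simple data structures: a chain is a linear sequence of tasks, while a lattice is a graph in which tasks from several chains intersect over shared prerequisites. The sketch below is our own illustration of that reading, with hypothetical cleaning tasks; it is not Gasser’s formalism.

```python
# A task chain: tasks that follow one another sequentially.
cleaning_chain = ["clear the floor", "vacuum", "empty the dust bin"]

# A production lattice: tasks from intersecting chains, modeled as a
# dependency graph (task -> tasks that must be completed first).
lattice = {
    "vacuum": ["clear the floor"],
    "empty the dust bin": ["vacuum"],
    "mop": ["vacuum"],  # a second chain intersecting at "vacuum"
}

def ready_tasks(done, graph):
    """Tasks whose prerequisites are all completed, i.e., tasks that are
    aligned with the resources currently available."""
    return [t for t, deps in graph.items()
            if t not in done and all(d in done for d in deps)]
```

For instance, once ‘clear the floor’ is done, only ‘vacuum’ is ready; once ‘vacuum’ is also done, both remaining chains can proceed.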

2.4. Why We Need to Understand the Division of Work between the Actors and Their Tasks

Each of the actors performs certain tasks that contribute to that actor’s operation, whether the human’s or the robot’s. The system of all the tasks included in the operations performed by each actor forms the work activity; in this case, the joint work activity is cleaning. To understand the tasks performed by the human and the robot, respectively, we first need to understand the concept of work tasks and how they are part of the division of work. Moreover, using these concepts to understand the division of work helps us to understand the accountability for work: who does what, what resources are allocated to whom (the human or the robot) and when, and what situations are encountered while the work is performed.

3. Literature Review

In general, the literature shows that studies on robots used within the home environment are sometimes conducted in virtual mock-home environments or living labs. These studies often fail to reproduce the complexity of a real home as an environment for a robot to navigate [13]. In this section, we review some of the existing studies investigating the use of robots in the home, as well as some of the existing frameworks addressing robots in the home or the work that comes along with automation.

3.1. Robots in the Home

One study by Sung et al. [14] looked into understanding domestic robot owners through an examination of the Roomba vacuum cleaner robot. The study was based on an online survey. While it lacked data collected directly from the natural setting where the events occur, the study still confirmed the changes that the robots’ owners made in the home to facilitate the robots’ navigation of the rooms, a phenomenon referred to as ‘roombarization’ [14]. Moreover, based on the respondents’ answers, the study illustrated that users engaged in tasks such as watching the robot, ascribing gender to the robot, naming it, talking to it, hacking its system, or playing and experimenting with it [14].
We know, however, from earlier research that several studies have been conducted investigating the use of robots in a natural setting, such as the home. For instance, Forlizzi and DiSalvo [3] explained how studies about domestic robots are mostly carried out in experimental settings in the lab, and they argued for more studies ‘in the wild’. As such, they conducted a study on service robots in the domestic environment, using the Roomba vacuum cleaner. According to the authors, the practice of cleaning also reflects the structure of the home. Amongst their findings, they explained that the participants expected the robot to develop its knowledge over time; that the floors needed to be clutter-free; that many participants had to do workarounds to facilitate the robot’s movement; that the robot often bumped into things; that the participants developed social relationships with the product; and that the robot became a value-laden symbol.
Along the same lines, other studies confirmed the need for more studies on robot use in the home environment. For instance, some studies have investigated the home organization to inform domestic robot behavior [13,15]. A study on kitchen organization explained how the home, a personal space, gives access to information that otherwise is hard to extract from photos, videos, or other sensor data [13]. The study argued that while there is an increasing interest in domestic robots, there is a lack of knowledge to illustrate the complexity of the home [13]. Moreover, the study also supported the idea that it is necessary to understand the users’ needs and demands [13]. However, assigning tasks to robots implies not only technical challenges but also the calibration of the users’ expectations of the robots’ capabilities [15].
Furthermore, Forlizzi [16] conducted a study focusing on how robotic vacuum cleaning products become social products in the home. She explained how the home is an interesting place to study new social robotic products, since many human needs reside in the home environment. Her ethnographic study was conducted in the homes of elderly and non-elderly people. Her findings illustrated that robotic products in the home triggered changes in the household activities and tasks undertaken by the household members and in the nature of their work, i.e., ‘who cleaned and how they cleaned’, the frequency of their cleaning activities, and how much autonomy was given to the robot [16] (p. 133).
Forlizzi [4] developed the product ecology framework by studying the long-term use of robotic vacuum cleaners, such as the Roomba Discovery and the Hoover Flair, in the homes of elderly and non-elderly people. She developed this framework to understand the social relationships and user experiences that develop when using such intelligent robotic products [4]. As she put it, the ‘performance levels [of the elderly] decline more when they are coping with environments built for younger people’ [4] (p. 10). Moreover, she showed that many elderly people had reduced mobility and cognitive impairments, and encountered challenges in performing household activities. According to her, the inability to cope with home maintenance created fear and anxiety amongst elderly people. This sometimes led users to downsize their homes, give up personal items, and even move into a care facility, leading them to a ‘reconstruction of the self’ [4]. At the same time, she explained that, in general, robotic products are not built with any consideration for the aesthetic, social, and emotional relations that elderly people build with the product [4]. Moreover, according to her, the structure of most homes is not currently designed to accommodate such moving objects. She argued that homes of the future should be able to accommodate ubiquitous services and automated service robots, while allowing elderly people to retain their integrity, dignity, and independence. In addition, Sung et al. [17] recognized that robots shape relationships in the home. The authors conducted long-term studies in 30 households where Roomba vacuum cleaners were deployed and observed; as a result of the study, the Domestic Robot Ecology (DRE) was framed. The study identified that the robots were considered to fulfill varying roles: as a tool, an agent, a mediating factor for change, or a mediator for modifying relationships amongst the household members.
However, while the authors mentioned that the robot triggered new types of domestic tasks, these were neither identified nor explored in-depth.

3.2. Available Frameworks

Some theoretical frameworks studying the use of robots in the home or the work that comes with automation were presented by Soma et al. [18], You and Robert [19], Ijtsma et al. [20], Lee and Paine [21], and Ajoudani et al. [22].
For instance, Soma et al. [18] presented the robot facilitation framework. The framework describes different types of facilitation that the human needs to perform when a robot is introduced into a hospital setting or a home: pre-, peri-, and post-facilitation. While interesting, the framework limits itself to the types of work carried out by the human, not those carried out by the robot. Moreover, it only addresses the temporal perspective of the human’s work: before, during, and after the work activity.
Furthermore, You and Robert [19] discussed human-robot teams framed within the Inputs-Mediators-Outputs-Inputs (IMOI) framework. According to the authors, robots increasingly become part of teams and thus participate in teamwork. However, they also pointed out that frameworks for understanding human-robot teams and the factors that enable or hinder their work are still lacking, and that existing frameworks often focus either on situational awareness or on workload. The authors further argued that none of the existing frameworks focus specifically on human-robot teams as dynamic and adaptive teams whose actors need to adjust their actions throughout the team’s life cycle. Thus, the authors proposed the IMOI framework, which comprises inputs, mediators, and outputs, arguing that these are among the key elements of the teams’ and the actors’ life cycles. This is an interesting framework; however, it is limited to theory, lacking empirical evidence that illustrates and exemplifies it.
Along the same lines, Ijtsma et al. [20] simulated human-robot teamwork dynamics to improve the work strategies in human-robot teams. They proposed visualizing the work between different actors, or agents, as they called them, through graph network visualization. In other words, they illustrated different strategies adopted by the team members, to identify possible dependencies or constraints between the actors, and whether the work to be carried out by different actors is feasible at given times. Specifically, the study simulated the work dynamics of a human-robot team for space operations, consisting of two astronauts and a rover. While this work provided significant insights into human-robot teamwork, it was limited in two respects: (1) it was confined to a simulated environment, and (2) it did not simulate a home setting, a complex, dynamic, and cluttered place.
Further, Lee and Paine [21] presented the Model for Coordinated Action (MoCA), which describes the actions taken by actors involved in a work activity with a shared goal through one or several “overlapping fields of actions” (p. 6). The authors described seven dimensions of MoCA: (1) the synchronicity of work amongst the actors; (2) the physical distribution of the actors’ actions; (3) the scale, representing the number of actors involved in a shared work activity; (4) the number of communities of practice involved in the work activity; (5) the nascence, referring to new and old coordinated actions; (6) the planned permanence of the collaborative arrangement, where the coordinated action can be temporary or permanent; and (7) the turnover, referring to how stable the set of actors participating in the shared work activity is. According to the authors, some of these dimensions, such as (5) nascence and (6) planned permanence, are less explored in the CSCW literature. This is also one of the reasons why we wish to address these dimensions in this paper through our proposed framework.
Finally, Ajoudani et al. [22] presented the state-of-the-art on human-robot collaboration. The authors emphasized that for a successful human-robot collaboration, a shared authority framework needs to be established between the two actors. The authors argued that while the hardware components are crucial in such a collaboration, there are also other factors, such as the intermediate interfaces between the human and the robot and the control or interaction modalities. According to the authors, multimodal interaction modalities, through feed-forward and feedback communication channels, can address even complex interaction scenarios. Their review paper, which was rich in examples, still lacked the application of the framework to specific use-cases.
We continue in the next section with a theoretical framework that is more appropriate to our area of interest, and which we will later develop further by adding new dimensions to it.

4. Theoretical Framework: The Model from Verne (2015) and Verne and Bratteteig (2016)

As we have shown in our literature review, many studies have investigated the use of robots in the home or have discussed robot frameworks. Although these studies informed our research, they focused neither on the types of tasks shared between humans and the robot nor on which types of tasks become automated and which do not. However, we identified one study that is relevant for our work, namely the study from Verne [23]. Verne’s [23] study on the lawnmower robot used the theoretical framework on tasks developed in her Ph.D. thesis [7] and in her co-authored paper (see [8]). Since the author(s) focused on the tasks that arise as a consequence of the automation of work, we found this theoretical framework relevant and useful for this study. This framework indirectly addresses dimension (5), the nascence of work, and dimension (6), planned permanence, as described in Lee and Paine’s study [21]. The types of tasks that arise as a result of the automation of work are summarized in Table 1 below. However, these definitions only cover the tasks that come along with the automation of work for desktop interface systems, not robots.
The same author used the framework in a study with robots, showcasing human adaptation to a robot lawnmower [23]. In this auto-ethnographic study, the author described how a robot mower automated certain human work tasks but also introduced new tasks to be performed by the human. While the author’s expectation, as a user, was that acquiring the robot lawnmower would automate maintenance work in the garden, she and her husband soon observed that ensuring the robot could carry out its work involved old, new, and genuinely new tasks. The old tasks included manually mowing areas of the lawn that the robot did not reach and removing things from the lawn before mowing. The new tasks included: technical tasks to install the robot and its base station; removing obstacles from the lawn to avoid error messages from the robot, e.g., regularly picking up apples from the ground so the robot could run freely without cutting them; hiring someone with a stump grubber to remove a tree stump from the garden, to offer the lawnmower robot a better navigation environment; manual work to remove the wood chippings, sow new grass, and repair a patch; and regularly checking whether the robot was stuck, which interrupted other activities. Other new tasks that the users adopted to keep the robot working were: changing their habits for watering the garden, as the robot did not function well when the lawn was wet; devising workarounds to protect the robot from the rain, as its electronics could be damaged irreparably; and changing the layout of the garden and re-installing the robot to optimize its performance.
However, the study from Verne [23] focused on a lawnmower and its work performed in the garden, not in the home. The garden is an outdoor place and is inherently different from the inside of a home. To inform the future development of robots used indoors, research into domestic robots is necessary to understand the work task division between humans and robots, new tasks that need to be undertaken, old tasks that are replaced, and redundant tasks. Therefore, this study is both interesting and relevant in answering our research question: what are the types of tasks and work that are generated by a robot, a semi-autonomous moving entity, in the home?

5. Methodology and Methods

The study followed an interpretative, analytical-qualitative approach. This section gives an overview of the participants, data collection, data analysis, and ethical considerations.

5.1. Participants

The participants in this study were elderly and non-elderly people. The study was based on data collected from 13 participants and ten other known household members; one of the households had a pet. The elderly participants were recruited through personal visits to the MECS partner accommodation facility for the elderly and through snowball sampling (i.e., the elderly passed information about the study on by word-of-mouth to other elderly people they knew). The non-elderly were recruited through personal contact. We chose these two groups for three main reasons. First, we wished to see which robot suits the elderly best, i.e., is less technically difficult to use; we therefore tested several robot types with the non-elderly participants. For instance, after a short period of using the Samsung PowerBot, we observed that the robot was not appropriate for the elderly, partly because they required a robot that is small in size and easy to interact with. Second, we wished to see whether both groups experienced the same kinds of situations when having a semi-autonomous robot in their homes. Finally, we wished to gauge the technical level of difficulty encountered by both groups. However, our intention was never to compare the experiences of the individual participants, but to look at the situations they experienced and investigate potential challenges that a robot may bring along when introduced into the home. Details about the participants are given in Table 2 and Table 3.

5.2. Data Collection

The elderly participants had the robots in their homes for about one month each, whereas the non-elderly participants used the robots in their homes for between about one week and one month. While the elderly participants lived alone, the non-elderly participants lived with other household members.
The experiences of the elderly participants were documented through the author’s notes and observations, the participants’ diary notes, 115 photos taken by the first author during visits to the elderly peoples’ homes, and several hours of audio-recorded semi-structured interviews that took place at the end of the study. The experiences of the non-elderly participants were documented through their diary notes and 32 photos. An overview of the data collected is given in Table 2 and Table 3.

5.3. Data Analysis

The interviews were fully transcribed verbatim by the first author (SD) and analyzed following Braun and Clarke’s [24] thematic analysis method. The steps followed were: (1) familiarization with the data, (2) coding each of the data collection resources (n = 222 codes), (3) collating the codes present across different data collection resources into initial themes, (4) reviewing the initial themes, (5) defining and naming the themes.
In the first step (step 1), we familiarized ourselves with the data by creating a map of the data and data resources (Table 2 and Table 3). The research question was put aside so that we remained open to any novelties that might emerge from the data; at this stage, we focused on what was interesting for the participants. During the next step (step 2), we coded the resources and grouped them into categories based on the data sources: interviews, the first author’s diary and observation notes, and the elderly’s diary notes. The data were coded line-by-line. The next step (step 3) was to collate the codes into sub-categories for each of the data sources. This was carried out by the first author (SD) and documented through color-coded post-it notes, as shown in Figure 1. The collated codes resulted in n = 222 codes that were then organized into themes. Some of the identified themes recurred across the data coming from different sources. We paid careful attention to whether the elders and non-elders encountered the same types of issues with having a semi-autonomous robot in the home, and how they dealt with the challenges that arose. Finally, after the authors (SD and HJ) had discussed the collated codes multiple times, we reviewed the themes (step 4). The final themes (step 5) were: issues related to the robot deployment in the home (blue), issues related to the home space (red), and issues related to human aspects, such as emotions and perceived autonomy (green). An overview of the final themes is given in Figure 2. In our earlier work, reported in [5], we focused on interpreting the participants’ experiences with the robot (blue and green themes). The focus of this study is understanding the types of tasks and work generated by a semi-autonomous robot introduced in the home (blue and red themes).
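The collating of codes into themes (steps 3-5) can be sketched programmatically. The codes, sources, and code-to-theme mapping below are illustrative stand-ins of our own invention, not extracts from the study’s actual data set.

```python
from collections import defaultdict

# Hypothetical line-by-line codes paired with the data source they came from;
# the real study produced n = 222 codes across interviews, diary notes,
# and observation notes.
coded_extracts = [
    ("robot stuck under sofa", "interview"),
    ("moved cables before running robot", "diary"),
    ("felt safer with robot around", "interview"),
    ("cleared floor of clutter", "observation"),
]

# Illustrative mapping from codes to the three final themes
# (robot deployment = blue, home space = red, human aspects = green).
code_to_theme = {
    "robot stuck under sofa": "robot deployment",
    "moved cables before running robot": "home space",
    "felt safer with robot around": "human aspects",
    "cleared floor of clutter": "home space",
}

def collate(extracts, mapping):
    """Collate individual codes into themes, keeping their data sources,
    so recurring themes across sources become visible."""
    themes = defaultdict(list)
    for code, source in extracts:
        themes[mapping[code]].append((code, source))
    return dict(themes)

themes = collate(coded_extracts, code_to_theme)
```

Grouping the coded extracts this way makes it easy to check which themes recur across sources, mirroring the manual post-it collation shown in Figure 1.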

5.4. Ethical Considerations

The project is in line with the ethical guidelines from the Norwegian Center for Research Data (NSD), project number 58689. The data were encrypted and stored on the Service for Sensitive Data at the University of Oslo, Norway. The participants were informed about the study beforehand and could withdraw from it at any point without any consequences. All participants signed an informed consent form.

6. Findings

Our findings illustrate two use-case scenarios, represented as two situations: (1) the human’s work tasks when using an ordinary device; and (2) the division of work tasks in a joint human-robot work activity in the home. We chose to visualize these two situations because Situation 1 does not include automation of work, while Situation 2 does; contrasting them emphasizes the tasks that come with automation. To make this comparison, we decomposed the work performed by the human in Situation 1, and by the human and the robot in Situation 2, into tasks. The illustration of Situation 1 is based on general experience with an ordinary device, while Situation 2 is anchored in our data collection and analysis. This helps us to better understand the different types of tasks that come along with the automation of work in a joint human-robot activity in the home.
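One way to make this decomposition concrete is a small data model in which each task records its actor (which swim lane, WHAT by WHOM), its position in time (WHEN), and its location (WHERE). The field names and example tasks below are our own illustration under these assumptions, not notation from the framework itself.

```python
from dataclasses import dataclass
from enum import Enum

class Actor(Enum):
    HUMAN = "human"
    ROBOT = "robot"

@dataclass
class Task:
    what: str     # the task performed
    actor: Actor  # which swim lane the task belongs to
    when: int     # ordinal position in time (temporality)
    where: str    # location in the home (spatiality)

# A hypothetical fragment of Situation 2: the human prepares the space,
# then hands the work over to the robot.
tasks = [
    Task("remove obstacles from the floor", Actor.HUMAN, 1, "living room"),
    Task("press the start button", Actor.HUMAN, 2, "living room"),
    Task("vacuum the floor", Actor.ROBOT, 3, "living room"),
    Task("return to the base station", Actor.ROBOT, 4, "hallway"),
]

def lane(actor):
    """One swim lane: the tasks of a single actor, ordered by time."""
    return sorted((t for t in tasks if t.actor is actor), key=lambda t: t.when)
```

Reading off one lane at a time reproduces a swim-lane diagram row; comparing `when` values across lanes distinguishes sequential from parallel tasks, and comparing `where` values distinguishes shared from non-shared space.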

6.1. Use-Case Scenario 1: Human Work Tasks When Using an Ordinary Device (Situation 1)

Situation 1 illustrates a human (user) using an ordinary device. The navigation area for an ordinary vacuum cleaner is usually decided upon and controlled manually by the user: the user decides where the vacuum cleaner should clean, and if there are any obstacles in the way, the user picks them up. In this case, the device is a tool rather than a (semi-)independent actor. Figure 3 illustrates a typical user journey in the vacuum cleaning operation when using an ordinary vacuum cleaner whose navigation path is decided by the user. As previously mentioned, the visualization is based on general experience with an ordinary device.

6.2. Use-Case Scenario 2: Division of Work Tasks in Joint Human-Robot Work Activity in the Home (Situation 2)

We divided the work and tasks carried out by each of the actors, i.e., the human and the robot, respectively, after analyzing the data through thematic analysis [24]. Moreover, we also classified the tasks carried out as joint action between the human and the robot, with or without using the app to control the robot. In this sub-section, we illustrate these different work tasks, supporting them with examples from our data.

6.2.1. Work Performed by the Human Actor

Based on our data, the human seems, in each case, to need to carry out certain preparatory work to enable the robot to work. For example, the human needs to remove obstacles, press the robot's start button, and stop the robot by pressing the button or through the app. Some examples of removing obstacles from our participants include:
(Participant): I got my brother fixing the cables under the bed, so they are not in its way. […] If it had gotten stuck there, I would not have been able to come down there. I was very afraid of this. So no cables were supposed to be there! I felt then so much better! (Interview, elderly participant).
(Interviewer): Okay … However, you also wrote in your diary notes that you had to clean a bit before you could run the robot.
(Participant): I had to do that more than with an ordinary vacuum cleaner, isn’t it? I have lots of chairs here. I have put those two on top of each other because otherwise, it stops all the time. So I have removed them. Moreover, the cables … I have tried to remove those. Yes, I have cleaned a bit. (Interview, elderly participant).
Another example from one of our participants is from the diary notes written by one of the non-elderly participants, in which the participant explains how he had to remove obstacles, an operation that took up to two hours, before being able to run the robot:
(Participant): Having experienced a couple of weeks with a robot vacuum cleaner at home, I learned that for the vacuum cleaner to do the job without interruptions, the floor needs to ‘be clean’—understood as tidy. Therefore, I set out to unclutter the floor today. I spent about two hours with moving things from the floor and putting the chair upon the table before setting up the Botvac. There is a reason why things end up on the floor—if there is too much stuff about storage capacities on shelves. While putting down the charging station, finding a 220 V outlet, I thought about means and end. The ‘goal’ I had was to make ‘clean floor’; but to get to this—I needed to install something on the floor … A paradox. (Diary notes, non-elderly participant).
Another situation is illustrated when one human chose to use the app to control the robot. When the robot is controlled through an app and the robot gets stuck, the user has to go to where the robot is and 'help' the robot do its work:
(Participant): I pressed the ‘home’ button, it started. After a while, it got stuck. I remembered the previous installation at home when the app gave notifications about this—when I was out of the house. This information was disturbing at that time since I did not want to do anything with it. It interrupted a nice train journey I remember now, and started a train of thoughts of where it was stuck, and why (since I had done my best to make a ‘clean floor’ there well. (Diary notes, non-elderly participant)
Other types of tasks are those usually carried out only once, such as installing the robot before running it for the first time, administering its settings, or installing the robot app on the smartphone, if the human had one.

6.2.2. Work Carried out by the Human in Breakdown Situations

The work carried out by the human in breakdown situations refers to situations where the human needs to intervene in the robot's work so that the robot can continue working. For instance, the participants described situations where the robot randomly started by itself and began cleaning. In such situations, the human often needs to carry the robot back to its base station. Another situation encountered by the participants was when the robot started cleaning by itself during dinnertime. In this situation, the human had to stop the robot and again carry it to its base station. Other situations where the human had to intervene for the robot to work properly included when the robot got stuck, or when the robot 'escaped' the boundaries of the home. In these situations, the robot often needed the human's support to get 'unstuck', or to be carried back within the boundaries of the home. Here are some examples from the participants:
(Participant): One time when I pressed on Home, it started going around by itself, so I had to carry it back [meaning back to the charging station]. (Interview)
(Participant): The robot got stuck in the carpet’s tassels and stayed still. It took some time to free R from the tassels, so I took away the carpet. […] Is R made for rooms without carpets and some furniture? (Diary notes, elderly participant)
(Participant): I had to take away the cables a couple of times, and it was trying to take down the lamps. However, I felt that I had to « save » the cables … I had to! I should say. (Diary notes, elderly participant)

6.2.3. Work Performed by the Robot Actor

Our data show that the robots seemed to navigate their environments inconsistently. For instance, the robots followed an incoherent path, going from one room to another and then coming back to the first room. Moreover, the data also show that the robots seemed to clean the same place over and over again. Furthermore, the robots frequently got stuck on obstacles, such as cables (including laptop cables) and carpets, or under small tables. Here are some examples from our participants:
(Participant): I think it starts in one room, and then it goes to another, and then it goes again to the first room. I think it is a bit strange that it does not finish in the first room, and it goes perhaps to the kitchen, and then it comes back, and it continues likes this and then goes out again. I think it was very strange (break), really, very strange. (Interview, elderly participant)
(Participant): […] And suddenly it started going by itself one morning, though it was very strange. (Interview, elderly participant).

6.2.4. Division of Work Tasks between the Human and the Robot

Based on the examples that emerged from our data, we compressed the findings into an illustration: the next diagram (Figure 4) shows the division of work tasks between the human and the robot, i.e., Situation 2. Specifically, Situation 2 is decomposed by applying customer journey analysis (CJA) and the customer journey framework (CJF). For this purpose, we employed visualizations from service design, following [25]. We separated the trajectories of the human and the robot, respectively, and their touchpoints, using a swim-lane diagram, to offer a clear illustration of the division of work tasks. In addition to the types of tasks illustrated in Situation 1, it is also possible to notice some deviations on the robot's side: the robot starts by itself (D1), the robot cleans the same place over and over again (D2), the robot escapes the room (D3), the robot gets stuck (D4), or the robot does not return by itself to the base station (D5). Each of these robot deviations creates new interventions for the human: stopping the robot, removing obstacles while the robot runs, moving the robot from one place to another (I4), or bringing the robot back to its base station (I5).
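The relationship between robot deviations and human interventions can be sketched as a simple lookup table. This is an illustrative sketch, not an artifact of the study: the exact pairing of deviations to interventions is our reading of the findings, and the text itself numbers only interventions I4 and I5.

```python
# Robot deviations (D1-D5) and the human interventions they generate,
# as read from the findings. NOTE: the deviation-to-intervention
# pairing is illustrative; the source text numbers only I4 and I5.
DEVIATIONS = {
    "D1": "robot starts by itself",
    "D2": "robot cleans the same place over and over again",
    "D3": "robot escapes the room",
    "D4": "robot gets stuck",
    "D5": "robot does not return by itself to the base station",
}

INTERVENTIONS = {
    "D1": "stop the robot",
    "D2": "stop the robot",
    "D3": "move the robot from one place to another (I4)",
    "D4": "remove obstacles while the robot runs",
    "D5": "bring the robot back to its base station (I5)",
}

def human_intervention(deviation_code: str) -> str:
    """Return the human work task generated by a given robot deviation."""
    if deviation_code not in DEVIATIONS:
        raise KeyError(f"unknown deviation: {deviation_code}")
    return INTERVENTIONS[deviation_code]
```

The point of the table form is that every robot deviation has a human-side counterpart: automation here does not remove human work but relocates it.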

6.3. The Proposed Framework on Division of Work Tasks between Humans and Robots

Our findings show that when a robot is introduced into a home, the robot follows a trajectory of its own, and the human actor's journey changes. In complex structures, the division of labor involves many tasks that may intersect with each other and form the production lattice [12] (p. 210). The tasks are also more intertwined. For instance, if a robot were integrated into the home as part of a larger system, where several actors are part of the same system, the actors' trajectories would be even more complex than the one we illustrated in Situation 2. However, we argue that our findings, shown through the simple example of using a semi-autonomous vacuum cleaner robot, can easily be understood, even by non-roboticists. Our paper illustrates an example similar to Suchman's copy machine [26] and her situated actions. Although at first sight a trivial example, it illustrates well the complexity of the design and how a semi-autonomous robot introduced in the home can change the routines of the people living there. However, our empirical example is slightly different, considering that the robot actor has some autonomy itself, as it can move around, compared to Suchman's copy machine, which was a static device. The semi-autonomous vacuum cleaner is also characterized by multimodal interaction, including movement, feedback as motion [5,27,28], and visual and audio feedback.
Thus, we portray two situations through Figure 3 and Figure 4: (1) human work tasks when using an ordinary vacuum cleaner, and (2) division of work tasks between humans and the robot. The purpose of visualizing the actors’ trajectories was to facilitate unpacking the types of work tasks generated by the introduction or use of a robot in the home.

6.3.1. New Dimensions of Tasks: Temporal and Spatial Distribution

The theoretical framework in this work was initially based on the model presented by Verne [7] and Verne and Bratteteig [8]. The tasks represented by these authors concern the character of the tasks themselves, i.e., how the work is carried out, either manually or through automation. The authors categorized the characteristics of these tasks as new tasks, residual tasks, automated tasks, redundant tasks, tasks inside the automation, and tasks outside the automation. Moreover, a well-known assumption in CSCW is that 'work is socially organized and cooperative' and requires tacit knowledge about the context and its specific work practice [29]. Much of cooperative work is about coordinating and negotiating physically and temporally distributed work amongst the actors. Besides the character of the tasks given by this model, we also identified other dimensions of tasks. The visualizations in Figure 3 and Figure 4 indicate two new dimensions, temporality and spatial distribution, which we explain next. The new dimensions are then illustrated in Table 4.
  • Temporality of Tasks
We identified linear tasks, parallel tasks, and sequential tasks. Linear tasks refer to tasks that are performed either by the human or by the robot, and to the order in which these tasks are performed; they belong to the same line of work and are usually performed by a single actor. Examples for the human are when the user prepares the built environment before starting the robot (Tu1), when the human starts the robot (Tu2), and when the human presses the stop button so that the robot returns to its base station (Tu5). Other linear tasks are performed by the robot, such as when the robot starts (Tr1), when the robot runs (Tr2), when the robot returns to its base station (Tr3), and when the robot stops (Tr4). Linear tasks are the desired type of task for a smooth flow of the work operations of each of the actors.
Parallel tasks refer to tasks that are carried out by the human and the robot actors in parallel, at the same time. One example is when the human needs to undertake articulation work, such as removing cables or furniture, while the robot is running.
Sequential tasks refer to tasks that are carried out immediately after one another. These tasks may or may not belong to the same line of work. Examples of sequential human tasks include when: the user prepares the built environment (Tu1), the user charges the robot before being able to run it (I2), the user switches on the robot (Tu2), and the user presses the home button to stop the robot and return it to its base station (Tu5).
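The three temporal relations can be made concrete in a small sketch. The task codes (Tu1, Tu2, Tr2, and so on) follow the paper's notation, but the `Task` structure and all timings are our own illustrative assumptions, since the study did not measure task durations.

```python
from dataclasses import dataclass

# Task codes (Tu*, Tr*, I*) follow the paper's notation; the Task
# structure and all timings are illustrative assumptions.
@dataclass
class Task:
    code: str     # e.g., "Tu1" (human task) or "Tr2" (robot task)
    actor: str    # "human" or "robot"
    start: float  # assumed time units
    end: float

def are_parallel(a: Task, b: Task) -> bool:
    """Parallel tasks: carried out by different actors, overlapping in time."""
    return a.actor != b.actor and a.start < b.end and b.start < a.end

def are_sequential(a: Task, b: Task) -> bool:
    """Sequential tasks: one starts immediately after the other ends,
    whether or not they belong to the same line of work."""
    return a.end == b.start or b.end == a.start

tu1 = Task("Tu1", "human", 0, 2)  # prepare the built environment
tu2 = Task("Tu2", "human", 2, 3)  # start the robot
tr2 = Task("Tr2", "robot", 3, 8)  # robot runs
art = Task("I", "human", 4, 5)    # remove cables while the robot runs
```

Under these assumed timings, preparing the environment and starting the robot are sequential human tasks, while the articulation work runs parallel to the robot's cleaning.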
  • Spatial Distribution of Tasks
The spatial distribution of tasks in a shared physical environment is specific and unique to contexts where a robot or a semi-autonomous device is introduced in the home, compared to tasks distributed in a virtual environment, like those discussed in relation to the Norwegian automation of the tax system in Verne [7,8]. However, as shown in this study, the robot may exceed the boundaries of the navigated physical environment. Similar situations were presented in the work of Verne [23], who illustrated the human's adaptation to the use of a lawnmower robot.
At the same time, we can also talk about a distribution of tasks that challenges and crosses geographical space, i.e., a distributed spatiality of tasks. As we have seen in the earlier example, participants are informed about the robot's deviations (e.g., the robot being stuck) through an app. The physical and geographical location of the human may then be remote, as in the example from our findings where a participant chose to run the robot while he was not at home. When the human is required to act upon a request from the robot sent via an app (the yellow touchpoint in Figure 4), the human may not be able to act immediately, so we cannot assume the action upon the task is immediate.

6.3.2. The Framework

Based on the empirical findings, and taking our departure point in the theoretical framework from Verne [7] and Verne and Bratteteig [8] applied to the case presented in this paper, we developed the framework further. The new framework addresses semi-autonomous robots, which can move autonomously in space, compared to the static interfaces addressed in Verne [7] and Verne and Bratteteig [8]. To their types of tasks, we have added the temporal and spatial dimensions. Based on the examples presented earlier, we can talk about a relationship between the temporality and spatial distribution of tasks: when the human actor is remote, i.e., does not share the same space with the robot actor, the human cannot intervene and facilitate the robot's work to ensure its efficacy. Finally, we have represented the framework in Figure 5 below.
To illustrate the framework, we have mapped some examples from our empirical work to it (Table 4). The framework can, however, also be applied to other types of settings and other types of robots.
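As a reading aid, the framework's dimensions can be sketched as a small data structure. The dimension values are taken from the text (the task characters from Verne and from Verne and Bratteteig, plus the temporal and spatial dimensions introduced above); the class itself and the example classification are illustrative, not part of the paper's artifacts.

```python
from dataclasses import dataclass

# Dimension values from the text; the class and example are illustrative.
CHARACTER = {"new", "residual", "automated", "redundant",
             "inside_automation", "outside_automation"}
TEMPORALITY = {"linear", "parallel", "sequential"}
SPATIALITY = {"shared_space", "distributed_space"}

@dataclass
class ClassifiedTask:
    description: str
    actor: str        # "human" or "robot"
    character: str    # task character (Verne; Verne and Bratteteig)
    temporality: str
    spatiality: str

    def __post_init__(self):
        # Reject values outside the framework's dimensions.
        assert self.character in CHARACTER
        assert self.temporality in TEMPORALITY
        assert self.spatiality in SPATIALITY

# One possible reading of a finding: removing cables while the robot runs
# is a new, parallel task carried out in the shared home space.
example = ClassifiedTask("remove cables while the robot runs",
                         "human", "new", "parallel", "shared_space")
```

Classifying each observed task along these three dimensions is essentially what Table 4 does in prose.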

7. Discussion

This section presents first some reflections on the method and the setting of the study and thereafter discusses the proposed framework concerning the division of work between humans and robots and human-robot cooperation.

7.1. Reflections on the Method and Setting

This study adopted a qualitative interpretive approach carried out in a domestic setting. The data collected in this study come from natural environments, i.e., the real homes of our participants. The home, as a shared physical space and a dynamic environment, seems to create unpredictable conditions and unforeseen contingencies, and makes room for fluid situations that may occur unexpectedly in a joint human-robot work activity. This paper has investigated the work tasks generated by introducing a semi-autonomous robot in the home and the division of work between the human and the robot actors. The robot employed in this study was a semi-autonomous vacuum cleaner. The purpose of employing such a robot was instrumental, similar to other studies (see [3,4,5,16]). However, many of the previous studies conducted with robots were carried out either in simulation environments [20] or in mock-up labs, and several studies have argued for more studies of robots 'in the wild' [30,31]. Some other studies based their data only on an online survey on the use of robots in the home [14]. While other studies have offered a good overview of the types of robotic devices used in the home and the 'roombarization' process, these did not allow the researchers to become immersed in the participants' homes and their complex environments [14]. Compared to previous studies, our naturalistic approach allowed us to immerse ourselves in the homes of the participants and to illustrate the complexity of a real home, factors that are otherwise hard to extract from a virtual or mock-up environment [13]. Similar to the findings of previous studies carried out with robots in the home [4], this study confirms that the current built home environment is not adequate for a moving robot. Finally, the qualitative approach adopted in this study revealed the complexity of a home and many of the human actors' work tasks that might otherwise remain invisible to stakeholders.

7.2. Reflections on the Proposed Framework for Division of Work between Humans and Robots

This paper addresses the following research question: what are the types of work tasks that are generated by a robot, a semi-autonomous moving entity, in the home? This research question was answered by carrying out qualitative research that informed us about different work tasks carried out by the human and by the robot actor. This was exemplified through concrete situations experienced by our participants in their interaction with the robot. This led us to the proposed framework on the division of work between humans and robots (Figure 5). The framework was anchored in the theoretical model proposed by Verne [7] and Verne and Bratteteig [8], and our empirical data. The framework illustrates new dimensions of tasks, such as temporality and spatiality of work tasks. But when is it useful, and what are its design implications for human-robot cooperation? We cover both these aspects next.

7.2.1. When Is the Framework Useful and Relevant?

The proposed framework can be used to analyze the division of work and the types of work tasks between a human and a robot. Outside of home settings, it can be applied, for instance, to hospital settings or to Mixed Reality settings for designing robot work tasks. We illustrate these two examples below.
  • Hospital setting scenario: Using the framework to plan the division of work between human and robot.
The study by Oskarsen [32] described automated guided vehicles (AGVs): robots used in hospital settings for transporting goods and medicines, navigating along dedicated magnetic paths. The robots were considered actors in the hospital's cooperative ensemble, automating some of the hospital work. Amongst the key findings was that pre-, peri-, and post-facilitation from the human side should be undertaken to accommodate the robot before, during, and after its navigation. Based on the pre-, peri-, and post-facilitation framework from Soma et al. [18], the study limited itself to the temporal dimension of the work tasks carried out by the human, without discussing in detail the different types of work tasks that the human and the robot have to carry out. For instance, amongst the study's findings was that the human had to accommodate the robot by making changes to the navigation environment, but also to the organization of the work tasks. Another key finding was that the robots were not designed with cooperative work in a dynamic workplace environment in mind. Moreover, the hospital employed three full-time workers to support the robots' work, and two AGV technicians had to test the robots regularly and check them for technical errors.
Thus, the framework proposed in this study could be applied to this scenario. The benefit of applying the framework is that the work tasks can be more easily identified and classified based on their type and spatiality, not only their temporality in the form of pre-, peri-, and post-facilitation. Once the types of work tasks, and whether they belong to the human or the robot actor, have been identified and classified, it becomes easier to see which of the human's work tasks can be moved to the robot. This implies improving the design of the robot so that, gradually, some of the human tasks can be removed.
  • Mixed Reality remote-controlled robot scenario: Using the framework to plan the division of work between human and robot.
The Eve robot is a research robot platform [33] that can potentially be remote-controlled while it is in the home of an elderly person as part of their home-care services, or in a hospital setting. A potential future scenario is that the robot is remote-controlled through Mixed Reality. Still in a research phase, the robot can currently be used in Virtual Reality simulation environments to plan and design its work tasks.
Considering the scenario of the use of Eve in the homecare services or a hospital setting, several actors are involved: the care-recipient, formal and informal care-givers, technical staff, and the robot itself. The proposed framework in this paper can support the researchers using Eve as a research platform to plan and design how the work should be automated: which work tasks should be carried out by whom, e.g., the care-recipient, the formal or informal caregiver, the technical staff, or the robot, when, and where.

7.2.2. Design Implications: From Interaction to Cooperation with a Robot

Some human-robot interaction (HRI) researchers have investigated collaboration between humans and robots (see the work of Hoffman [34]). Collaboration with robots in HRI is seen as performing what we would perhaps consider small tasks in CSCW, e.g., a robot bringing a cup or transporting things from A to B, aiming for joint fluid activity [34].
HRI is currently locked into human-robot interaction studies, whereas CSCW is currently limited to studying cooperative work arrangements between humans using things. Cooperation per se is understood as a form of co-operating, a joint operation where the entities (individuals or objects) work together towards the same goal, purpose, or effect. In the field of CSCW, we talk mostly about cooperative work via computers, independently of the current or future technology [35] (p. 10). Schmidt and Bannon [35] tried, through their work, to set out a framework for the field, which, according to them, 'should be concerned with the support requirements of cooperative work arrangements' (p. 7). However, in the early 1990s, when that study [35] was published, technologies such as computers and robots were considered artifacts without autonomy. At the same time, there are now situations where robots can directly or indirectly delegate work tasks to humans, machines can reconfigure themselves, or (chat)bots can delegate tasks to humans when the tasks become too complex to be solved by the machines alone; see, for instance, the work presented at CHI '19 by Grudin and Jacques [36]. Grudin [37] also drew attention to this vital debate.
We are, in the end, interested in how to make human-robot cooperation smoother, without generating residual tasks, redundant tasks, tasks outside of the automation, or completely new work tasks for the human. The idea of designing robots to be used within our homes, or for that matter outside of them, is to automate human work. Inevitably, some new tasks will be generated, some carried out by the human, others by the robot. However, the purpose of having a robot do human work is to decrease the amount of work previously carried out by the human and to free up time for the human to carry out other tasks of his or her choice while the robot is carrying out its work. As this study has also shown, this is not yet the case: the human often needs to carry out residual tasks, redundant tasks, tasks outside the automation, and even new tasks to make the robot's work possible. While Ajoudani et al. [22] argued that the robot's hardware components often limit the types of interaction and the level of intelligence of a robot, they also argued that the intermediate interfaces between the human and the robot and the control or interaction modalities play a role. The authors argue that feed-forward and feedback communication can address even complex interaction scenarios well. However, neither the robot's feedback, in the form of audio, visual, or motion feedback, nor its feed-forward was successfully designed in our case, as the journeys of both actors were often interrupted by the robot's deviations. This leads us back to a discussion of the design of robots. As Suchman [26] pointed out, "the goal of the design is that the artifact should be self-evident; therefore the problem of deciphering an artifact defines the problem of the designer as well" (pp. 14–15). How, then, can we design for cooperation with a robot?
While some attempts to describe human-supported robot work and robot-supported cooperative work are discussed in several studies [32,38,39], this question remains unanswered. Although it is likely to remain so for some time, we have proposed a framework on the division of work tasks between humans and robots. We hope that this can bring us a step further in the design of robot work tasks; as we have shown through the illustrated examples and this empirical study, the framework can be useful and relevant for planning work tasks and designing robots.

8. Summary and Conclusions

In this paper, we have analyzed the division of work in a home work task, i.e., cleaning, carried out by two actors: a human and a semi-autonomous robot. Our main concern, however, was to consider if and how robots can be integrated within the home, and which work tasks accompany the automation of work. The paper was grounded in the concept of tasks as defined in the existing CSCW literature and in the model presented by Verne [7] and Verne and Bratteteig [8], covering: residual tasks, redundant tasks, tasks within the automation, tasks outside the automation, tasks generated by the automation, and new tasks. Analyzing the concepts of work tasks and work division between humans and robots through the lens of CSCW helped us better understand the potential challenges that may arise with the introduction of a robot in the home. The research question that guided the paper was: what are the types of work tasks that are generated by a robot, a semi-autonomous moving entity, in the home? We analyzed the types of tasks carried out by the human and the robot, respectively. As a result of this work, we proposed a framework on the division of work between humans and robots. The framework resulted from the current literature, an existing theoretical model, and our empirical findings. It includes new dimensions of work tasks, namely the temporal and spatial dimensions. These two dimensions exceed the boundaries of a desktop system and are relevant and useful when talking about interaction and cooperation with a robot in a shared or distributed physical space. Specifically, the framework is relevant for identifying, understanding, planning, visualizing, and designing work tasks in a human-robot division of work setting.

9. Future Work

Finally, two areas are relevant for future work: the invisible work of the human, and different degrees of automation. First, it would be interesting to focus solely on the work performed by the human and explore it through the analytical CSCW concept of invisible work, including routine and non-routine work. Second, it would also be interesting to explore Cummings' [40] 10 levels of automation, and more recent work on this topic, concerning new forms of human-robot automation. For instance, at level (1), the computer offers no assistance and the human takes all the decisions, whereas at level (5), the computer acts only if the human approves. The highest level, level (10), is described as the computer making its own decisions and acting autonomously, ignoring the human. One could also explore different degrees of these levels of automation. For instance, new machines can intelligently reconfigure themselves with the help of AI algorithms, and robots or (chat)bots can delegate tasks to humans [36]. We may consider the latter as being above level (10) of automation, while we can reasonably place the robot vacuum cleaner below level (10). While such degrees of automation are not discussed in the existing research, placing different machines on a continuum of automation levels, describing different degrees of automation, might also help us talk about degrees of cooperation with these machines. This could be a potential area of interest for future work.
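The continuum of automation levels discussed above can be sketched as an ordered scale. Only the levels the text actually describes are annotated; the intermediate levels are deliberately left blank rather than guessed, and the comparison function is an illustrative assumption.

```python
# Sketch of the 10-point automation scale discussed above (after
# Cummings [40]). Only the levels described in the text are annotated;
# intermediate levels are deliberately omitted rather than guessed.
AUTOMATION_LEVELS = {
    1: "computer offers no assistance; the human takes all decisions",
    5: "computer acts only if the human approves",
    10: "computer decides and acts autonomously, ignoring the human",
}

def more_autonomous(level_a: int, level_b: int) -> bool:
    """On this scale, a higher level means more machine autonomy.
    E.g., a task-delegating (chat)bot would arguably sit above a robot
    vacuum cleaner on the continuum sketched in the text."""
    return level_a > level_b
```

Placing concrete machines on such a scale, as the text proposes, would then amount to assigning each a level and comparing them.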

Author Contributions

Conceptualization, D.S.; methodology, D.S.; validation, D.S.; formal analysis, D.S., J.H.; investigation, D.S., J.H.; resources, D.S., J.H., and J.T.; data curation, D.S.; writing—original draft preparation, D.S.; writing—review and editing, D.S., with comments from J.T., J.H., Z.P.; visualization, D.S.; supervision, J.H., and Z.P.; project administration, D.S., J.T., J.H.; funding acquisition, J.T. and J.H. All authors approved the final version of this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Research Council of Norway (RCN), IKTPLUSS Program, grant number 247697, project name Multimodal Elderly Care Systems (MECS).

Acknowledgments

Our thanks go to the Research Council of Norway (RCN) IKTPLUSS Program for funding this project (Project Grant Agreement no. 247697), to our colleagues, and to the participants. Finally, our thanks go to the MTI reviewers and editors for taking the time to read drafts of the paper and for their constructive comments and advice on how to improve it.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Baltes, P.B.; Smith, J. New frontiers in the future of aging: From successful aging of the young old to the dilemmas of the fourth age. Gerontology 2003, 49, 123–135.
2. Field, D.; Minkler, M. Continuity and change in social support between young-old and old-old or very-old age. J. Gerontol. 1988, 43, 100–106.
3. Forlizzi, J.; DiSalvo, C. Service Robots in the Domestic Environment: A Study of the Roomba Vacuum in the Home. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, New York, NY, USA, 2–4 March 2006; pp. 258–265.
4. Forlizzi, J. Product Ecologies: Understanding the Context of Use Surrounding Products. Ph.D. Thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA, 2007.
5. Saplacan, D.; Herstad, J. An Explorative Study on Motion as Feedback: Using Semi-Autonomous Robots in Domestic Settings. Int. J. Adv. Softw. 2019, 12, 23.
6. Saplacan, D.; Herstad, J.; Pajalic, Z. An analysis of independent living elderly's views on robots: A descriptive study from the Norwegian context. In Proceedings of the International Conference on Advances in Computer-Human Interactions (ACHI), IARIA Conferences, Valencia, Spain, 21–25 November 2020.
7. Verne, G. "The Winners Are Those Who Have Used the Old Paper Form": On Citizens and Automated Public Services; University of Oslo: Oslo, Norway, 2015.
8. Verne, G.; Bratteteig, T. Do-it-yourself services and work-like chores: On civic duties and digital public services. Pers. Ubiquitous Comput. 2016, 20, 517–532.
9. Strauss, A. Work and the Division of Labor. Sociol. Q. 1985, 26, 1–19.
10. Carstensen, P.H.; Sørensen, C. From the social to the systematic. Comput. Support. Coop. Work CSCW 1996, 5, 387–413.
11. Oxford English Dictionary. "Actor, n." In OED Online; Oxford University Press: Oxford, UK, 2019.
12. Gasser, L. The Integration of Computing and Routine Work. ACM Trans. Inf. Syst. 1986, 4, 205–225.
13. Cha, E.; Forlizzi, J.; Srinivasa, S.S. Robots in the Home: Qualitative and Quantitative Insights into Kitchen Organization. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 2–5 March 2015; pp. 319–326.
14. Sung, J.-Y.; Grinter, R.E.; Christensen, H.I.; Guo, L. Housewives or technophiles? Understanding domestic robot owners. In Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; pp. 129–136.
15. Pantofaru, C.; Takayama, L.; Foote, T.; Soto, B. Exploring the Role of Robots in Home Organization. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 5–8 March 2012; pp. 327–334.
16. Forlizzi, J. How Robotic Products Become Social Products: An Ethnographic Study of Cleaning in the Home. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 8–11 March 2007; pp. 129–136.
17. Sung, J.; Grinter, R.E.; Christensen, H.I. Domestic Robot Ecology. Int. J. Soc. Robot. 2010, 2, 417–429.
18. Soma, R.; Søyseth, V.D.; Søyland, M.; Schulz, T.W. Facilitating Robots at Home: A Framework for Understanding Robot Facilitation. In Proceedings of the ACHI 2018: The Eleventh International Conference on Advances in Computer-Human Interactions, Rome, Italy, 25–29 March 2018; pp. 1–6. Available online: https://www.thinkmind.org/index.php?view=article&articleid=achi_2018_1_10_20085 (accessed on 6 March 2019).
19. Teaming Up with Robots: An IMOI (Inputs-Mediators-Outputs-Inputs) Framework of Human-Robot Teamwork. Available online: https://deepblue.lib.umich.edu/handle/2027.42/138192 (accessed on 1 July 2020).
20. Ijtsma, M.; Ye, S.; Feigh, K.; Pritchett, A. Simulating Human-Robot Teamwork Dynamics for Evaluation of Work Strategies in Human-Robot Teams. In Proceedings of the 20th International Symposium on Aviation Psychology, Dayton, OH, USA, 7–10 May 2019; pp. 103–108.
21. Lee, C.P.; Paine, D. From The Matrix to a Model of Coordinated Action (MoCA): A Conceptual Framework of and for CSCW. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada, 14–18 March 2015; pp. 179–194.
22. Ajoudani, A.; Zanchettin, A.M.; Ivaldi, S.; Albu-Schäffer, A.; Kosuge, K.; Khatib, O. Progress and prospects of the human–robot collaboration. Auton. Robot. 2018, 42, 957–975.
23. Verne, G.B. Adapting to a Robot: Adapting Gardening and the Garden to fit a Robot Lawn Mower. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 34–42.
24. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101.
25. Halvorsrud, R.; Kvale, K.; Følstad, A. Improving service quality through customer journey analysis. J. Serv. Theory Pract. 2016, 26, 840–867.
26. Suchman, L. Plans and Situated Actions: The Problem of Human-Machine Communication; Cambridge University Press: Cambridge, UK, 1987.
27. Saplacan, D.; Herstad, J. A Quadratic Anthropocentric Perspective on Feedback: Using Proxemics as a Framework. In Proceedings of the British HCI 2017, Sunderland, UK, 3 July 2017. Available online: http://hci2017.bcs.org/wp-content/uploads/46.pdf (accessed on 19 July 2017).
28. Saplacan, D.; Herstad, J. Understanding robot motion in domestic settings. In Proceedings of the 9th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, IEEE Xplore, Oslo, Norway, 19–22 August 2019.
29. Halverson, C.A.; Ellis, J.B.; Danis, C.; Kellogg, W.A. Designing Task Visualizations to Support the Coordination of Work in Software Development. In Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work, New York, NY, USA, 4–8 November 2006; pp. 39–48.
30. Jung, M.; Hinds, P. Robots in the Wild: A Time for More Robust Theories of Human-Robot Interaction. ACM Trans. Hum. Robot Interact. 2018, 7, 2:1–2:5.
31. Sung, J.-Y.; Christensen, H.I.; Grinter, R. Robots in the wild: Understanding long-term use. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, San Diego, CA, USA, 11–13 March 2009; pp. 45–52.
32. Oskarsen, J.S. Human-Supported Robot Work. Master's Thesis, Department of Informatics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway, 2018.
33. Halodi Robotics. Available online: https://www.halodi.com (accessed on 12 July 2020).
34. Hoffman, G. Designing Fluent Human-Robot Collaboration. In Proceedings of the 3rd International Conference on Human-Agent Interaction, Daegu, Korea, 21–24 October 2015; p. 1.
35. Schmidt, K.; Bannon, L. Taking CSCW seriously. Comput. Support. Coop. Work CSCW 1992, 1, 7–40.
36. Grudin, J.; Jacques, R. Chatbots, Humbots, and the Quest for Artificial General Intelligence. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 209:1–209:11.
37. Grudin, J. From Tool to Partner: The Evolution of Human-Computer Interaction. In Proceedings of the Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. C15:1–C15:3.
38. Kaindl, H.; Popp, R.; Raneburger, D.; Ertl, D.; Falb, J.; Szép, A.; Bogdan, C. Robot-Supported Cooperative Work: A Shared-Shopping Scenario. In Proceedings of the 2011 44th Hawaii International Conference on System Sciences, 4–7 January 2011; pp. 1–10.
39. Yamazaki, K.; Kawashima, M.; Kuno, Y.; Akiya, N.; Burdelski, M.; Yamazaki, A.; Kuzuoka, H. Prior-To-Request and Request Behaviors within Elderly Day Care: Implications for Developing Service Robots for Use in Multiparty Settings; Springer: London, UK, 2007.
40. Cummings, M. Automation Bias in Intelligent Time Critical Decision Support Systems. In Proceedings of the AIAA 1st Intelligent Systems Technical Conference, American Institute of Aeronautics and Astronautics, Chicago, IL, USA, 20–22 September 2004.
Figure 1. Photo from the data analysis—collating codes into sub-categories.
Figure 2. Final themes that emerged from the data analysis.
Figure 3. User journey when using an ordinary device (Situation 1).
Figure 4. Division of work between human and robot in a vacuum cleaning work activity (Situation 2).
Figure 5. Framework: division of work tasks between humans and robots.
Table 1. Types of tasks (based on Verne, 2015 [7]; Verne and Bratteteig, 2016 [8]).

Type of Task | Task Description
New tasks | Tasks that arise as a result of the automation and that did not exist before it. These tasks usually occur when errors or inconsistencies are encountered.
Residual tasks | Tasks that still need to be performed outside the automation, usually manually.
Automated tasks | Tasks that are automated.
Redundant tasks | Tasks that can be done both through the automation and manually.
Tasks inside the automation | Tasks generated with the automation that lie inside the automation.
Tasks outside the automation | Tasks generated by the automation, but which lie outside it.
Table 2. Overview of the data collected from non-elders [5].

Data Collection Methods—Non-Elders
# | Timeframe | Documentation | Robot Used
1 | One week | Yes. Diary notes, seven posts (one per day), ca. 4.5 A4 pages, analog format, 28 photos | Neato
2 | Ca. two weeks | Yes. Three A4 pages of notes, digital format, 4 photos enclosed | Neato
3 | Ca. one week | Yes. Short notes on the strengths and weaknesses of using such a robot, digital format | iRobot Roomba
4 | One week | Yes. One page of notes, digital format | Samsung PowerBot
5 | Ca. one week | Yes. Half a page of written notes on strengths and weaknesses, digital format | Neato
6 | Ca. one month | Yes. Four pages of written notes, 22 posts, digital format | Neato
7 | Ca. one month | Yes. Ca. 19 A4 pages of written notes, analog format | Neato
Table 3. Overview of the data collected from the elders [5].

Data Collection Methods—Elderly
# | Gender (Female F, Male M) | Interview | Elderly's Diary Notes | Author's Notes (SD) | Photos Taken by the Researchers | Details about the Robot Used, Any Assistive Technologies Used, and Level of Information Technology Literacy
1 | F | Ca. 1 h, audio-recorded pilot interview, transcribed verbatim (SD), and ca. 1 h 45 min of untranscribed audio recording from the installation of the robot | Yes. Ca. 5 A4 pages, analog format | Yes. Ca. 2 A4 pages | Yes, 36 photos | iRobot Roomba; 87 years old; walking chair; did not use the app
2 | F | Ca. 40 min, audio-recorded, transcribed verbatim (SD) | Yes. Ca. 3 A4 pages of notes, analog format | Yes. Ca. 2 A4 pages | Yes, 4 photos | iRobot Roomba; walking chair; a necklace alarm that she does not wear; high interest in technology; used the app; has a smartphone
3 | M | Ca. 25 min, audio-recorded, transcribed verbatim (SD) | Yes. One letter-size page, analog format, short notes | Yes. Ca. 4 letter-size pages | Yes, 10 photos | Neato; wheelchair; not interested in technology; did not use the app; has a wearable safety alarm
4 | F | Ca. 33 min, audio-recorded, transcribed verbatim (SD) | Yes. One A4 page, analog format | Yes. Ca. 2 A4 pages | Yes, 36 photos | iRobot Roomba; wheelchair; interested in technology; did not use the app; does not have a smartphone; has a wearable safety alarm
5 | F | Ca. 45 min, audio-recorded, transcribed verbatim (SD) | Yes. One letter-size page, analog format | Not available | Yes, 13 photos | Walker; did not use the app; not interested in technology; does not have a smartphone; has a wearable safety alarm
6 | F | Ca. 43 min, audio-recorded, transcribed verbatim (SD) | Yes. Four letter-size pages, analog format | Yes. Ca. 1 letter-size page | Yes, 16 photos | Interested in technology; no walker; wanted to use the app but gave up; does not have a wearable safety alarm
Table 4. Exemplified framework on the division of work tasks between humans and robots, including the spatial and temporal dimensions.

Task Dimension | Type of Task | When the Human Actor Is Using a Non-Moving Actor (N/A = Not Available) | When a Robot Is Introduced in a Physical Environment
Tasks that come with automation (based on Verne, 2015 [7], and Verne and Bratteteig, 2016 [8]) | Residual tasks | Yes. The human needs to do some manual work tasks. | Yes. The human needs to clean some of the areas that the robot did not reach.
| Redundant tasks | N/A | Yes. The human needs to start the robot through direct (e.g., pushing the button) or remote (e.g., through the app) interaction.
| Tasks within the automation | N/A | Yes. The robot gives audio or visual feedback to the human.
| Tasks outside the automation and new tasks | Yes. | Yes. The human chooses to move the robot, or to remove obstacles, without the robot indicating it.
| Tasks generated with the automation and new tasks | N/A | Yes. The human needs to charge the robot, to lift it from one place to another when it gets stuck, and to bring it back when it "escapes".
Temporality of tasks | Sequential | Yes. | Yes, partially. Some sequential tasks are available for each of the actors. When the tasks of one actor are interrupted or paused, the other actor usually takes over.
| Parallel | No. The device itself cannot perform tasks on its own; however, the human can perform several tasks at the same time. | Yes. The human and the robot can perform tasks in parallel.
| Linear | Yes. The device is controlled by the human. | Yes. Both the human and the robot can perform linear tasks; however, linear tasks are often interrupted.
Spatiality of tasks | Spatial tasks in shared spatiality | Yes. The human and the device share the space. | Yes. Both actors can share the space and perform different tasks at the same time.
| Spatial tasks in distributed spatiality | No. The human and the device cannot be in two different places and work on a joint task. | Yes. The robot can perform tasks remotely, while the human can control it or give it autonomy through an app that can be used remotely.
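For readers who wish to apply the framework analytically (e.g., when logging or planning a division of work tasks), the dimensions in Table 4 can be expressed as a small data model. The sketch below is purely illustrative and not part of the paper's method; all class and field names (WorkTask, TaskType, and so on) are our own hypothetical choices, mapping the task types from Verne (2015) and Verne and Bratteteig (2016) and the temporal and spatial dimensions onto enumerations.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Actor(Enum):
    HUMAN = auto()
    ROBOT = auto()

class TaskType(Enum):
    # Task types based on Verne (2015) and Verne and Bratteteig (2016)
    NEW = auto()                 # generated by the automation, did not exist before
    RESIDUAL = auto()            # still performed manually, outside the automation
    AUTOMATED = auto()           # performed by the automation itself
    REDUNDANT = auto()           # can be done both automatically and manually
    INSIDE_AUTOMATION = auto()   # e.g., the robot giving audio/visual feedback
    OUTSIDE_AUTOMATION = auto()  # e.g., moving the robot or removing obstacles

class Temporality(Enum):
    SEQUENTIAL = auto()
    PARALLEL = auto()
    LINEAR = auto()

class Spatiality(Enum):
    SHARED = auto()       # human and robot work in the same space
    DISTRIBUTED = auto()  # human and robot work from different places

@dataclass
class WorkTask:
    """One cell of the framework: a task classified along all dimensions."""
    description: str
    actor: Actor
    task_type: TaskType
    temporality: Temporality
    spatiality: Spatiality

# Example from Table 4: lifting a stuck robot is a new task, generated
# with the automation, performed by the human in a shared space.
rescue = WorkTask(
    description="Lift the robot when it gets stuck",
    actor=Actor.HUMAN,
    task_type=TaskType.NEW,
    temporality=Temporality.SEQUENTIAL,
    spatiality=Spatiality.SHARED,
)
```

Such a model could, for instance, feed the swim-lane diagrams mentioned in the abstract, with one lane per `Actor` value.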

Saplacan, D.; Herstad, J.; Tørresen, J.; Pajalic, Z. A Framework on Division of Work Tasks between Humans and Robots in the Home. Multimodal Technol. Interact. 2020, 4, 44. https://doi.org/10.3390/mti4030044
