Article

Can Social Robots Make Societies More Human?

by
João Silva Sequeira
Instituto Superior Técnico/Institute for Systems and Robotics, Av. Rovisco Pais 1, 1049-001 Lisbon, Portugal
Information 2018, 9(12), 295; https://doi.org/10.3390/info9120295
Submission received: 12 September 2018 / Revised: 8 November 2018 / Accepted: 15 November 2018 / Published: 22 November 2018
(This article belongs to the Special Issue ROBOETHICS)

Abstract

A major criticism that social robots often face is that their integration in real, human social environments will dehumanize some of the roles currently played by human agents. This criticism implicitly overestimates the social skills of robots, which, despite constant upgrading, are still far from able to overshadow humans. Moreover, it reflects loosely rational fears that robots may overtake humans in the near future. This paper points in a direction opposite to the mainstream, and claims that robots can induce humanizing feelings in humans. In fact, current technological limitations can be managed to induce a perception of social fragility that may lead human agents to reason about the social condition of a robot. Though robot and/or technology phobias may bias the way a social robot is perceived, this reasoning process may contribute to an introspection on the meaning of being social and, potentially, to the humanization of social environments.

1. Introduction

Nowadays, a common topic of debate is that robots, social and asocial, will dehumanize societies. With the demography of many developing countries showing ageing societies, multiple activities require increasing levels of automation. In manufacturing workplaces, the need to improve the robustness and efficiency of the workforce leads to the use of robots [1], sometimes with the argument that it will reduce workplace fatigue (see [2,3,4]). In fact, many workplaces other than manufacturing operate based on sequences of routines and, hence, human stress levels tend to build up naturally. Examples of such professions are call center operators and salespersons. Repeated sales pitches, with the pressure to succeed in selling and the potentially poor interactions with prospective customers, are stress factors (see [5]). These jobs/tasks include stress-generating routines and are, hence, naturally good candidates for replacement with automated/autonomous/intelligent strategies. A key question is whether such stressful routines, which include recognizing and generating emotions and evaluating, on the fly, the information being exchanged between robot and human (e.g., for hidden meanings), can be implemented with currently available technologies.
At home, caring for the elderly and for people with disabilities has already led to a whole new area of research in robotics (the literature and media coverage are extensive; see, for example, [6,7,8,9]). In professional healthcare, robots are being used to help caregivers, remind patients to take their medications, help carry people (see the Robear [10], p. 30), or simply act as companions (see the Paro robot [11]).
With the expansion of caregiving applications, some authors point to ethical dangers, namely in elderly care [12,13], and to a necessity of humanizing healthcare in general [14]. These fears indirectly assign a human quality to social robots, and seem biased as, for example, similar concerns could be raised about smartphones (essentially, in what concerns reasoning and decision-making processes, there is no difference between a social robot and a smartphone running software embedding human personality traits and verbal communication skills). Thus, the aesthetics and the ability to move, which are intrinsic to robots, seem to play a part in these concerns. While having a smart watch/phone remember daily tasks and/or make suggestions tends to be seen as a “product” utility, the same functionality equipped with autonomous motion skills tends to be seen differently.
In addition to ethical fears/dangers, from a social robotics perspective, caring is an activity that poses essentially the same problems as the activities in the context of call centers or selling goods: the interaction between humans and the autonomous agents/robots must emulate the interactions among humans. However, though there is a consensus that caring represents an opportunity to help people [15] (and, simultaneously, to generate business opportunities), it is clear that there is a long way to go before humans feel comfortable having machines take care of their loved ones. Apart from the ethical issues, namely the future dystopia claimed in [13], as of 2009, market analysts recognized that most robot systems were still in the research phase [16]. As of 2017, the robots already on the market were described as responding with stock phrases and unable to sustain a conversation [17].
However, the gap between social robots and humans is likely to decrease rapidly, given the huge resources currently involved and the recent developments in related technologies, e.g., learning. One may conjecture an exponential decrease: the interest in human-robot interaction and related issues has been referred to as increasing exponentially (see, for example, [18,19]), which, assuming an optimistic view of R&D time-to-market, supports the conjecture. As is happening with other autonomous/intelligent technologies, e.g., autonomous cars, as the gap approaches zero, the difficulty of the scientific/engineering problems scales up (see, for instance, the comments by E. Musk in IEEE Spectrum [20], which seem to have been confirmed by the current reality). Still, one of the factors referred to as the cause for the discrepancy between the amount of research resources and the commercial output has been field testing at a limited scale [16]. In recent years, multiple projects have embraced field testing (see the European projects MOnarCH and Robot-Era, and the numerous projects and commercial deployments using the Pepper (see Figure 1) and NAO robots from SoftBank Robotics). However, the aforementioned comment in [17] still applies.
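Purely as an illustrative sketch (not a model from the cited works), the conjecture can be written as $g(t) = g_0 e^{-\lambda t}$, where $g(t)$ is the human-robot skill gap at time $t$, $g_0$ is the current gap, and $\lambda > 0$ is a rate aggregating the effect of R&D resources and time-to-market. In this form, the gap never actually reaches zero, which is consistent with the observation that the remaining problems become harder as $g(t) \to 0$.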
Explicit and implicit forms of communication are still a major issue in social robotics. Though explicit communication, using voice and/or haptic devices, gets the majority of attention from the research community, implicit forms, such as movement, are also extremely important to convey intentions and/or hidden meanings. In fact, if a robot changes its pose in a way that matches that of the humans in a social environment, it is likely to have a better chance of achieving some form of social mingling/integration, as it conveys a primary perception of liveness (see [21]) and predictability.
Unsurprisingly, robots have been considered to instigate all sorts of fears in people. The issue of predictability arises immediately once a human faces a robot. Even though a certain dose of skepticism by people regarding new technologies may be considered natural, social robots have been “credited” (mainly by the media) with skills far beyond realistic expectations, both in what concerns static (not involving movement) and dynamic (involving movement) interactions.
In addition, the role of the media, nudging people to (sometimes) extreme emotional states of mind (such as the fear induced by a hypothetical social domination by robots), prevents a proper perception of the advantages of social robots. Whether the media ethics related to robotics is, generically, beneficial to society seems a question worth some analysis.
Research is frequently oriented towards having robots mimic human skills (see, for instance, [22] and the claim therein that robots need to adapt to human environments). Though this may come naturally from the success of humans as a living species, forcing a mutual understanding, i.e., developing robots targeting specific needs of humans, may hinder the full potential of social robotics.
The paper starts by discussing human fears and prejudice, and the slow acceptance of some technologies by societies, which allows them to reach stable states. Throughout the paper, intermediate conclusions are collected from the different knowledge areas considered. Evidence from out-of-lab experiments is presented. This evidence was obtained spontaneously, i.e., the experiments were not scheduled in advance to test a specific thesis. Instead, the experiments aimed simply at gathering information on interesting situations arising when the robot runs in fully autonomous mode and no expert/technical help is present in the environment.
The main claim of the paper is that social robots can be used to smooth out human behaviors, hence preventing dehumanization, just by showing humans the frailties of their behaviors and how difficult it is to mimic them. This idea of smoothing represents, in fact, the adaptation to different social environmental conditions created by the introduction of robots as de facto living social entities. Increasing the complexity of interactions in small increments does not aim, specifically, at improving the perception of the humans (complex/elaborate interactions may trigger fears of hidden meanings and, hence, an embedded sense of stability may suggest/lead to a refusal of social interactions with robots). However, it is not difficult to accept that the simple presence of a robot in a social environment may expand the way the environment is perceived by the corresponding native humans (an analogy with the Tetris example/argument in [23] is possible: the robot can be identified with a falling Tetris piece that a player, the humans at the scene, manipulates in order to fit into the environment).
Moreover, social robots have the potential to nudge people to seek higher education levels, in order to understand the complexity of integrating a robot in a social environment. Social robots, namely those of average skills, can be configured as communication facilitators/enablers that can change behaviors. Nudges are already being seen as simple solutions for difficult problems (e.g., as an alternative to legislation/taxation [24], p. 9). People with some inclination/curiosity towards technology may find, in the interaction with robots, a good reason to expand their knowledge. Nevertheless, it does not seem plausible that social robots will be a mass solution to increase the efficiency of educational processes.

2. Fearing the Unknown?

Fearing the unknown has been considered a fundamental fear and a key influencer of the neurotic personality trait (see [25], p. 14). The extensive visibility of robots provided by the media essentially creates expectations about robots, but the innate perception of the complexity of human beings is likely to keep the level of knowledge close to unknown.
However, the lack of knowledge can still generate some expectations, namely in what concerns motion. The Tweenbots experiment (www.tweenbots.com) is paradigmatic of the empathy that people may develop towards (i) a non-living and (ii) highly predictable entity, as the packaging induces people not to expect any autonomous motion.
In what concerns empathy towards humans and robots, the authors of [26] show that, to some extent, the mechanisms controlling empathy are similar, i.e., the same areas of the brain are activated when a human observes another human or a robot being mistreated, though the strength of activation is significantly higher in the human situation.
The commercial presence of social robots is rapidly increasing. Around 10,000 Pepper units are already in operation, in roles such as hosting, as shown in Figure 1, requiring empathy- and emotion-recognition abilities. Still, the perception conveyed to people is that the “robot’s spoken responses feel canned, instead of being synthesized in real-time. For now, Pepper can only say some stock phrases …” [17]. However, with such numbers already “living” in social environments, it may happen that people are starting to bring their expectations to a realistic level.
The case of Sanbot (en.sanbot.com), with around 60,000 units already operating in China (see [27]), is also interesting. Allegedly, its IBM Watson conversational skills and Amazon Alexa voice recognition should lead to minimal refusal of, or possibly high acceptance of, the robot in a generic social environment.
However, when operating amid the diversity of cultural backgrounds often found in complex social environments, it may happen that some people do not appreciate interacting with a robot: if it is too smart, it may be difficult to trust, as there may be no quick way of knowing what it is capable of.
Thus, a social robot must exhibit its intelligence convincingly, but also in a careful form, convincing people that there are no hidden meanings/feelings/intentions. Establishing trust is thus a key issue. Moreover, as with humans, accepting errors plays an important part in it; that is, intentional and non-intentional errors require careful management to help convey an adequate notion of trust.
In addition to the fear of the physical actions that social robots can take, the fact that they can record image and sound immediately triggers fears of privacy violation (and, in some situations, legal issues may be at stake). Moreover, even if the hardware and software making up the robots can be guaranteed to be safe, it is difficult to ensure that no third party installs an onboard device (e.g., a micro-camera or sound recorder) that operates independently of the robot (which then simply acts as a transport vehicle for the spying devices).
Of course, similar fears should already exist relative to “smart” technologies. However, creative nudging techniques have often been used to overcome this (e.g., emails and webpages reminding people to check for specific events, pre-installed software apps of specific manufacturers, and catchy interface designs that are non-intrusive and, at times, useful), so it is reasonable to assume that similar techniques can help remove many fears regarding robots. Nudging is known to be a powerful, cost-effective manipulation technique (see [28]). Moreover, designing digital nudges is becoming a structured process that can also be applied in the robotics domain (see, for instance, [29]).

3. Slow Integration to Beat a “Prejudice” against Robots

The main claim of the paper is supported by a number of observations of social robots in real environments (note that adjectives such as “real” or “realistic” to qualify environments are often used with very different hidden meanings; in this paper, a “real environment” is an environment that is not controlled in a laboratory sense), where empathic, even friendly, behaviors from people towards robots can be observed, often conveying positive perceptions. Whether the empathy is simply an expression of curiosity, or due to some other factor, is difficult to assess.
Often, such empathic behaviors show a joyful, theatrical, side of human personality that otherwise would not emerge. In some environments, this theatrical skill may often lighten the emotional stress. Since hope and optimism, which can be brought up from a positive environment, are known to have positive effects on physical and mental health (see, for instance, [30,31]), positive effects induced by social robots can be expected.
Though the aforementioned dehumanization criticism is only natural, reflecting a common fear, even prejudice, in societies that disruptive technologies may bring uncomfortable situations to humans, we argue that a slow integration is the key to success. As empirical as this argument may seem, basic math models seem to support it (see [32]). The rationale behind this argument is that, by allowing systems to stabilize themselves, any subsequent change is likely to produce a smaller disturbance.
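A minimal illustration of this rationale (a sketch, not the models in [32]) is a stable scalar linear system $x_{k+1} = a x_k + u_k$ with $|a| < 1$, where $u_k$ represents disturbances such as the introduction of a robot. If a disturbance is applied only after the state has settled back near equilibrium, the excursion is bounded by that single disturbance, whereas disturbances applied in rapid succession accumulate (with $x_0 = 0$, $x_k = \sum_{j=0}^{k-1} a^{k-1-j} u_j$) and can drive the state much further from equilibrium; slow integration keeps the accumulated term small.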
An indirect argument also supporting social robotics is that avoiding dehumanization of the workplace is currently assumed as an objective by academics (see [33]), entrepreneurs/corporations (see [34]), and legislators (see [35]), as it affects the health of workers and their productivity. Therefore, social robots can play a role by replacing humans in dehumanizing activities. However, this argument, about the advantages of automation and robots, dates back to the early days of robotics. It is thus quasi-paradoxical that, as robots evolve, they stand accused of dehumanizing environments, as in the case of caregiving.
Augmenting devices with robotic limbs, as with the mobile phone augmented with a robotic finger in [36], seems an intermediate step in embedding motion autonomy in complex reasoning/decision-making/interfacing processes. The acceptance of such devices has yet to be assessed, but it is likely that, given their reduced motion skills, ethical arguments are smoothed out when compared with the stereotypical idea of a social robot (a humanoid-like device with autonomous motion and social skills). Launching these devices into public usage may even have a positive nudging effect on the future acceptance of more complex social robots.

4. Humanizing Robotics Technology

Caring for others is a common behavior in humans, though some authors refer to it as a need (see, for instance, [37]) and even as a basic dimension of the human being (see the comment on Heidegger’s work in [38], p. 14). Caring for a robot, a non-living agent (in the biological sense), may require either additional social skills, namely an egalitarian morality for humans and machines, or an understanding of technology possibilities/limitations combined with personality traits leading to the desire of having it run flawlessly.
Technology limitations tend to act in favor of humanization, as they convey a perception of weakness/frailty that may trigger sympathetic feelings. This is precisely a key finding in some ongoing experiments at IST (Instituto Superior Técnico/Institute for Systems and Robotics, Lisbon, Portugal).
The type of social robotic experiments considered in this paper often raises practical issues in what concerns the scientifically valid collection of data. Regular experiments rely (i) on direct observation and recording, in general constrained by privacy regulations which, when duly addressed, easily bias the results, and (ii) on post-processing analysis, i.e., feature extraction pointing to concepts that are often difficult to disambiguate (e.g., emotions) and subsequent statistical processing. In the case of purely observational, non-scheduled experiments, it is necessary either to wait until relevant events are triggered (which may take a long time) or to sample the environment with short-time experiments. The rate of events may thus be small, as in the case of those reported in this paper, which tends to further increase the time necessary for assertive conclusions to be drawn.
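To make the time cost concrete, under the simplifying (and purely illustrative) assumption that spontaneous interaction events follow a Poisson process with rate $\lambda$ events per day, the expected time to observe $n$ events is $n/\lambda$. At, say, one event per week ($\lambda \approx 0.14$/day), gathering even 30 samples, a modest number for statistical purposes, would already require about seven months of observation.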
Figure 2, Figure 3 and Figure 4 show a social experiment with an “mbot” robot (www.monarch-fp7.eu) in a non-lab environment at IST. The experiment aimed at verifying the reaction of ordinary people towards a robot equipped with very basic interaction skills. The space was the lobby of a university building. The audience was formed mainly of engineering students (of heterogeneous curricula) and of people without a technological background, namely administrative and maintenance staff.
The social skills of the robot were limited to (i) asking people for a handshake whenever someone was detected in front of the robot while it was waiting, and (ii) asking people to follow the robot while it navigated between a collection of locations in the environment.
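A minimal sketch of such a two-behavior loop is given below, purely as an illustration of the control structure described above; the robot interface (battery_ok, person_detected_in_front, say, navigate_to, etc.) and the waypoint names are hypothetical placeholders, not the actual MOnarCH software.

    import time

    # Hypothetical locations in the building lobby.
    WAYPOINTS = ["entrance", "lobby_center", "elevator_hall"]
    IDLE_PERIOD_S = 60  # time spent waiting at a spot before moving on

    def interaction_loop(robot):
        """Alternate between a waiting behavior (handshake offers) and a
        patrolling behavior (follow-me requests)."""
        while robot.battery_ok():
            # Behavior (i): while waiting, offer a handshake to anyone
            # detected in front of the robot.
            idle_until = time.time() + IDLE_PERIOD_S
            while time.time() < idle_until:
                if robot.person_detected_in_front():
                    robot.say("Hello! Would you like to shake my hand?")
                    robot.offer_hand()
                    robot.wait_for_handshake(timeout_s=10.0)
                time.sleep(0.5)
            # Behavior (ii): navigate between locations, inviting
            # people to follow.
            for goal in WAYPOINTS:
                robot.say("Please follow me!")
                robot.navigate_to(goal)

Note that both behaviors are open-loop with respect to the person: the robot issues an invitation and proceeds regardless of compliance, which matches the fully autonomous, non-intervened setting of the experiment.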
The robot was fully autonomous; the people managing the experiment stayed out of sight for the entire duration and did not intervene in any way. The duration of the experiment was constrained only by the battery life (approximately 4 h).
This particular type of robot does not usually operate in this type of open public area. Its physical shape is known to most people, but not its social skills, though there is a general understanding that it is a programmable device. Some people knew that it was used in the Robotics labs located in the same building.
People passing by naturally looked at the robot. The photos show common situations observed during the experiment, with people spontaneously initiating an interaction (trying to shake hands and following) with the robot.
People willing to try to interact with the robot are natural candidates to exhibit more theatrical behaviors, as shown in Figure 4.
People’s willingness to interact with the robot and such theatrical behaviors, namely expressions of affection, have the practical effect of humanizing the environment, with the side effect that bystanders seeing people play with the robot get a perception of friendship or, at least, of non-dangerousness from the robot. People naturally tend to make the most out of a situation and to attribute human qualities to artificial entities (behaving accordingly). Also, people are known to react favorably to generic humanized entities (see, for example, [39] on the advantages of humanizing a brand to get maximal response from people/consumers).
Once it is accepted that people are willing to work to humanize environments, advantages in the reduction of stress levels and burnout rates of professionals emerge (see [40] on the relations between stress levels and job satisfaction). Therefore, having people play (possibly beyond simple interactions) with robots may turn out to be a beneficial activity.
The same type of robot is being used in a pediatric ward of a hospital for basic interactions with children and acceptance studies. While the acceptance of the robot by the children was very good, it was a collection of a priori unexpected behaviors by visitors and staff that suggested the thesis in this paper.
Figure 5 shows a spontaneous interaction, triggered by a staff member, in the hospital ward. Even though the interaction capabilities of the robot were limited, as the staff member was well aware, there was an urge to interact with the robot. Other examples have been observed, namely when people in the ward perceive that new skills are being tested on the robot. A frequent example is people tapping the robot lightly on the head while passing by in the corridor; this behavior seems analogous to the light physical contact sometimes observed among people in stable social environments.
In general, there will be a multitude of admissible reasons for such behaviors. One may conjecture an analogy with, for example, the small breaks staff members take during a normal workday, which clearly include escaping social pressures.
The fact that, in both experiments presented, no staff providing technical assistance to the robot was in sight may have influenced the triggering of such behaviors. The presence of the robot in the environment seems to act as a catalyst for such behaviors. However, it must be emphasized that the theatrical behavior observed does not correspond to a bilateral interaction: the robot is not humanly aware in an emotional sense, only in a perceptual/sensory sense.
The examples above merely suggest the thesis of the paper: robots can facilitate the humanization of an environment. The fact that the experiments occurred in real environments reinforces the suggestion. However, a thorough analysis would require long-duration experiments, often not practical and involving significant resources. As studies on professional burnout emphasize the need to increase interventions (see [41]), it is likely that more such situations will be reported.
The possibility that such behaviors by people towards social robots are caused by the mechanisms that regulate empathy from humans towards inanimate objects (see [26,42]) cannot be discarded, i.e., they would be a natural biological response. This may even be a natural mechanism in human biology to keep humans within some moral boundaries. An alternative explanation is that the demanding nature of many healthcare jobs may foster the emergence of urges to establish relations with social robots.
The dynamics of such behaviors may be subject to a novelty effect. This would be analogous to behaviors often seen among humans when a new element enters a social circle, with an initial attention/interest peak followed by a decrease over some period of time, eventually reaching a stationary level. As aforementioned, identifying such dynamics, e.g., the mean time between activations, may require long-run experiments, as, to avoid introducing any biases, these must be spontaneous.
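A simple way of writing this conjectured dynamics (a sketch, not a fitted model) is $I(t) = I_\infty + (I_0 - I_\infty)\, e^{-t/\tau}$, where $I_0$ is the initial interest peak, $I_\infty$ the stationary level, and $\tau$ the decay time constant. Estimating $\tau$ reliably requires observing the environment over several multiples of $\tau$, which is precisely why the long-run, spontaneous experiments mentioned above are needed.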

5. Conclusions

Social robots are part of what is nowadays commonly called the “always on” culture (see [43]), of which smartphones are common agents (de facto transforming their human owners into social robots). The associated risks are clearly identified, namely in what concerns privacy and mental and physical health via so-called “telepressure” [44], which may lead to productivity/performance decreases (even though not all stress can be considered harmful [45]). This leads, naturally, to the current trend against social robots and the fears of dehumanization. However, humans have lived since the dawn of mankind under weak privacy assumptions, and have learned to dissimulate feelings/emotions/thoughts to prevent their misuse by others. This paper goes against the mainstream research direction, and points to the alternative possibility that robots can help humanize.
In a mixed human-robot society, the humans can assume that robots have skills that foster privacy violations (allegedly) and, hence, develop their own skills to fool the robots, much as they do among themselves. This feedback process has the potential to improve the awareness of humans on their own personality processes and recognition of their frailties. This is likely to play a positive role in humanizing societies.
The emergence of negative opinions and phobias is likely, as it is a common behavioral pattern for humans to react against changes (or threats), namely when novel technologies are introduced in a socially stable environment. Technology acceptance models (TAMs) have identified numerous relevant factors (see, for instance, the literature review in [46]); the perceptions of usefulness and ease of use are the core of these models, and were originally proposed as the drivers modulating the intention to use a technology (see [47]). Therefore, in addition to improving people’s awareness of their social condition, nudging them to perceive a potential added value of the technology is likely to boost their awareness of the field.
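Schematically, and in its simplest regression form (details vary across TAM variants), the core of the model in [47] relates the behavioral intention to use a technology, $BI$, to perceived usefulness, $PU$, and perceived ease of use, $PEOU$, as $BI = \beta_1\, PU + \beta_2\, PEOU$, with the weights $\beta_1, \beta_2$ estimated from questionnaire data; later models, such as the UTAUT mentioned below, add moderating factors (e.g., age, experience, subjective norms) on top of this core.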
As noted in [1], lifelong learning will be a valuable skill in the future. This means that people will have to prepare themselves to live under a continuous learning requirement, including learning to live among robots. Even though this process may not be free of incidents, as referred to in [48], robots, automation, and augmentation technologies are here to stay.
Social robots are constantly evolving, supported by the increasing complexity of the technologies/systems they are made of. The number of these systems can easily grow, sometimes because they implement intrinsically complex functions, other times because implementation gets easier if a complex system is divided into subsystems. The bottom line is that the number of interconnected systems is growing, and new features are likely to add new systems to such complex networks, hence increasing the overall complexity. Though the main usage of TAMs is in the domain of information systems, the proximity to social robotics suggests that conclusions can also be drawn about this technology (several other models exist in the literature, e.g., the Unified Theory of Acceptance and Use of Technology model and the Theory of Planned Behavior, accounting, for example, for subjective norms; see, for instance, [49], [50], or [46] for literature reviews).
Two forces are running here in parallel: on one side, people will have increasingly better knowledge about the robots’ skills; on the other side, social robots will tend to be increasingly complex/sophisticated, and their predictability will decrease, i.e., they will be more like humans and less like machines. The effect may be twofold. On one side, there may be a humanizing perception, as humans are often not predictable. Simultaneously, a fear of the unknown and/or some form of phobia against (social) robots may develop and, hence, constrain interactions.
Still, some authors argue that the humanization of robots is a “problem” (see [51]). Though humanization may smooth out integration, a number of problems deserve attention, e.g., how humans will manage their emotions and expectations towards robots and towards other people that must interact with robots, which may require hindering the humanization.
The nudging by the media will, unquestionably, introduce a bias in the perception of social robots by people. Positive news is likely to increase the desire to try them out. Negative news will send the opposite message, e.g., the “firing” of a humanoid robot that was supposed to help the customers of a supermarket but that, in fact, was not of much help. Fictional works, namely by the movie industry, producing believable scenarios, represent, besides the nudging, a form of acceptance test that may promote the introduction of social robots. This, however, is a two-faced tool, as it also exposes the technology’s frailties. From “Robot & Frank” [52] to “Blade Runner” [53], social acceptance has been a concern of fictional movies. In the former example, the focus is put on the robot’s intellect, while in the latter, the appearance and physical resilience/strength of the robots are also relevant factors. When considered jointly, these two fictional examples seem to ignore the uncanny valley paradigm (see [54]), as people do not seem to care significantly about the appearance of the robots, though they are anthropomorphized, implicitly suggesting that people are willing to accept morally challenging technologies, such as social robots, regardless of appearance (societies have simply developed strategies to accommodate technology).

Also, the fact that data is increasingly becoming a commodity implicitly empowers people to opt for alternative ways of living and, most importantly, assigns them the responsibility of self-education, so that they understand the consequences of living in this new age in which robots are becoming omnipresent, and develop the skills to make any necessary adjustments, much as anticipated in fictional worlds such as those referred to above.

Funding

This research was partially funded by European Project FP7-ICT-9-2011-601033-MOnarCH and Portuguese FCT project [UID/EEA/50009/2013].

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Schwartz, J.; Bersin, J.; Bourke, J.; Danna, R.; Jarrett, M.; Knowles-Cutler, A.; Lewis, H.; Pelster, B. The Future of the Workforce—Critical Drivers and Challenges. Available online: https://www2.deloitte.com/content/dam/Deloitte/au/Documents/human-capital/deloitte-au-hc-future-of-workforce-critical-drivers-challenges-220916.pdf (accessed on 20 November 2018).
  2. Rosenberg, M. The Surprising Benefits of Robots in the DC. Available online: https://www.sdcexec.com/home/article/10269419/the-surprising-benefits-of-robots-in-the-dc (accessed on 20 November 2018).
  3. Jeong, S.; Logan, D.E.; Goodwin, M.S.; Graca, S.; O’Connell, B.; Goodenough, H.; Anderson, L.; Stenquist, N.; Fitzpatrick, K.; Zisook, M.; et al. A Social Robot to Mitigate Stress, Anxiety, and Pain in Hospital Pediatric Care. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015. [Google Scholar]
  4. Hurley, A.M.; Kennedy, P.J.; O’Connor, L.; Dinan, T.G.; Cryan, J.F.; Boylan, G.; O’Reilly, B.A. SOS save our surgeons: Stress levels reduced by robotic surgery. J. Gynecol. Surg. 2015, 12, 197–206. [Google Scholar] [CrossRef] [Green Version]
  5. Bursk, E.C. Low-Pressure Selling. Available online: https://hbr.org/2006/07/low-pressure-selling (accessed on 22 November 2018).
  6. Hosseini, S.H.; Goher, K.M. Personal Care Robots for Older Adults: An Overview. Available online: http://www.ccsenet.org/journal/index.php/ass/article/view/64518 (accessed on 22 November 2018).
  7. Chen, T.L.; Bhattacharjee, T.; Beer, J.M.; Ting, L.H.; Hackney, M.E.; Rogers, W.A.; Kemp, C.C. Older adults’ acceptance of a robot for partner dance-based exercise. PLoS ONE 2017, 12, e0182736. [Google Scholar] [CrossRef] [PubMed]
  8. Goher, K.M.; Mansouri, N.; Fadlallah, S.O. Assessment of personal care and medical robots from older adults’ perspective. Robot. Biomim. 2017, 4, 5. [Google Scholar] [CrossRef] [PubMed]
  9. Tajitsu, N. Japanese Automakers Look to Robots to Aid the Elderly. Available online: https://www.scientificamerican.com/article/japanese-automakers-look-to-robots-to-aid-the-elderly/ (accessed on 20 November 2018).
  10. Hardyman, R. Robotics in Medicine; Lucent Press, an Imprint of Greenhaven Publishing LLC: New York, NY, USA, 2018. [Google Scholar]
  11. Wada, K.; Shibata, T.; Musha, T.; Kimura, S. Robot therapy for elders affected by dementia. IEEE Eng. Med. Biol. Mag. 2008, 27, 53–60. [Google Scholar] [CrossRef]
  12. Wachsmuth, I. Robots like Me: Challenges and Ethical Issues in Aged Care. Front. Psychol. 2018, 9, 432. [Google Scholar] [CrossRef] [PubMed]
  13. Sparrow, R. Robots in aged care: A dystopian future? AI Soc. 2016, 31, 445–454. [Google Scholar] [CrossRef]
  14. Arbuckle, G. Humanizing Healthcare Reforms; Jessica Kingsley Publishers: London, UK, 2013. [Google Scholar]
  15. Riek, L. Healthcare Robotics. Commun. ACM 2017, 60, 68–78. [Google Scholar] [CrossRef]
  16. Robotics Business Review Staff. Healthcare Robotics: Current Market Trends and Future Opportunities. Available online: https://www.roboticsbusinessreview.com/health-medical/healthcare-robotics-current-market-trends-and-future-opportunities/ (accessed on 20 November 2018).
  17. Kim, M. Let Robots Handle Your Emotional Burnout at Work. The Way We Work: Labor and Automation. March 2017. Available online: https://howwegettonext.com/let-robots-handle-your-emotional-burnout-at-work-e09babbe81e8 (accessed on 22 November 2018).
  18. Pistono, F. Robots Will Steal Your Job but That’s OK; CreateSpace: Scotts Valley, CA, USA, 2012. [Google Scholar]
  19. Mauldin, J. Here’s How Robots Could Change the World by 2025. Available online: https://www.businessinsider.com/heres-how-robots-could-change-the-world-by-2025-2014-8 (accessed on 20 November 2018).
  20. Ackerman, E. Tesla Working Towards 90 Percent Autonomous Car within Three Years. Available online: https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/tesla-working-towards-90-autonomous-car-within-three-years (accessed on 22 November 2018).
  21. Sequeira, J.; Ferreira, I. Lessons from the MOnarCH Project. In Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2016), Lisbon, Portugal, 29–31 July 2016. [Google Scholar]
  22. Sciutti, A.; Mara, M.; Tagliasco, V.; Sandini, G. Humanizing human-robot interaction: On the importance of mutual understanding. IEEE Technol. Soc. Mag. 2018, 37, 22–29. [Google Scholar] [CrossRef]
  23. Clark, A.; Chalmers, D. The Extended Mind. Analysis 1998, 58, 7–19. [Google Scholar] [CrossRef]
  24. Mutimer, J.; O’Brien, K.P. Nudge Technology and Higher Education. eBook, Signal Vine, Alexandria VA, USA. Available online: https://www.signalvine.com/nudge-technology-and-higher-education (accessed on 20 November 2018).
  25. Carleton, R.N. Fear of the unknown: One fear to rule them all? J. Anxiety Disord. 2016, 42, 5–21. [Google Scholar] [CrossRef] [PubMed]
  26. Rosenthal-Von Der Pütten, A.M.; Schulte, F.P.; Eimler, S.C.; Sobieraj, S.; Hoffmann, L.; Maderwald, S.; Brand, M.; Krämer, N.C. Investigations on empathy towards humans and robots using fMRI. J. Comput. Hum. Behav. 2014, 33, 201–212. [Google Scholar] [CrossRef]
  27. Brant, T. Meet Sanbot, the Watson-Powered Droid, Here to Serve. Available online: https://www.pcmag.com/news/352776/meet-sanbot-the-watson-powered-droid-here-to-serve (accessed on 22 November 2018).
  28. Benartzi, S.; Beshears, J.; Milkman, K.L.; Sunstein, C.; Richard, H.T. Governments Are Trying to Nudge Us into Better Behavior. Is It Working? Available online: https://www.washingtonpost.com/news/wonk/wp/2017/08/11/governments-are-trying-to-nudge-us-into-better-behavior-is-it-working/?noredirect=on&utm_term=.bbffd2e5f71c (accessed on 22 November 2018).
  29. Schneider, C.; Weinmann, M.; vom Brocke, J. Digital Nudging-Influencing Choices by Using Interface Design. Commun. ACM 2018, 61. [Google Scholar] [CrossRef]
  30. Conversano, C.; Rotondo, A.; Lensi, E.; Della Vista, O.; Arpone, F.; Reda, M.A. Optimism and Its Impact on Mental and Physical Well-Being. Clin. Pract. Epidemiol. Ment. Health 2010, 6, 25–29. [Google Scholar] [CrossRef] [PubMed]
  31. Thripathi, I. Optimistic Outlook and Its Relation with Physical and Psychological Symptom Reporting. Int. J. Indian Psychol. 2017, 4, 5–10. [Google Scholar]
  32. Sequeira, J. Humans and Robots: A New Social order in Perspective? In Proceedings of the International Conference on Robot Ethics and Standards (ICRES 2017), Lisbon, Portugal, 20–21 October 2017. [Google Scholar]
  33. Nousiainen, A. Humanizing Workplaces—HR Executives’ Role in Fostering Systems Intelligence in Forerunning Companies. Master’s Thesis, Aalto University, Aalto, Finland, 8 May 2018. [Google Scholar]
  34. Unilever, Why Organizations Must Be More Human If They Are to Deliver Sustained Growth in a Connected World. Available online: https://www.ey.com/Publication/vwLUAssets/ey-being-more-human/$File/ey-being-more-human.pdf (accessed on 22 November 2018).
  35. Crain, M.G.; Kim, P.T.; Selmi, M.L. Work Law: Cases and Materials; LexisNexis Publishers: New York, NY, USA, 2010. [Google Scholar]
  36. Teyssier, M.; Bailly, G.; Pelachaud, C.; Lecolinet, E. MobiLimb: Augmenting Mobile Devices with a Robotic Limb. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, Berlin, Germany, 14 October 2018. [Google Scholar]
  37. Leininger, M.M. Cross-Cultural Hypothetical Functions of Caring and Nursing Care. In Caring: An Essential Human Need; Wayne State University Press: Detroit, MI, USA, 1988. [Google Scholar]
  38. Fry, S.T. The Philosophical Foundations of Caring. In Ethics and Moral Dimensions of Care; Leininger, M.M., Ed.; Wayne State University Press: Detroit, MI, USA, 1990. [Google Scholar]
  39. Calabro, K. Humanizing a Brand: Consumer Relationships Through an Anthropomorphic Lens. Elon J. Undergrad. Res. Commun. 2014, 5, 1–3. [Google Scholar]
  40. Fraser, T.M. Human Stress, Work and Job Satisfaction—A Critical Approach. In Occupational Safety and Health Series; No. 50; International Labour Office: Geneva, Switzerland, 1983. [Google Scholar]
  41. Bagnall, A.; Jones, R.; Akter, H.; Woodall, J.R. Interventions to prevent burnout in high risk individuals: Evidence review. Available online: https://www.gov.uk/government/publications/interventions-to-prevent-burnout-in-high-risk-individuals-evidence-review (accessed on 22 November 2018).
  42. Glausiusz, J. Empathy for Inanimate Objects. Available online: https://theamericanscholar.org/empathy-for-inanimate-objects/#.W_ZB5JoRVPY (accessed on 22 November 2018).
  43. Deal, J.J. Always On, Never Done? Don’t Blame the Smartphone. Available online: https://www.ccl.org/wp-content/uploads/2015/04/AlwaysOn.pdf (accessed on 22 November 2018).
  44. Ammar, J.; Santuzzi, A.; Barber, L. Are You Suffering from Telepressure? Time for the Cure. Available online: https://www.psychologytoday.com/us/blog/the-wide-wide-world-psychology/201504/are-you-suffering-telepressure-time-the-cure (accessed on 22 November 2018).
  45. Steakley, L. How the Stress of Our ‘Always on’ Culture Can Impact Performance, Health and Happiness. Scope+, Stanford Medicine. May 2014. Available online: https://scopeblog.stanford.edu/2014/05/05/how-the-stress-of-our-always-on-culture-can-impact-performance-health-and-happiness/ (accessed on 20 November 2018).
  46. Lai, P.C. The Literature Review of Technology Adoption Models and Theories for the Novelty Technology. J. Inf. Syst. Technol. Manag. 2017, 14, 21–38. [Google Scholar] [CrossRef]
  47. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–339. [Google Scholar] [CrossRef]
  48. Sloman, C.; Thomas, R.J. Humanizing Work through Digital. Accenture Report. 2016. Available online: https://www.accenture.com/t00010101T000000__w__/auen/_acnmedia/Accenture/Conversion-Assets/DotCom/Documents/Global/PDF/Dualpub_9/Accenture-Workforce-Future-Humanizing-Work-Through-Digital.pdf (accessed on 20 November 2018).
  49. Beer, J.M.; Prakash, A.; Mitzner, T.L.; Rogers, W.A. Understanding Robot Acceptance. Technical Report HFA-TR-1103, School of Psychology—Human Factors and Aging Laboratory. Available online: https://smartech.gatech.edu/bitstream/handle/1853/39672/HFA-TR-1103-RobotAcceptance.pdf?sequence=1&isAllowed=y (accessed on 22 November 2018).
  50. Wang, W.-T.; Liu, C.-Y. The Application of the Technology Acceptance Model: A New Way to Evaluate Information System Success. In Proceedings of the 23rd International System Dynamics Conference, Boston, MA, USA, 17–21 July 2005. [Google Scholar]
  51. Robert, L.P. The Growing Problem of Humanizing Robots. Int. Robot. Autom. J. 2017, 3, 43. [Google Scholar] [CrossRef]
  52. Schreier, J. Robot & Frank. Science-Fiction film, directed by Jake Schreier. 2012. Available online: https://en.wikipedia.org/wiki/Robot_%26_Frank (accessed on 22 November 2018).
  53. Scott, R. Blade Runner, Science-Fiction film, directed by Ridley Scott. 1982. Available online: https://en.wikipedia.org/wiki/Blade_Runner (accessed on 22 November 2018).
  54. Mori, M.; MacDorman, K.F.; Kageki, N. The Uncanny Valley. IEEE Robot. Autom. Mag. 2012, 19, 98–100. [Google Scholar] [CrossRef]
Figure 1. Pepper robots acting as hosts in large enterprise stands at the EuroCIS 2017 trade fair for retail technologies in Dusseldorf, Germany.
Figure 2. Exploring the skills of the robot. (a) Person taking the initiative of starting an interaction with the robot; (b) Person looking back to confirm that the robot is not going to continue interacting.
Figure 3. (a) Person trying to initiate an interaction without success (not getting a proper response from the robot); (b) Person initiating a successful interaction (getting a proper response from the robot).
Figure 4. A student asking the robot for marriage.
Figure 5. Hospital staff member spontaneous interaction with the robot.
