Security Aspects of Social Robots in Public Spaces: A Systematic Mapping Study
Abstract
1. Introduction
- Comprehensive literature analysis: This study provides a thorough and systematic review and analysis of the existing literature on the security aspects of SRPS, examining academic papers, reports, and standards across multiple databases. This forms a robust and detailed map of the current knowledge base.
- Identification of key security themes: The study identifies and categorises the key security themes pertinent to SRPS, including physical safety and integrity, data privacy and confidentiality, communication security, ethical considerations, and others. This thematic framework helps to structure the complex and diverse security issues associated with SRPS.
- Assessment of existing standards: The study conducts a detailed assessment of existing safety and security standards, revealing gaps where these standards—often developed for industrial robots—are inadequate for SRPS due to their unique operational environments and challenges.
- Proposal for new security guidelines: Based on the literature analysis and existing standards, this study proposes a set of comprehensive security guidelines specifically tailored to SRPS. These guidelines are designed to be actionable and can be used as a baseline for developing formal security standards for SRPS.
- Insights into cultural and ethical implications: The study sheds light on the broader cultural and ethical implications of deploying social robots in public spaces, fostering awareness of the need for SRPS to respect human rights, privacy, and social norms.
- Highlighting future research directions: This study outlines several future research paths for advancing the field, such as exploring cultural differences in SRPS acceptance, developing standardised testing protocols for these robots, and analysing real-world SRPS deployments.
- Value to stakeholders: The study offers invaluable insights for various stakeholders, including policymakers, roboticists, industry professionals, and the general public. The study provides a foundational resource for policymakers and industry professionals to inform the development of regulations and standards. For roboticists and developers, it offers a clear framework for designing and deploying SRPS with security at the forefront. For the general public, it aims to raise awareness of the potential risks and benefits associated with SRPS.
2. Related Works
2.1. Cybersecurity and Safety of Robots
2.2. Data Privacy Concerns
2.3. Ethical Considerations
2.4. Legal and Regulatory Frameworks
2.5. Human Interaction and Social Acceptance
2.6. Cultural Aspects of SRPS
2.7. Existing Systematic Reviews
2.8. Unique Contribution of This Study
3. Methodology
3.1. Planning Phase
3.1.1. Rationale and Objectives of the SMS
Research Questions
- Group 1: Unfolding Research Trends and Methodologies Concerning Security Aspects of SRPS
- RQ 1.1 How has the research focus on the security aspects of SRPS evolved over time? This question aims to provide a historical overview and trace the development and shifts in focus, offering insights into the trajectory of the research field.
- RQ 1.2 How can insights into the influence and impact of these studies be extracted from their citation network? Understanding the citation network can help identify key studies that have shaped the field, providing an understanding of their relevance and impact.
- RQ 1.3 What methodologies, types of studies, and thematic areas predominantly characterise research on the security aspects of SRPS? By identifying the methodologies and types of studies used, this question seeks to understand the approaches that have been most effective and prevalent in studying this topic.
- Group 2: In-depth Examination of Security Aspects, Normative Guidelines, and Design Principles in SRPS
- RQ 2.1 What are the specific security aspects consistently highlighted in studies on SRPS, and how are they defined? This question seeks to identify and understand the primary security concerns linked to SRPS.
- RQ 2.2 Which guidelines are frequently reported for bolstering the security of SRPS, and what key themes do they encompass? Understanding the existing guidelines for enhancing security can aid in establishing best practices and identifying gaps where new guidelines might be needed.
- RQ 2.3 What design principles are proposed for augmenting the security of SRPS, and what contributions do they make to the field? Identifying and understanding design principles can provide practical guidance for developing more secure SRPS.
3.1.2. Identification of Primary Studies
Information Source (Digital Database)
Search String Construction
Filtering Strategy
Inclusion and Exclusion Criteria
- Field of Study (IC1): Our study focuses on social robots operating in public spaces, with a specific interest in their security aspects. The definition of “public spaces” encompasses both indoor and outdoor areas that are publicly accessible.
- Methodology (IC2): We consider studies utilising various research methods, as diverse methodologies can provide a broader understanding of the field.
- Publication type (IC3): We only include peer-reviewed studies to ensure the credibility of the information used in our SMS.
- Language (IC4): To standardise the analysis process and eliminate language-related biases, we only include studies published in English.
- Publication period (IC5): Given the rapidly evolving nature of the field, we focus on studies published between 2016 and 2022 to ensure relevance and recency.
- Out of scope (EC1): We exclude studies that do not align with our specific domain of interest. For example, we omit studies concentrating on the appearance, acceptance, trust, and application of social robots unless they specifically address security aspects.
- Secondary studies (EC2): Secondary studies are excluded to ensure that our SMS is built upon primary research.
- Non-English language (EC3): Non-English studies are excluded to eliminate any potential inaccuracies stemming from translation.
- Duplicate studies (EC4): Duplicated studies or extended versions of original papers are excluded to avoid redundancies.
- Inaccessible studies (EC5): To maintain transparency and reproducibility, any studies that are not openly accessible are excluded.
- Front and back matter (EC6): Any search results that only contain front or back matter are excluded, as they do not contribute meaningful data.
- Non-peer-reviewed papers (EC7): Studies that have not undergone the peer-review process are excluded to ensure that only quality research contributes to our SMS.
Quality Assessment
3.1.3. Data Extraction
3.1.4. Validity Threats Identification and Mitigation
Selection Bias
Search String Limitations
Data Extraction Errors
Interpretation Bias
Quality Assessment
Addressing Search String Specificity and Incorporating Varied Terminologies
3.2. Conducting Phase
3.2.1. Searching the WoS Core Collection
3.2.2. Refinement of Search Results and Quality Assessment
3.2.3. Backward Snowballing
3.3. Reporting Phase
4. Results
4.1. Group 1: Unfolding Research Trends and Methodologies Concerning Security Aspects of SRPS
4.1.1. How Has the Research Focus on the Security Aspects of SRPS Evolved over Time?
4.1.2. How Can Insights into the Influence and Impact of These Studies Be Extracted from Their Citation Network?
4.1.3. What Methodologies, Types of Studies, and Thematic Areas Predominantly Characterise Research on the Security Aspects of SRPS?
Research Types
- Solution proposals: These articles chiefly aim to identify challenges and propose technical or methodological remedies. This is the predominant research type in our data set, with 18 studies (P01, P03, P04, P05, P07, P12, P13, P03S2, P05S6, P05S7, P05S8, P06S10, P06S14, P11S21, P13S22, P13S23, P13S24, and P14S25).
- Philosophical/conceptual analyses: These engage with the philosophical dimensions, ethical nuances, or critical discussions related to the topic. Thirteen studies (P02, P06, P10, P14, P03S1, P03S3, P06S9, P06S12, P06S13, P06S14, P10S16, P10S18, and P14S26) belong to this realm.
- Evaluation research/reports: These appraise specific facets or occurrences. There are seven studies (P09, P11, P03S2, P04S5, P06S15, P11S20, and P13S23) in this classification.
- Validation studies: These are dedicated to endorsing particular hypotheses or systems. Papers P08, P06S15, and P14S25 are representatives of this category.
- Experience reports: These describe findings and lessons drawn from specific experiences or implementations. Studies P06S11, P10S17, and P10S19 exemplify this category.
- Focus groups: We identified a single focus-group study, labelled P04S4.
Research Methods
- Qualitative: These studies gravitate towards non-numeric data, leveraging observations, discussions, or narrative interpretations. A total of 15 papers (P02, P06, P10, P03S1, P03S3, P04S4, P04S5, P06S9, P06S11, P06S12, P06S13, P10S16, P10S18, P10S19, and P14S26) in our compilation adopted this approach.
- Quantitative: Here, the emphasis is on numeric data, frequently accompanied by statistical scrutiny. In our data set, ten studies (P07, P08, P11, P13, P14, P03S2, P05S7, P06S15, P11S21, and P14S25) predominantly employed this method.
- Mixed: A significant portion of the studies, numbering 15 (P01, P03, P04, P05, P09, P12, P03S2, P05S6, P05S8, P06S10, P06S14, P06S15, P10S17, P13S23, and P13S24), amalgamated both qualitative and quantitative methods for a comprehensive analysis.
Thematic Insights
- Cybersecurity
- Encompasses a broad range of topics, from network, application, and cloud security to user education, identity management, and cybersecurity regulations.
- Included papers: P03, P04, P05, P06, P07, P10, P13, P03S2, P06S10, P06S13, P10S19, P13S22, P13S23, and P13S24.
- Represents 35% of our primary studies.
- Safety
- Discusses structural safety, motion, fail-safe mechanisms, protection against cyber–physical threats like stalking, emergency responses, and safety during maintenance.
- Included papers: P01, P11S20, and P11S21.
- Represents 7.5% of our primary studies.
- Privacy
- Delves into data collection, consent, anonymisation, transparency, and behavioural privacy. Topics like data security and control have been grouped under cybersecurity.
- Included papers: P03, P04, P04S4, P04S5, P06S13, and P14S26.
- Represents 15% of our primary studies.
- Reliability and continuity
- Covers aspects such as hardware and software reliability, maintenance, network connectivity, fault tolerance, and user interface consistency.
- Included papers: P11, P13, and P05S6.
- Represents 7.5% of our primary studies.
- Legal challenges
- Focuses on data privacy laws, cybersecurity standards, SR liability and insurance, telecommunication regulations compliance, and public space-specific regulations.
- Included papers: P10, P11, P13, P05S6, P06S12, P06S14, and P06S16.
- Represents 17.5% of our primary studies.
- Ethical concerns
- Encompasses themes like human autonomy, informed consent, justice, transparency, and sustainability.
- Included papers: P02, P06, P10, P03S1, P03S3, P06S11, P06S15, and P10S18.
- Represents 22.5% of our primary studies.
- Influence and manipulation
- Explores user profiling, SR’s persuasive capabilities, and accountability mechanisms.
- Included papers: P08, P09, P11, P12, P05S8, and P14S25.
- Represents 15% of our primary studies.
4.2. Group 2: In-Depth Examination of Security Aspects, Normative Guidelines, and Design Principles in SRPS
4.2.1. Specific Security Aspects in SRPS Studies: Consistent Highlights and Definitions
- Cybersecurity of SRs and users: This aspect primarily concerns safeguarding SRPS systems, networks, and data from cyber threats. Common discussions in the studies cover the unique security challenges of SRPS; a broad array of security services, ranging from secure bootstrapping and communication to data storage, software updates, and device management; and threats such as unauthorised publishing, unauthorised data access, and denial-of-service (DoS) attacks. A total of 21 papers, accounting for 52.5% of our primary studies, cover this aspect.
- Data privacy of users: Central to this domain is protecting user data, ensuring data handling that respects individual rights. Major themes involve communication security risk assessments, the enactment of access control policies, implications of AI and robots on privacy, championing the principle of “Privacy by design”, and emphasising that domestic robots should always respect user privacy expectations. This theme is covered in 11 papers, making up 27.5% of our primary studies.
- Physical safety of SRs and users: The focus here is on ensuring the safety of both users and the physical structure of the robots. Studies underscore the significance of safety, reliability, and continuity, particularly in socially assistive robots, and the imperative for designing SRPS systems with user safety as a paramount concern. This aspect is detailed in 11 papers, representing 27.5% of our primary studies.
- Reliability and continuity of SRs: This theme emphasises the crucial need for robots to function reliably and consistently. The importance of both reliability and continuity is frequently underscored in contexts such as socially assistive robot scenarios, and the discussions often touch upon the need for industrial control systems to remain operational even in challenging conditions. Six of our studies, which constitute 15% of the primary research, delve into this facet.
- Legal framework for SRPS: This dimension intersects the realms of technology and legal compliance. Key discussions revolve around the necessity of adhering to established legal norms pertinent to SRPS security, safety, and privacy. Additionally, there is a focus on the debate surrounding robot legal personhood and challenges associated with data recording and logging, especially when personal data are in play. Fourteen of our papers, representing 35% of the primary studies, focus on this area.
- Ethical considerations for SRPS: This domain navigates the intricate waters of moral and philosophical considerations associated with robot deployment. Research consistently brings forth issues such as the inherent challenges in designing machines capable of ethical decision making, the myriad ethical dilemmas users may face when interfacing with AI technology, and the intertwined ethical and legal responsibilities of robot behaviour. This theme finds mention in 16 of our papers, encapsulating 40% of the primary studies.
- User influence and manipulation: This more nuanced theme seeks to understand the potential of robots to subtly shape or alter user behaviours, decisions, and perceptions. Central to this discourse is the unwavering commitment to user security in all its facets, from physical well-being to data integrity. The recurring motif emphasises robots operating reliably, upholding legal mandates, and embedding ethical considerations in both their design and operational paradigms. Five studies, constituting 12.5% of our primary research, touch upon this aspect.
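The data-privacy aspect above centres on handling user data in ways that respect individual rights. As a minimal illustration of the data-minimisation idea, not a technique drawn from any of the reviewed studies, the following Python sketch replaces raw user identifiers with a keyed hash before they are stored or logged; records remain linkable for diagnostics without exposing direct identities. The function name and key-handling scheme are assumptions made for illustration only.

```python
import hashlib
import hmac

def pseudonymise(user_id: str, secret_key: bytes) -> str:
    """Replace a raw identifier with a keyed hash (HMAC-SHA-256) before it
    is stored or logged. Records stay linkable for diagnostics, but the
    direct identity is not exposed; rotating the key severs linkability.
    Illustrative sketch only, not a complete anonymisation pipeline."""
    return hmac.new(secret_key, user_id.encode("utf-8"), hashlib.sha256).hexdigest()
```

In this sketch, the same identifier always maps to the same token under a given key, while different keys yield unrelated tokens, which is why periodic key rotation limits long-term tracking.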
4.2.2. Key Security Guidelines for Social Robots in Public Spaces
- Physical safety and integrity: Robots must have systems to detect and prevent unwanted contact, operate safely, and feature easily accessible emergency stop functions. This is supported by 27.5% of our primary studies and further buttressed by standards such as EN ISO 13482:2014. However, current safety standards, originally aimed at industrial robots, do not adequately address the unique safety requirements and operational environments of SRPS; new safety standards tailored specifically to SRPS are therefore essential [66].
- Data privacy and confidentiality: Robots should not record, store, or transmit personal data without users’ explicit consent. If data are recorded, strong encryption should be used to protect them. Data collection should adhere to data protection regulations and guidelines such as the GDPR and national data protection laws. Twenty papers in our study reaffirm this theme (P02, P03, P04, P05, P06, P07, P12, P13, P14, P03S1, P03S2, P03S3, P04S4, P04S5, P05S7, P06S12, P06S14, P10S17, P14S25, and P14S26). Applicable privacy standards include NIST SP 800-122 (guide to protecting the confidentiality of personally identifiable information, PII), NIST SP 800-53 (security and privacy controls for federal information systems and organisations), ISO/IEC 27701 (privacy information management), and ISO/IEC 27018 (code of practice for protection of PII). The existing data privacy and confidentiality standards, if adequately implemented in SRPS, could address most concerns in this area.
- Communication security: Communication between robots and control servers or other devices should be encrypted. Secure protocols such as TLS should be used for any data transmission, and network vulnerabilities should be regularly assessed and patched. SRPS rely heavily on wireless communication because of their mobile and autonomous nature. Applicable standards include the 3GPP 5G security standards (TS 33.501 v18.0.0, security architecture and procedures for the 5G system, Release 18); WPA3 (Wi-Fi Protected Access III) by the Wi-Fi Alliance; NIST SP 800-77, SP 800-52, and SP 800-113, addressing different aspects of IPsec, SSL, and TLS security; and IETF RFC 8446 (TLS 1.3) and RFC 7296 (IKEv2), widely used for IPsec VPNs. Use cases heavily influence communication security standards; hence, there is a need for SRPS-specific use cases and standards [10]. Almost all papers addressing cybersecurity reaffirmed the need for communication security.
- Authentication and authorisation: Only authorised personnel should have access to the robot’s controls and data. User roles and permissions should be clearly defined. Passwords or other authentication methods should be regularly updated and changed. This theme is a subset of the cybersecurity of SRPS. Applicable security standards reaffirming this theme include (i) NIST SP 800-63 focusing on digital identity guidelines, and (ii) IETF RFC 6749 (OAuth 2.0) and RFC 4120 (The Kerberos network authentication service v.5), among others.
- Operational transparency: The robot should clearly indicate when it is recording or collecting data, and robots should be easily identifiable with visible markings or badges. This theme is a subset of data privacy. Applicable standards include IEEE 7001-2021 [67], which defines measurable, testable levels of transparency for autonomous systems [68], and ISO/TS 15066 [69], which covers collaborative robot systems and includes safety requirements that can serve as a foundation for transparency around safety.
- Robustness against cyber attacks: The robot’s software should be regularly updated to patch known vulnerabilities, and there should be a mechanism to detect and respond to any unauthorised intrusion or malware. For our SMS, this theme is a subset of cybersecurity. Applicable standards include (i) ISO/SAE 21434, focusing on cybersecurity risk management for road vehicles, including threat analysis and vulnerability assessments; (ii) ISO/IEC 27001 on general information security; (iii) IEC 62443, focusing on industrial network security, which can be applied to broader robotic applications; and (iv) NIST SP 800-183, focusing on IoT security, which can be adapted for SRPS.
- Human interaction protocols: SRs should have guidelines for interacting with different age groups, especially vulnerable populations such as children, people living with disabilities, and senior citizens. SRs should be designed to understand and respect social norms and boundaries. This guideline is reaffirmed by P09, P12, P03S1, and P03S2, among other primary studies. Most existing standards focus on users’ safety, privacy, and security; this theme therefore needs special attention for the successful adoption of SRPS.
- Monitoring and reporting: Robots should be monitored for any irregular or unintended behaviours, and any security breaches or unusual events should be logged and reported promptly. This theme is covered extensively within the cybersecurity literature on SRPS.
- Environment-aware operation: Robots should be aware of their environment and adjust their operation mode accordingly. For instance, a robot should operate differently in a crowded space than in an empty one.
- Regular testing and validation: The robot’s systems should be regularly tested to ensure that they function correctly and safely. Various scenarios can be simulated to validate the robot’s response to security and safety situations.
- Ethical considerations: Always consider the ethical implications of deploying SRs, especially regarding privacy and human rights. Guidelines and policies should be in place to prevent misuse or unethical behaviour by robots. The many ongoing calls, debates, and concerns about the ethical implications of SRPS underline the need for standardised ethical guidelines.
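Several of the guidelines above (communication security, robustness against cyber attacks) call for encrypted, authenticated channels such as TLS. As a hedged illustration, assuming a robot that reports telemetry to a control server over TCP, the following Python sketch builds a client-side TLS context that enforces TLS 1.3 (RFC 8446) and full certificate verification. It is a minimal sketch of one configuration choice, not a complete SRPS communication stack, and the function name is our own.

```python
import ssl

def make_robot_tls_context() -> ssl.SSLContext:
    """Client-side TLS context for robot-to-server telemetry: enforce
    TLS 1.3 (RFC 8446) and full certificate verification, in line with
    the communication-security guideline. Illustrative sketch only."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # reject legacy protocol versions
    ctx.check_hostname = True                     # bind the certificate to the server name
    ctx.verify_mode = ssl.CERT_REQUIRED           # refuse unauthenticated peers
    return ctx
```

A deployment would additionally pin or provision the server’s CA certificate on the robot and schedule certificate rotation; those operational details are outside the scope of this sketch.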
4.2.3. What Design Principles Are Proposed for Augmenting the Security of SRPS, and What Contributions Do They Make to the Field?
- Security by design: This principle emphasises integrating security measures and considerations into the system’s design and architecture from the outset. It ensures that security requirements, controls, and mechanisms are incorporated throughout the development lifecycle. This creates inherently secure and resilient systems, reducing vulnerabilities and potential threats. Studies P03, P07, P13, and P13S22 explicitly referenced this principle, while others indirectly suggested its application.
- Privacy by design: This principle focuses on proactively integrating privacy considerations into developing systems, products, and processes. It ensures that individuals’ privacy is safeguarded from the start, contributing to enhanced data protection, regulatory compliance, user trust, and ethical innovation. P06S15 and P14S26 explicitly mentioned this principle, while other studies implied its application.
- Human-centred design: This approach places users and their interactions at the core of design processes, creating user-friendly and relevant solutions. It contributes to security by ensuring user-friendly security features, usable authentication methods, accessible security, and promoting user trust in technology adoption. Studies P09, P12, P03S1, P03S2, and P03S3 directly referenced this principle.
- Least privilege: This principle dictates granting users and processes the minimum necessary access rights for their tasks. It limits attack surfaces, unauthorised actions, lateral movement during attacks, and the impact of breaches. Studies P07 and P03S2 explicitly mentioned this principle, while others indirectly referred to it.
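The least-privilege principle above can be made concrete with a small sketch. Assuming a hypothetical SRPS with three operator roles and a handful of actions (all names below are illustrative assumptions, not taken from the reviewed studies), access is granted only where a role’s tasks require it and denied by default everywhere else:

```python
# Hypothetical role and permission model illustrating least privilege:
# each role is granted only the capabilities its tasks require, and
# anything not explicitly granted is denied.
ROLE_PERMISSIONS = {
    "visitor":    {"query_info"},
    "operator":   {"query_info", "drive", "view_logs"},
    "maintainer": {"query_info", "drive", "view_logs", "update_firmware"},
}

def is_authorised(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions receive no access."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The deny-by-default lookup is the essential design choice: an attacker who obtains a low-privilege role cannot reach firmware updates or logs, limiting the blast radius of a compromise.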
5. Discussion
5.1. Principal Insights and Practical Applications Derived from the Study
- Emerging security and safety standards: Our study emphasises the pressing need for updated and comprehensive security and safety standards tailored to the unique requirements of SRPS. Current standards, predominantly based on industrial robots, are ill-suited to govern the new social robots interacting closely with the public in diverse environments.
- Data privacy and integrity: As identified in our research, robust data privacy and confidentiality measures are essential. With myriad potential data interactions between SRPS and the public, strict adherence to data protection laws, such as the GDPR, and the implementation of strong encryption protocols are imperative.
- Human-centric interaction protocols: Our study reiterates the need for clear and ethical interaction protocols, particularly when robots interact with vulnerable populations. These protocols should be informed by a deep understanding of social norms and human behaviour, advocating for respect and empathy in robot design.
- Responsive and adaptive operations: SRPS need to be cognizant of their environments and capable of adapting their behaviours accordingly. This ensures both the safety of the individuals they interact with and the robots’ integrity.
- Ethical considerations: Our study underscores the burgeoning debate on the ethical implications of SRPS. As these robots become more integrated into public life, they must be designed and programmed to respect human rights and operate within clear ethical boundaries.
- Robot Design
- Addressing safety and ethical considerations through human-centric interaction protocols: Our findings stress the need for SRPS to have clearly defined ethical interaction protocols, especially when engaging with vulnerable population groups. This implies that robots should be designed to understand and respect human norms and behaviours, ensuring that empathy and respect are central to their programming.
- Ensuring data privacy through robust security protocols: The highlighted need for stringent data privacy and integrity measures necessitates that social robots be equipped with sophisticated security protocols to guard against data breaches and ensure compliance with the GDPR and national applicable laws.
- Regulatory Frameworks
- Addressing legal aspects through tailored legal frameworks: The study identifies a significant gap in existing legal frameworks’ capacity to accommodate the unique challenges posed by SRPS. Legislators can draw insights from this study to craft laws that govern the operation and deployment of SRPS, addressing issues like user influence and potential manipulation.
- Addressing ethical concerns through ethical oversights: Regulatory bodies would benefit from establishing committees to oversee the ethical dimensions of SRPS, ensuring that they operate within defined ethical boundaries and respect human rights. This could also involve formulating standardised testing and validation protocols for evaluating SRPS before deployment.
- Deployment Strategies
- Employing adaptive operations through responsive SRPS deployment: When deploying SRPS, it is vital to ensure that they are cognizant of their surroundings and can adapt their behaviours accordingly, safeguarding the individuals they interact with while maintaining their own integrity.
- Improving users’ experience through user-centric design: Insights from this study should encourage developers to focus on enhancing the user experience, paying attention to aspects like accessibility and the psychological impact SRPS could have on individuals and communities.
- Shaping future research through trans-disciplinary collaboration: Stakeholders in the industry should foster collaborations with researchers to delve into the prospective research avenues identified in this study, working towards the secure, ethical, and effective integration of SRPS in public spaces.
5.2. Conceptual Framework Illustrating the Interrelated Key Dimensions of Security in SRPS
5.3. Delving into the Security Challenges of SRPS: Concrete Instances and Forward-Looking Solutions
- Data privacy and integrity
- Challenges: The interception of data transferred between SRPS and central servers, leading to privacy breaches.
- Potential solutions: Implementing end-to-end encryption and robust authentication mechanisms.
- Practical implications: Enhancing data privacy will build trust among users and foster broader acceptance of SRPS.
- Safety standards
- Challenges: The existing safety standards are derived from industrial robot frameworks and are unsuited to SRPS operating in diverse and dynamic public spaces.
- Potential solutions: Developing safety standards specifically tailored for SRPS, emphasising real-time adaptive safety mechanisms.
- Practical implications: Customised safety standards would ensure the safe interaction of SRPS with humans, mitigating risks and preventing accidents.
- Ethical considerations
- Challenges: The potential for SRPS to be used unethically, such as for surveillance or influencing user behaviour subtly.
- Potential solutions: Establishing ethical guidelines that dictate the operations of SRPS, including transparency in data usage and respecting user autonomy.
- Practical implications: Addressing ethical concerns would foster a responsible deployment of SRPS, safeguarding individual and societal values.
- Legal frameworks
- Challenges: The current legal frameworks are inadequate to address the unique challenges posed by SRPS, including liability issues in case of malfunctions or accidents.
- Potential solutions: Crafting comprehensive legal frameworks that outline the responsibilities and liabilities associated with deploying SRPS.
- Practical implications: Legal frameworks would provide a clear pathway for accountability, promoting responsible innovation and deployment of SRPS.
- Human-centric interaction protocols
- Challenges: Designing SRPS that can appropriately and ethically interact with diverse populations, including vulnerable groups.
- Potential solutions: Incorporating a deep understanding of social norms and human behaviour into the design of interaction protocols, guided by interdisciplinary teams including psychologists, sociologists, and ethicists.
- Practical implications: Human-centric designs would foster positive interactions between humans and SRPS, enhancing the overall user experience and promoting inclusivity.
- Cybersecurity
- Challenges: The potential for SRPS to be targeted in cyber-attacks, including hacking and unauthorised control.
- Potential solutions: Developing sophisticated cybersecurity protocols and frameworks, including regular updates and patches to address vulnerabilities.
- Practical implications: Strengthening cybersecurity would protect SRPS from malicious attacks, ensuring their safe and reliable operation.
- User influence and manipulation
- Challenges: The possibility that SRPS unduly influence or manipulate users through persuasive design techniques.
- Potential solutions: Creating guidelines that restrict manipulative design practices and ensure the transparent operation of SRPS.
- Practical implications: Addressing this challenge would preserve user autonomy and prevent potential misuse of SRPS in public spaces.
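The real-time adaptive safety mechanisms proposed above for SRPS-specific standards can be sketched minimally. Assuming the robot has a sensor-derived estimate of nearby crowd density (the thresholds, mode names, and speed caps below are illustrative assumptions, not values from any standard), one simple policy maps density to a progressively more conservative operating mode:

```python
def select_operating_mode(people_nearby: int, max_speed_mps: float = 1.5) -> tuple[str, float]:
    """Map sensed crowd density to a conservative operating mode and speed cap.

    Thresholds, mode names, and speeds are illustrative assumptions only.
    """
    if people_nearby < 0:
        raise ValueError("people_nearby must be non-negative")
    if people_nearby == 0:
        return ("normal", max_speed_mps)
    if people_nearby <= 5:
        return ("cautious", max_speed_mps * 0.5)  # reduced speed, wider clearance
    return ("minimal", 0.0)                       # halt motion until the area clears
```

A real adaptive safety standard would also specify sensing reliability, reaction-time bounds, and fail-safe behaviour on sensor loss; this sketch covers only the mode-selection step.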
5.4. Future Research Directions in SRPS Security
- Cultural Variability in SRPS Acceptance
- Research Area: Investigating how different cultures perceive and interact with SRPS.
- Unanswered Question: How can SRPS be designed to align with various cultural norms and expectations without compromising security?
- Standardised Testing and Validation Protocols
- Research Area: Developing robust and standardised protocols for testing and validating SRPS security measures.
- Unanswered Question: How can these protocols encompass a wide array of scenarios to ensure the readiness of SRPS for deployment in diverse public spaces?
- Real-World Case Studies Analysis
- Research Area: Conducting in-depth analyses of real-world SRPS deployments to glean insights into practical challenges and the effectiveness of existing security measures.
- Unanswered Question: What lessons can be drawn from practical deployments to inform more secure and efficient future SRPS implementations?
- Comprehensive Security Framework Development
- Research Area: Creating a comprehensive security framework that integrates various aspects like data privacy and ethical considerations into the SRPS development process.
- Unanswered Question: How can this framework serve as a benchmark for evaluating the security readiness of SRPS before their deployment?
- Ethical and Regulatory Implications
- Research Area: Exploring the legal landscape and the ethical dimensions governing the deployment and operation of SRPS.
- Unanswered Question: What new laws or amendments are needed to address the unique challenges posed by SRPS, and how can they safeguard individual and community well-being?
- User Experience and Human–Robot Interaction
- Research Area: Investigating the psychological impact of SRPS on individuals and communities, focusing on user experience design and accessibility.
- Unanswered Question: How can SRPS be designed to enhance user experience while mitigating potential negative psychological impacts?
6. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Data Extraction Checklist
- Procedure
- Group 1: Data Related to Research Trends
- Study ID: Unique identifier for each study
- Title of Study: Official name of the study
- Year of Publication: The year the study was published
- Author(s) Names: The names of the researchers who conducted the study
- Author’s Affiliation Type: The sector to which the authors belong (Academia, Industry, or Both)
- Source of Study: The bibliographic database from which the study was sourced
- Type of Study: The format in which the study was published (Journal Article, Conference Paper, or Book Section)
- Name of Journal/Conference: The name of the journal or conference where the study was published
- Number of Citations (from Google Scholar): The frequency with which the study has been cited, as indicated by Google Scholar
- Focus of the Study: The primary topic or objective of the study
- Type of Research: The nature of the research conducted (Quantitative, Qualitative, or Mixed)
- Research Method Employed: The approach or methodology used to conduct the research (Validation, Evaluation, Solution Proposal, Philosophical Paper, Opinion Paper, or Experience Report)
- Group 2: Data Related to Security Aspects
- Reported Security Aspects: The different security aspects addressed in the study (Cybersecurity, Privacy, Safety, Reliability and Continuity, Legal aspects, Ethical aspects, and User Influence and Manipulation)
- Reported Security Principles: Any principles or guidelines related to security that are discussed in the study.
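The checklist above is effectively a record schema. As an illustration only (the field names are ours, not the authors' actual extraction spreadsheet), the two groups could be modelled as a single typed record:

```python
from dataclasses import dataclass, field
from typing import List

# Controlled vocabularies taken from the checklist above.
AFFILIATIONS = {"Academia", "Industry", "Both"}
STUDY_TYPES = {"Journal Article", "Conference Paper", "Book Section"}
RESEARCH_TYPES = {"Quantitative", "Qualitative", "Mixed"}

@dataclass
class ExtractionRecord:
    # Group 1: data related to research trends
    study_id: str                # unique identifier, e.g. "P01"
    title: str
    year: int
    authors: List[str]
    affiliation_type: str        # one of AFFILIATIONS
    source_database: str         # bibliographic database the study came from
    study_type: str              # one of STUDY_TYPES
    venue: str                   # journal or conference name
    citations: int               # citation count from Google Scholar
    focus: str                   # primary topic or objective
    research_type: str           # one of RESEARCH_TYPES
    research_method: str         # e.g. "Validation", "Evaluation", "Solution Proposal"
    # Group 2: data related to security aspects
    security_aspects: List[str] = field(default_factory=list)
    security_principles: List[str] = field(default_factory=list)

    def __post_init__(self):
        # Reject values outside the checklist's controlled vocabularies.
        assert self.affiliation_type in AFFILIATIONS
        assert self.study_type in STUDY_TYPES
        assert self.research_type in RESEARCH_TYPES
```

A record per primary or snowballed study, validated this way, would make the counts reported in the appendix tables directly reproducible.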
References
- Mubin, O.; Ahmad, M.I.; Kaur, S.; Shi, W.; Khan, A. Social Robots in Public Spaces: A Meta-review. In Proceedings of the Social Robotics, Qingdao, China, 28–30 November 2018; Lecture Notes in Computer Science. Ge, S.S., Cabibihan, J.J., Salichs, M.A., Broadbent, E., He, H., Wagner, A.R., Castro-González, Á., Eds.; pp. 213–220. [Google Scholar] [CrossRef]
- Aymerich-Franch, L.; Ferrer, I. Social Robots as a Brand Strategy. In Innovation in Advertising and Branding Communication; Routledge: London, UK, 2020. [Google Scholar]
- Kyrarini, M.; Lygerakis, F.; Rajavenkatanarayanan, A.; Sevastopoulos, C.; Nambiappan, H.R.; Chaitanya, K.K.; Babu, A.R.; Mathew, J.; Makedon, F. A Survey of Robots in Healthcare. Technologies 2021, 9, 8. [Google Scholar] [CrossRef]
- Guggemos, J.; Seufert, S.; Sonderegger, S.; Burkhard, M. Social Robots in Education: Conceptual Overview and Case Study of Use. In Orchestration of Learning Environments in the Digital World; Cognition and Exploratory Learning in the Digital Age; Ifenthaler, D., Isaías, P., Sampson, D.G., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 173–195. [Google Scholar] [CrossRef]
- Oruma, S.O.; Sánchez-Gordón, M.; Colomo-Palacios, R.; Gkioulos, V.; Hansen, J.K. A Systematic Review on Social Robots in Public Spaces: Threat Landscape and Attack Surface. Computers 2022, 11, 181. [Google Scholar] [CrossRef]
- Sarrica, M.; Brondi, S.; Fortunati, L. How Many Facets Does a “Social Robot” Have? A Review of Scientific and Popular Definitions Online. Inf. Technol. People 2019, 33, 1–21. [Google Scholar] [CrossRef]
- Golubchikov, O.; Thornbush, M. Artificial Intelligence and Robotics in Smart City Strategies and Planned Smart Development. Smart Cities 2020, 3, 1133–1144. [Google Scholar] [CrossRef]
- Ayele, Y.Z.; Chockalingam, S.; Lau, N. Threat Actors and Methods of Attack to Social Robots in Public Spaces. In Proceedings of the HCI for Cybersecurity, Privacy and Trust, Copenhagen, Denmark, 23–28 July 2023; Lecture Notes in Computer Science. Moallem, A., Ed.; pp. 262–273. [Google Scholar] [CrossRef]
- Morales, C.G.; Carter, E.J.; Tan, X.Z.; Steinfeld, A. Interaction Needs and Opportunities for Failing Robots. In Proceedings of the 2019 on Designing Interactive Systems Conference, New York, NY, USA, 17–19 August 2019; DIS ’19. pp. 659–670. [Google Scholar] [CrossRef]
- Oruma, S.O.; Petrovic, S. Security Threats to 5G Networks for Social Robots in Public Spaces: A Survey. IEEE Access 2023, 11, 63205–63237. [Google Scholar] [CrossRef]
- Boada, J.P.; Maestre, B.R.; Genís, C.T. The Ethical Issues of Social Assistive Robotics: A Critical Literature Review. Technol. Soc. 2021, 67, 101726. [Google Scholar] [CrossRef]
- Fosch-Villaronga, E.; Mahler, T. Cybersecurity, Safety and Robots: Strengthening the Link between Cybersecurity and Safety in the Context of Care Robots. Comput. Law Secur. Rev. 2021, 41, 105528. [Google Scholar] [CrossRef]
- Mayoral-Vilches, V. Robot Cybersecurity, a Review. Int. J. Cyber Forensics Adv. Threat. Investig. 2021, 20. [Google Scholar]
- Mavrogiannis, C.; Baldini, F.; Wang, A.; Zhao, D.; Trautman, P.; Steinfeld, A.; Oh, J. Core Challenges of Social Robot Navigation: A Survey. ACM Trans. Hum.-Robot Interact. 2023, 12, 36:1–36:39. [Google Scholar] [CrossRef]
- Lutz, C.; Schöttler, M.; Hoffmann, C.P. The Privacy Implications of Social Robots: Scoping Review and Expert Interviews. Mob. Media Commun. 2019, 7, 412–434. [Google Scholar] [CrossRef]
- Andraško, J.; Mesarčík, M.; Hamuľák, O. The Regulatory Intersections between Artificial Intelligence, Data Protection and Cyber Security: Challenges and Opportunities for the EU Legal Framework. AI Soc. 2021, 36, 623–636. [Google Scholar] [CrossRef]
- Baisch, S.; Kolling, T.; Schall, A.; Rühl, S.; Selic, S.; Kim, Z.; Rossberg, H.; Klein, B.; Pantel, J.; Oswald, F.; et al. Acceptance of Social Robots by Elder People: Does Psychosocial Functioning Matter? Int. J. Soc. Robot. 2017, 9, 293–307. [Google Scholar] [CrossRef]
- Recchiuto, C.T.; Sgorbissa, A. A Feasibility Study of Culture-Aware Cloud Services for Conversational Robots. IEEE Robot. Autom. Lett. 2020, 5, 6559–6566. [Google Scholar] [CrossRef]
- Vulpe, A.; Crăciunescu, R.; Drăgulinescu, A.M.; Kyriazakos, S.; Paikan, A.; Ziafati, P. Enabling Security Services in Socially Assistive Robot Scenarios for Healthcare Applications. Sensors 2021, 21, 6912. [Google Scholar] [CrossRef] [PubMed]
- Petersen, K.; Vakkalanka, S.; Kuzniarz, L. Guidelines for Conducting Systematic Mapping Studies in Software Engineering: An Update. Inf. Softw. Technol. 2015, 64, 1–18. [Google Scholar] [CrossRef]
- Petersen, K.; Feldt, R.; Mujtaba, S.; Mattsson, M. Systematic Mapping Studies in Software Engineering. In Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering (EASE), Bari, Italy, 26–27 June 2008. [Google Scholar] [CrossRef]
- Kitchenham, B. Procedures for Performing Systematic Reviews; Keele University Technical Report TR/SE-0401; Keele University: Keele, UK, 2004; Volume 33, pp. 1–26. [Google Scholar]
- Kitchenham, B.; Brereton, P. A Systematic Review of Systematic Review Process Research in Software Engineering. Inf. Softw. Technol. 2013, 55, 2049–2075. [Google Scholar] [CrossRef]
- Weidt, F.; Rodrigo, S. Systematic Literature Review in Computer Science: A Practical Guide; Technical Report RelaTeDCC 002/2016; Federal University of Juiz de Fora: Juiz de Fora, Brazil, 2016. [Google Scholar]
- Oruma, S.O.; Ayele, Y.Z.; Sechi, F.; Rødsethol, H. Supplementary Materials to “Security Aspects of Social Robots in Public Spaces: A Systematic Mapping Study”—Mendeley Data. Mendeley Data 2023. [Google Scholar] [CrossRef]
- Ampatzoglou, A.; Bibi, S.; Avgeriou, P.; Verbeek, M.; Chatzigeorgiou, A. Identifying, Categorizing and Mitigating Threats to Validity in Software Engineering Secondary Studies. Inf. Softw. Technol. 2019, 106, 201–230. [Google Scholar] [CrossRef]
- Sjøberg, D.I.; Bergersen, G.R. Improving the Reporting of Threats to Construct Validity. In Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering, Oulu, Finland, 14–16 June 2023; EASE ’23. pp. 205–209. [Google Scholar] [CrossRef]
- Liu, X.; Ge, S.S.; Zhao, F.; Mei, X. A Dynamic Behavior Control Framework for Physical Human-Robot Interaction. J. Intell. Robot. Syst. 2020, 101, 14. [Google Scholar] [CrossRef]
- Farina, M.; Zhdanov, P.; Karimov, A.; Lavazza, A. AI and Society: A Virtue Ethics Approach. AI Soc. 2022, 1–14. [Google Scholar] [CrossRef]
- Marchang, J.; Di Nuovo, A. Assistive Multimodal Robotic System (AMRSys): Security and Privacy Issues, Challenges, and Possible Solutions. Appl. Sci. 2022, 12, 2174. [Google Scholar] [CrossRef]
- Sharkey, A.; Sharkey, N. We Need to Talk about Deception in Social Robotics! Ethics Inf. Technol. 2021, 23, 309–316. [Google Scholar] [CrossRef]
- Cerrudo, C. Hacking Robots Before Skynet. In Cybersecurity Insights; IOActive Inc.: Seattle, WA, USA, 2017; pp. 1–17. [Google Scholar]
- Wachter, S.; Mittelstadt, B.; Floridi, L. Transparent, Explainable, and Accountable AI for Robotics. Sci. Robot. 2017, 2, eaan6080. [Google Scholar] [CrossRef]
- Lin, P.C.; Yankson, B.; Chauhan, V.; Tsukada, M. Building a Speech Recognition System with Privacy Identification Information Based on Google Voice for Social Robots. J. Supercomput. 2022, 78, 15060–15088. [Google Scholar] [CrossRef]
- Krupp, M.M.; Rueben, M.; Grimm, C.M.; Smart, W.D. Privacy and Telepresence Robotics: What Do Non-scientists Think? In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 6–9 March 2017; HRI ’17. pp. 175–176. [Google Scholar] [CrossRef]
- Rueben, M.; Bernieri, F.J.; Grimm, C.M.; Smart, W.D. Framing Effects on Privacy Concerns about a Home Telepresence Robot. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 6–9 March 2017; HRI ’17. pp. 435–444. [Google Scholar] [CrossRef]
- Abate, A.F.; Barra, P.; Bisogni, C.; Cascone, L.; Passero, I. Contextual Trust Model with a Humanoid Robot Defense for Attacks to Smart Eco-Systems. IEEE Access 2020, 8, 207404–207414. [Google Scholar] [CrossRef]
- Silva, J.R.; Simão, M.; Mendes, N.; Neto, P. Navigation and Obstacle Avoidance: A Case Study Using Pepper Robot. In Proceedings of the IECON 2019—45th Annual Conference of the IEEE Industrial Electronics Society, Lisbon, Portugal, 14–17 October 2019; Volume 1, pp. 5263–5268. [Google Scholar] [CrossRef]
- Barra, P.; Bisogni, C.; Rapuano, A.; Abate, A.F.; Iovane, G. HiMessage: An Interactive Voice Mail System with the Humanoid Robot Pepper. In Proceedings of the 2019 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), Fukuoka, Japan, 5–8 August 2019; pp. 652–656. [Google Scholar] [CrossRef]
- Abate, A.F.; Bisogni, C.; Cascone, L.; Castiglione, A.; Costabile, G.; Mercuri, I. Social Robot Interactions for Social Engineering: Opportunities and Open Issues. In Proceedings of the 2020 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), Calgary, AB, Canada, 17–22 August 2020; pp. 539–547. [Google Scholar] [CrossRef]
- Poulsen, A.; Fosch-Villaronga, E.; Burmeister, O.K. Cybersecurity, Value Sensing Robots for LGBTIQ+ Elderly, and the Need for Revised Codes of Conduct. Australas. J. Inf. Syst. 2020, 24, 1–16. [Google Scholar] [CrossRef]
- Clark, G.W.; Doran, M.V.; Andel, T.R. Cybersecurity Issues in Robotics. In Proceedings of the 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), Savannah, GA, USA, 27–31 March 2017; pp. 1–5. [Google Scholar] [CrossRef]
- Cresswell, K.; Cunningham-Burley, S.; Sheikh, A. Health Care Robotics: Qualitative Exploration of Key Challenges and Future Directions. J. Med. Internet Res. 2018, 20, e10410. [Google Scholar] [CrossRef]
- Fosch-Villaronga, E. Robots, Healthcare, and the Law: Regulating Automation in Personal Care, 1st ed.; Routledge: London, UK, 2019. [Google Scholar] [CrossRef]
- Fosch-Villaronga, E.; Felzmann, H.; Ramos-Montero, M.; Mahler, T. Cloud Services for Robotic Nurses? Assessing Legal and Ethical Issues in the Use of Cloud Services for Healthcare Robots. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 290–296. [Google Scholar] [CrossRef]
- Fosch-Villaronga, E.; Millard, C. Cloud Robotics Law and Regulation: Challenges in the Governance of Complex and Dynamic Cyber–Physical Ecosystems. Robot. Auton. Syst. 2019, 119, 77–91. [Google Scholar] [CrossRef]
- Poulsen, A.; Burmeister, O.K.; Kreps, D. The Ethics of Inherent Trust in Care Robots for the Elderly. In This Changes Everything—ICT and Climate Change: What Can We Do? IFIP Advances in Information and Communication Technology; Kreps, D., Ess, C., Leenen, L., Kimppa, K., Eds.; Springer Nature Switzerland AG: Cham, Switzerland, 2018; pp. 314–328. [Google Scholar] [CrossRef]
- Bryson, J.J.; Diamantis, M.E.; Grant, T.D. Of, for, and by the People: The Legal Lacuna of Synthetic Persons. Artif. Intell. Law 2017, 25, 273–291. [Google Scholar] [CrossRef]
- Zhang, Y.; Qian, Y.; Wu, D.; Hossain, M.S.; Ghoneim, A.; Chen, M. Emotion-Aware Multimedia Systems Security. IEEE Trans. Multimed. 2019, 21, 617–624. [Google Scholar] [CrossRef]
- Saunderson, S.P.; Nejat, G. Persuasive Robots Should Avoid Authority: The Effects of Formal and Real Authority on Persuasion in Human-Robot Interaction. Sci. Robot. 2021, 6, eabd5186. [Google Scholar] [CrossRef] [PubMed]
- Schneider, S.; Liu, Y.; Tomita, K.; Kanda, T. Stop Ignoring Me! On Fighting the Trivialization of Social Robots in Public Spaces. ACM Trans. Hum.-Robot Interact. 2022, 11, 11:1–11:23. [Google Scholar] [CrossRef]
- Giansanti, D.; Gulino, R.A. The Cybersecurity and the Care Robots: A Viewpoint on the Open Problems and the Perspectives. Healthcare 2021, 9, 1653. [Google Scholar] [CrossRef] [PubMed]
- Gordon, J.S. Building Moral Robots: Ethical Pitfalls and Challenges. Sci. Eng. Ethics 2020, 26, 141–157. [Google Scholar] [CrossRef]
- Miller, J.; Williams, A.B.; Perouli, D. A Case Study on the Cybersecurity of Social Robots. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; HRI ’18. pp. 195–196. [Google Scholar] [CrossRef]
- Akalin, N.; Kristoffersson, A.; Loutfi, A. The Influence of Feedback Type in Robot-Assisted Training. Multimodal Technol. Interact. 2019, 3, 67. [Google Scholar] [CrossRef]
- Akalin, N.; Kristoffersson, A.; Loutfi, A. Evaluating the Sense of Safety and Security in Human–Robot Interaction with Older People. In Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction; Human–Computer Interaction Series; Korn, O., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 237–264. [Google Scholar] [CrossRef]
- Akalin, N.; Kiselev, A.; Kristoffersson, A.; Loutfi, A. An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security. In Proceedings of the Social Robotics, Tsukuba, Japan, 22–24 November 2017; Lecture Notes in Computer Science. Kheddar, A., Yoshida, E., Ge, S.S., Suzuki, K., Cabibihan, J.J., Eyssel, F., He, H., Eds.; Springer Nature Switzerland AG: Cham, Switzerland, 2017; pp. 628–637. [Google Scholar] [CrossRef]
- Randall, N.; Sabanovic, S.; Milojevic, S.; Gupta, A. Top of the Class: Mining Product Characteristics Associated with Crowdfunding Success and Failure of Home Robots. Int. J. Soc. Robot. 2022, 14, 149–163. [Google Scholar] [CrossRef]
- Mazzeo, G.; Staffa, M. TROS: Protecting Humanoids ROS from Privileged Attackers. Int. J. Soc. Robot. 2020, 12, 827–841. [Google Scholar] [CrossRef]
- Breiling, B.; Dieber, B.; Schartner, P. Secure Communication for the Robot Operating System. In Proceedings of the 2017 Annual IEEE International Systems Conference (SysCon), Montreal, QC, Canada, 24–27 April 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Dieber, B.; Breiling, B.; Taurer, S.; Kacianka, S.; Rass, S.; Schartner, P. Security for the Robot Operating System. Robot. Auton. Syst. 2017, 98, 192–203. [Google Scholar] [CrossRef]
- Dieber, B.; Kacianka, S.; Rass, S.; Schartner, P. Application-Level Security for ROS-based Applications. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 4477–4482. [Google Scholar] [CrossRef]
- Chatterjee, S.; Chaudhuri, R.; Vrontis, D. Usage Intention of Social Robots for Domestic Purpose: From Security, Privacy, and Legal Perspectives. Inf. Syst. Front. 2021, 1–16. [Google Scholar] [CrossRef]
- Chatterjee, S. Impact of AI Regulation on Intention to Use Robots: From Citizens and Government Perspective. Int. J. Intell. Unmanned Syst. 2019, 8, 97–114. [Google Scholar] [CrossRef]
- Pagallo, U. The Impact of Domestic Robots on Privacy and Data Protection, and the Troubles with Legal Regulation by Design. In Data Protection on the Move: Current Developments in ICT and Privacy/Data Protection; Law, Governance and Technology Series; Gutwirth, S., Leenes, R., De Hert, P., Eds.; Springer: Dordrecht, The Netherlands, 2016; pp. 387–410. [Google Scholar] [CrossRef]
- Salvini, P.; Paez-Granados, D.; Billard, A. On the Safety of Mobile Robots Serving in Public Spaces: Identifying Gaps in EN ISO 13482:2014 and Calling for a New Standard. ACM Trans. Hum.-Robot Interact. 2021, 10, 1–27. [Google Scholar] [CrossRef]
- IEEE Std 7001-2021—IEEE Standard for Transparency of Autonomous Systems. Available online: https://ieeexplore.ieee.org/document/9726144 (accessed on 19 September 2023).
- Winfield, A.F.T.; Booth, S.; Dennis, L.A.; Egawa, T.; Hastie, H.; Jacobs, N.; Muttram, R.I.; Olszewska, J.I.; Rajabiyazdi, F.; Theodorou, A.; et al. IEEE P7001: A Proposed Standard on Transparency. Front. Robot. AI 2021, 8, 665729. [Google Scholar] [CrossRef] [PubMed]
- ISO/TS 15066:2016 Robots and Robotic Devices—Collaborative Robots. Available online: https://www.iso.org/standard/62996.html (accessed on 19 September 2023).
- Oruma, S.O. Towards a User-centred Security Framework for Social Robots in Public Spaces. In Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering, Oulu, Finland, 14–16 June 2023; EASE ’23. pp. 292–297. [Google Scholar] [CrossRef]
# | Search String | Results | URL |
---|---|---|---|
1 | “social robot*” | 1874 | https://www.webofscience.com/wos/woscc/summary/ca310483-0044-4a22-87bf-b9e7509c4271-9988ee6c/relevance/1 (accessed on 23 July 2023) |
2 | “*security” | 174,251 | https://www.webofscience.com/wos/woscc/summary/e076d38e-5e94-49ad-96a2-b1300e5a415c-99890f92/relevance/1 (accessed on 23 July 2023) |
3 | #1 AND #2 | 30 | https://www.webofscience.com/wos/woscc/summary/ed038b0d-c93f-4778-822e-286eea6badae-99892213/relevance/1 (accessed on 23 July 2023) |
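The table above records the Web of Science search protocol: two topic searches with wildcards, then their Boolean conjunction. As a small illustrative sketch (not the authors' actual tooling), the combined query string can be composed as follows; in WoS topic searches, `*` matches zero or more characters:

```python
# Query #1: matches "social robot", "social robots", "social robotics", ...
social = '"social robot*"'
# Query #2: matches "security", "cybersecurity", "biosecurity", ...
security = '"*security"'

# Query #3: conjunction of the two topic searches (#1 AND #2).
combined = f"{social} AND {security}"
print(combined)  # "social robot*" AND "*security"
```

The narrowing effect is visible in the result counts: 1874 and 174,251 hits individually, but only 30 for the conjunction.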
Study ID | # of Refs. | Round 1 | Round 2 | Assigned Study ID |
---|---|---|---|---|
P01 | 37 | 1 | 0 | |
P02 | 135 | 0 | 0 | |
P03 | 79 | 7 | 3 | P03S1, P03S2, P03S3 |
P04 | 80 | 3 | 2 | P04S4, P04S5
P05 | 48 | 12 | 3 | P05S6, P05S7, P05S8 |
P06 | 58 | 13 | 7 | P06S9, P06S10, P06S11, P06S12, P06S13, P06S14, P06S15
P07 | 29 | 0 | 0 | |
P08 | 110 | 0 | 0 | |
P09 | 42 | 5 | 0 | |
P10 | 84 | 9 | 4 | P10S16, P10S17, P10S18, P10S19 |
P11 | 48 | 2 | 2 | P11S20, P11S21 |
P12 | 86 | 4 | 0 | |
P13 | 46 | 9 | 3 | P13S22, P13S23, P13S24 |
P14 | 111 | 4 | 2 | P14S25, P14S26 |
Total | 993 | 69 | 26 |
ID | Year | Author | Title |
---|---|---|---|
P01 | 2020 | Liu et al. [28] | A Dynamic Behavior Control Framework for Physical Human-Robot Interaction |
P02 | 2022 | Farina et al. [29] | AI and society: a virtue ethics approach |
P03 | 2022 | Marchang and Di Nuovo [30] | Assistive Multimodal Robotic System (AMRSys): Security and Privacy Issues, Challenges, and Possible Solutions |
P03S1 | 2021 | Sharkey and Sharkey [31] | We need to talk about deception in social robotics! |
P03S2 | 2017 | Cerrudo [32] | Hacking Robots Before Skynet |
P03S3 | 2017 | Wachter et al. [33] | Transparent, explainable, and accountable AI for robotics |
P04 | 2022 | Lin et al. [34] | Building a speech recognition system with privacy identification information based on Google Voice for social robots |
P04S4 | 2017 | Krupp et al. [35] | Privacy and Telepresence Robotics: What do Non-scientists Think? |
P04S5 | 2017 | Rueben et al. [36] | Framing Effects on Privacy Concerns about a Home Telepresence Robot |
P05 | 2020 | Abate et al. [37] | Contextual trust model with a humanoid robot defense for attacks to smart eco-systems |
P05S6 | 2019 | Silva et al. [38] | Navigation and obstacle avoidance: a case study using Pepper robot |
P05S7 | 2019 | Barra et al. [39] | HiMessage: An Interactive Voice Mail System with the Humanoid Robot Pepper |
P05S8 | 2020 | Abate et al. [40] | Social Robot Interactions for Social Engineering: Opportunities and Open Issues |
P06 | 2020 | Poulsen et al. [41] | Cybersecurity, value sensing robots for LGBTIQ+ elderly, and the need for revised codes of conduct |
P06S10 | 2017 | Clark et al. [42] | Cybersecurity issues in robotics |
P06S11 | 2018 | Cresswell et al. [43] | Health Care Robotics: Qualitative Exploration of Key Challenges and Future Directions |
P06S12 | 2019 | Fosch-Villaronga [44] | Robots, Healthcare, and the Law: Regulating Automation in Personal Care |
P06S13 | 2018 | Fosch-Villaronga et al. [45] | Cloud services for robotic nurses? Assessing legal and ethical issues in the use of cloud services for healthcare robots |
P06S14 | 2019 | Fosch-Villaronga and Millard [46] | Cloud robotics law and regulation: Challenges in the governance of complex and dynamic cyber–physical ecosystems |
P06S15 | 2018 | Poulsen et al. [47] | The Ethics of Inherent Trust in Care Robots for the Elderly |
P06S9 | 2017 | Bryson et al. [48] | Of, for, and by the people: the legal lacuna of synthetic persons |
P07 | 2019 | Zhang et al. [49] | Emotion-aware multimedia systems security |
P08 | 2021 | Saunderson and Nejat [50] | Persuasive robots should avoid authority: The effects of formal and real authority on persuasion in human-robot interaction |
P09 | 2022 | Schneider et al. [51] | Stop Ignoring Me! On Fighting the Trivialization of Social Robots in Public Spaces |
P10 | 2021 | Giansanti and Gulino [52] | The cybersecurity and the care robots: A viewpoint on the open problems and the perspectives |
P10S16 | 2021 | Fosch-Villaronga and Mahler [12] | Cybersecurity, safety and robots: Strengthening the link between cybersecurity and safety in the context of care robots |
P10S17 | 2021 | Vulpe et al. [19] | Enabling Security Services in Socially Assistive Robot Scenarios for Healthcare Applications |
P10S18 | 2020 | Gordon [53] | Building Moral Robots: Ethical Pitfalls and Challenges |
P10S19 | 2018 | Miller et al. [54] | A Case Study on the Cybersecurity of Social Robots |
P11 | 2019 | Akalin et al. [55] | The influence of feedback type in robot-assisted training |
P11S20 | 2019 | Akalin et al. [56] | Evaluating the Sense of Safety and Security in Human–Robot Interaction with Older People |
P11S21 | 2017 | Akalin et al. [57] | An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security |
P12 | 2022 | Randall et al. [58] | Top of the class: Mining product characteristics associated with crowdfunding success and failure of home robots |
P13 | 2020 | Mazzeo and Staffa [59] | TROS: Protecting humanoids ROS from privileged attackers |
P13S22 | 2017 | Breiling et al. [60] | Secure communication for the robot operating system |
P13S23 | 2017 | Dieber et al. [61] | Security for the Robot Operating System |
P13S24 | 2016 | Dieber et al. [62] | Application-level security for ROS-based applications |
P14 | 2021 | Chatterjee et al. [63] | Usage intention of social robots for domestic purpose: From security, privacy, and legal perspectives |
P14S25 | 2019 | Chatterjee [64] | Impact of AI regulation on intention to use robots: From citizens and government perspective |
P14S26 | 2016 | Pagallo [65] | The Impact of Domestic Robots on Privacy and Data Protection, and the Troubles with Legal Regulation by Design |
Affiliation | Papers | Study IDs |
---|---|---|
Academia | 33 | P01, P02, P03, P04, P05, P06, P07, P08, P10, P11, P12, P13, P14, P03S1, P03S3, P04S4, P04S5, P05S6, P05S7, P06S9, P06S10, P06S11, P06S12, P06S13, P06S14, P06S15, P10S16, P10S18, P10S19, P11S20, P11S21, P14S25, and P14S26 |
Industry | 1 | P03S2 |
Mixed | 6 | P09, P05S8, P10S17, P13S22, P13S23, and P13S24 |
# | ID | Cit | Avg.C | # | ID | Cit | Avg.C | # | ID | Cit | Avg.C | # | ID | Cit | Avg.C |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | P06S9 | 292 | 49 | 11 | P06S14 | 56 | 14 | 21 | P04S5 | 32 | 5 | 31 | P12 | 3 | 3 |
2 | P03S3 | 237 | 40 | 12 | P13S22 | 85 | 14 | 22 | P06S13 | 27 | 5 | 32 | P04S4 | 13 | 2 |
3 | P03S1 | 73 | 37 | 13 | P13S24 | 97 | 14 | 23 | P10 | 8 | 4 | 33 | P05S6 | 7 | 2 |
4 | P06S11 | 130 | 26 | 14 | P14S25 | 54 | 14 | 24 | P11 | 17 | 4 | 34 | P05S7 | 9 | 2 |
5 | P13S23 | 133 | 22 | 15 | P10S17 | 25 | 13 | 25 | P11S20 | 17 | 4 | 35 | P06 | 5 | 2 |
6 | P10S18 | 50 | 17 | 16 | P14 | 21 | 11 | 26 | P01 | 9 | 3 | 36 | P09 | 2 | 2 |
7 | P07 | 64 | 16 | 17 | P10S19 | 41 | 8 | 27 | P05 | 10 | 3 | 37 | P11S21 | 11 | 2 |
8 | P03S2 | 88 | 15 | 18 | P02 | 7 | 7 | 28 | P05S8 | 9 | 3 | 38 | P13 | 7 | 2 |
9 | P06S10 | 87 | 15 | 19 | P08 | 14 | 7 | 29 | P06S15 | 14 | 3 | 39 | P14S26 | 17 | 2 |
10 | P06S12 | 57 | 14 | 20 | P03 | 5 | 5 | 30 | P10S16 | 6 | 3 | 40 | P04 | 0 | 0 |
Categories | Studies | Study IDs | |
---|---|---|---|
Types | Solution proposal | 18 | P01, P03, P04, P05, P07, P12, P13, P03S2, P05S6, P05S7, P05S8, P06S10, P06S14, P11S21, P13S22, P13S23, P13S24, and P14S25. |
Philosophical | 13 | P02, P06, P10, P14, P03S1, P03S3, P06S9, P06S12, P06S13, P06S14, P10S16, P10S18, and P14S26. | |
Evaluation | 7 | P09, P11, P03S2, P04S5, P06S15, P11S20, and P13S23. | |
Validation | 3 | P08, P06S15, and P14S25. | |
Experience report | 3 | P06S11, P10S17, and P10S19. | |
Focus group | 1 | P04S4 | |
Methods | Quantitative | 10 | P07, P08, P11, P13, P14, P05S7, P11S20, P11S21, P13S22, and P14S25. |
Qualitative | 15 | P02, P06, P10, P03S1, P03S3, P04S4, P04S5, P06S9, P06S11, P06S12, P06S13, P10S16, P10S18, P10S19, and P14S26. | |
Mixed | 15 | P01, P03, P04, P05, P09, P12, P03S2, P05S6, P05S8, P06S10, P06S14, P06S15, P10S17, P13S23, and P13S24. |
Categories | Studies | Study IDs |
---|---|---|
Cybersecurity of SR and Users | 21 | P03, P05, P06, P07, P10, P12, P13, P03S2, P03S3, P04S4, P05S7, P05S8, P06S9, P06S10, P06S13, P10S16, P10S17, P10S19, P13S22, P13S23, P13S24 |
Data Privacy of Users | 20 | P02, P03, P04, P05, P06, P07, P12, P13, P14, P03S1, P03S2, P03S3, P04S4, P04S5, P05S7, P06S12, P06S14, P10S17, P14S25, P14S26 |
Physical Safety of SR and Users | 11 | P01, P11, P05S6, P06S10, P06S12, P06S13, P10S16, P10S17, P11S20, P11S21, P13S23 |
Reliability and Continuity of SR | 6 | P01, P12, P03S2, P06S10, P10S17, P13S23 |
Legal Framework for SRPS | 14 | P02, P10, P14, P03S2, P03S3, P06S9, P06S11, P06S13, P06S14, P10S16, P10S17, P13S23, P14S25, P14S26 |
Ethical consideration for SRPS | 16 | P02, P06, P09, P10, P03S1, P03S2, P03S3, P04S5, P06S9, P06S11, P06S12, P06S13, P06S15, P10S18, P14S25, P14S26 |
User influence and manipulation | 5 | P08, P09, P03S1, P05S8, and P06S13 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Oruma, S.O.; Ayele, Y.Z.; Sechi, F.; Rødsethol, H. Security Aspects of Social Robots in Public Spaces: A Systematic Mapping Study. Sensors 2023, 23, 8056. https://doi.org/10.3390/s23198056