Article

From Framework to Reliable Practice: End-User Perspectives on Social Robots in Public Spaces

by Samson Ogheneovo Oruma 1,2, Ricardo Colomo-Palacios 3,* and Vasileios Gkioulos 2

1 Department of Computer Science and Communication, Østfold University College, 1756 Halden, Norway
2 Department of Information Security and Communication Technology, Norwegian University of Science and Technology, 2802 Gjøvik, Norway
3 Escuela Técnica Superior de Ingenieros Informáticos, Universidad Politécnica de Madrid, 28660 Madrid, Spain
* Author to whom correspondence should be addressed.
Systems 2026, 14(2), 137; https://doi.org/10.3390/systems14020137
Submission received: 15 December 2025 / Revised: 7 January 2026 / Accepted: 27 January 2026 / Published: 29 January 2026

Abstract

As social robots increasingly enter public environments, their acceptance depends not only on technical robustness but also on ethical integrity, accessibility, transparency, and consistent system behaviour across diverse users. This paper reports an in situ pilot deployment of an ARI social robot functioning as a university receptionist, designed and implemented in alignment with the SecuRoPS framework for secure, ethical, and reliable social robot deployment. Thirty-five students and staff interacted with the robot in a real public setting and provided structured feedback on safety, privacy, usability, accessibility, ethical transparency, and perceived reliability. The results indicate strong user confidence in physical safety, data protection, and regulatory compliance while revealing persistent challenges related to accessibility and interaction dynamics. These findings show that reliability in public-facing robotic systems extends beyond fault-free operation to include equitable and consistent user experience across contexts. Beyond reporting empirical outcomes, the study contributes in three key ways. First, it demonstrates a reproducible method for operationalising lifecycle governance frameworks in real-world deployments. Second, it provides new empirical insights into how trust, accessibility, and transparency are experienced by end users in public spaces. Third, it delivers a publicly available, open-source GitHub repository containing reusable templates for ARI robot applications developed using the PAL Robotics ARI SDK (v23.12), lowering technical entry barriers and supporting reproducibility. By integrating empirical evaluation with practical system artefacts, this work advances research on reliable intelligent environments and provides actionable guidance for the responsible deployment of social robots in public spaces.

1. Introduction

Social robots are no longer confined to laboratories or exhibitions; they are increasingly deployed in public spaces such as shopping malls [1], airports [2], hospitals [3], and educational institutions [4]. In these environments, robots are expected to serve as assistants, guides, and information providers, complementing existing human service and care infrastructures [5]. While these applications highlight the promise of social robotics, they also raise profound societal concerns. Issues of privacy, trust, accessibility, and ethical behaviour become especially pressing in uncontrolled, high-traffic environments where diverse users interact with the technology [6].
Existing scholarly and industrial frameworks have begun to address the ethical and technical challenges associated with social robots. However, most evaluations remain conceptual or expert-driven, with limited empirical validation involving end users in real-world contexts [7,8,9]. For example, IEEE’s Ethically Aligned Design provides high-level principles for responsible AI, but offers limited guidance on how these principles can be operationalised in deployed systems [10]. Similarly, security-oriented frameworks such as NIST CSF or MITRE ATT&CK focus primarily on digital threats and vulnerabilities, providing insufficient coverage of physical safety, bystander privacy, accessibility, and other human-centred dimensions that are integral to embodied robots operating in public spaces [11,12].
At the same time, recent advances in robotics have demonstrated high levels of technical robustness and reliability in controlled, task-oriented domains, such as complex dual-arm assembly using reinforcement learning and sim-to-real transfer [13]. While such work highlights important progress in system-level reliability, it is largely situated in structured industrial environments and does not address the sociotechnical challenges that arise when robots interact with heterogeneous users in open public settings. As a result, a persistent gap remains between theoretical guidance, advances in technical robustness, and the lived experiences of end users interacting with social robots in practice.
To address this gap, the SecuRoPS (Security Framework for Social Robots in Public Spaces) framework was proposed as a lifecycle-oriented approach that integrates security, safety, ethical compliance, and user-centred design [7]. Previous work has established its conceptual foundations and architectural views [14,15]. However, little empirical evidence exists on how systems developed in alignment with SecuRoPS are perceived by end users. In particular, it remains unclear whether the framework’s principles translate into perceived trustworthiness, inclusivity, and reliable operation in real-world public deployments.
This study addresses this limitation through an in situ pilot deployment of an ARI social robot functioning as a receptionist at Østfold University College. Thirty-five participants interacted with the robot and provided structured feedback on safety, privacy, usability, accessibility, ethical transparency, and perceived reliability. In this paper, trust is understood as users’ subjective confidence in the robot’s intentions and behaviour; ethical transparency refers to the visibility and clarity of the robot’s data practices and decision logic; and reliability is conceptualised as the system’s ability to deliver consistent and dependable performance across diverse users and interaction contexts. These concepts are treated as interrelated properties of a sociotechnical system rather than isolated technical attributes.
Guided by this framing, the study is structured around the following research questions:
  • RQ1: How do end users perceive safety, privacy, and ethical transparency when interacting with a public-facing social robot?
  • RQ2: How do users evaluate accessibility and inclusivity in a real-world robot deployment?
  • RQ3: To what extent do SecuRoPS-aligned design choices influence user trust and perceived reliability?
  • RQ4: What practical reliability lessons emerge from deploying a social robot in a public educational environment?
The contribution of this paper is fourfold.
1. Empirical Validation in Public Space—It presents one of the first in situ, end-user validations of a lifecycle-based framework for secure and ethical robot deployment, conducted in a real university reception environment.
2. Societal Insights into Trust, Ethics, and Accessibility—It provides rich qualitative and quantitative evidence showing how users perceive safety, privacy, inclusivity, and transparency when interacting with a public-facing robot, highlighting gaps in accessibility and the role of societal narratives in shaping trust.
3. Bridging Frameworks and Practice—It demonstrates how theoretical principles of ethical and secure design can be operationalised in practice, offering lessons that extend beyond compliance to anticipatory and inclusive design strategies.
4. Open-Source Technical Contribution—It delivers a publicly available GitHub repository containing reusable templates and implementation resources for the ARI platform, lowering barriers for beginners in robotics research and enabling reproducibility across institutions.
The structure of the paper is as follows: Section 2 reviews related work on security, ethics, reliability, and inclusivity in social robotics. Section 3 describes the methodology, including the research design, system implementation, and ethical considerations. Section 4 presents quantitative and qualitative results from the end-user study. Section 5 discusses the findings in relation to responsible robotics, system reliability, and human–robot interaction, including limitations and threats to validity. Finally, Section 6 concludes the paper and outlines directions for future research on reliable intelligent environments.

2. Related Work

2.1. Social Robots in Public Spaces: Contexts and Functions

Social robots have transitioned from controlled laboratories to everyday public settings, including educational institutions [4], healthcare facilities [3], transportation hubs [16], and retail environments [17]. In universities and classrooms, work has examined acceptance, usability, and pedagogical value, highlighting the importance of interaction design and multi-modality for engagement [4,18]. In hospitals and care environments, reviews emphasize technical and organizational requirements alongside ethical constraints [3]. Airport and retail deployments further underscore the need for robust interaction flows and clear role expectations in crowded, time-pressured spaces [1,2,17]. Across these domains, a recurring theme is that deployment context, with its norms, crowd dynamics, and user goals, shapes perceptions of utility, safety, and trust.

2.2. Ethics, Privacy, and Responsible Social Robotics

Contemporary scholarship stresses that acceptance depends not only on technical reliability but also on ethical integrity, transparency, and inclusivity [19,20,21]. Practical guidance has begun to emerge: Callari et al. have proposed expert-informed ethical frameworks for human–robot collaboration [22], while the IEEE Ethically Aligned Design principles foreground human well-being and transparency in AI systems [10]. Yet, these resources often remain high-level, with limited evidence on how to operationalize principles in situated deployments. Recent work on data minimization offers concrete tactics to reduce unnecessary data collection and processing [23], aligning with GDPR’s spirit, but again, implementations in embodied public-space robots are still under-documented.

2.3. Cybersecurity Frameworks vs. Embodied, Public-Facing Robots

Mainstream cybersecurity frameworks, such as the NIST Cybersecurity Framework (CSF), MITRE ATT&CK, and the Cyber Kill Chain, offer robust models for risk assessment, threat classification, and incident response in digital infrastructures [11,12,24,25]. They provide well-established taxonomies of adversarial behavior, tools for vulnerability management, and structured incident-handling processes that are highly effective for enterprise IT systems.
However, these frameworks were not designed for embodied, interactive agents operating in open public environments. Social robots introduce risks that extend beyond traditional digital threats, encompassing physical safety, bystander privacy, accessibility, inclusivity, and broader human factors integral to HRI. For example, an embodied robot must simultaneously protect against network intrusion and ensure that its physical presence does not cause harm or exclusion. These dimensions of reliability and trustworthiness are largely absent from conventional IT security models.
Practical secure-by-design measures in robotics, therefore, require additional layers of consideration, including runtime containment (e.g., containerized execution), hardened communication channels, and supply-chain integrity. Recent evidence on vulnerabilities in container ecosystems demonstrates both the opportunities and risks of such approaches [26]. Bridging IT-centric cybersecurity with sociotechnical safety thus remains an open challenge for public-space robotics.
The SecuRoPS framework responds to this gap by extending security concerns into a lifecycle perspective that explicitly incorporates resilience, ethical safeguards, inclusivity, and user trust. By integrating technical robustness with human-centered design principles, SecuRoPS contributes a more holistic model of reliable deployment—one that is sensitive not only to digital reliability but also to the embodied, interactive, and societal dimensions of robots in public environments.

2.4. Trust, Acceptance, and Transparency in HRI

A large body of HRI research examines the antecedents of trust (appearance, voice, motion, transparency) and acceptance (perceived usefulness, ease of use, social presence). Studies in customer service and hospitality highlight the role of rapport, interaction naturalness, and expectation management [27,28]. Work on mind perception and implicit associations shows how subtle design cues shape whether users ascribe agency or benevolence to robots [29]. Philosophical and conceptual critiques warn that trust without trustworthiness is possible if surface cues are decoupled from robust safeguards, making verifiable transparency essential [30]. In practice, research consistently shows that visible data practices, plain-language explanations, opt-in choices, and clear signals of GDPR compliance improve user confidence [22].

2.5. Accessibility and Inclusivity in Public Deployments

Accessibility is often underrepresented in evaluations, despite being a central component of equitable public services. Educational deployments report strong usability when interfaces are clear and multimodal [18], yet real-world cases reveal persistent barriers: screen height for wheelchair users, insufficient font or contrast, lack of speech input or sign language, and limited support for non-dominant languages [31]. As robots transition into public infrastructures (libraries, hospitals, transport hubs), accessibility must evolve from a compliance checklist to an anticipatory design principle that proactively accommodates diverse abilities and contexts.

2.6. From Threat Landscape to Lifecycle Governance: SecuRoPS

Survey and mapping studies have characterized the threat landscape for social robots in public spaces, documenting attack surfaces, privacy risks, and organizational challenges [32]. Building on this evidence, SecuRoPS was proposed as a lifecycle governance framework that integrates risk assessment, ethical and legal scrutiny, usability, and stakeholder engagement across phases from analysis to retirement [14]. Subsequent architectural work elaborated business, system, and security views tailored to public-space deployments [15]. Yet, despite conceptual maturation, end-user validation of such frameworks in real-world conditions has been scarce.

2.7. Reproducibility, Tooling, and Entry Barriers

HRI deployments often rely on proprietary SDKs and complex integration stacks (web front-end + ROS back-end), creating entry barriers for new researchers and institutions. Educational texts and tutorials support onboarding (e.g., ROS introductions) [33], but reusable, open templates specific to public-space deployments remain limited. Open-source, well-documented exemplars are vital to reproducibility and to translating ethical and security principles into deployable artefacts that newcomers can adapt.

2.8. Positioning This Study

Against this backdrop, the present work contributes along two axes. First, it provides in situ, end-user validation of selected SecuRoPS phases in a public educational setting, connecting framework ideals to user perceptions of safety, privacy, usability, accessibility, and transparency. Second, it delivers a publicly available GitHub repository with reusable ARI templates and data-handling modules, thereby lowering entry barriers for robotics beginners and supporting reproducibility in HRI. In doing so, this study addresses the documented gaps between high-level guidance and operational practice, as well as between lab-based evaluations and the complexities of public-space interaction.

3. Methodology

3.1. Research Design

This study employed an exploratory pilot design to investigate how end users perceive a social robot deployed in a real-world educational environment. Pilot studies are well suited for examining emerging sociotechnical systems, where the aim is to gain early empirical insights rather than to test predefined hypotheses or achieve statistical generalisability. An overview of the research process is illustrated in Figure 1, outlining the sequential steps from ethical approval through deployment, interaction, and data analysis.
Selected phases of the SecuRoPS framework for secure, ethical, and reliable social robot deployment were operationalised [7]. The study focused on user perceptions of safety, privacy, usability, accessibility, ethical transparency, and perceived reliability. By situating the robot in a university reception area, the research captured authentic interactions in a public-facing environment rather than a controlled laboratory setting, allowing system performance and user experience to be evaluated under realistic conditions.

3.2. Ethical Considerations

The research protocol was reviewed and approved by the Norwegian Agency for Shared Services in Education and Research (Sikt, reference number 328081). Participants were fully informed about the purpose of the study, the voluntary nature of participation, their right to withdraw at any time, and the intended use of the collected data. In line with GDPR principles, data minimisation was strictly applied: only the optional name and email fields were presented, and these were not required for participation.
To further protect privacy and support ethical transparency, all cameras on the robot were physically masked to prevent inadvertent recording [34]. These measures ensured that privacy protection and transparency were not only policy commitments but also embedded into the technical and procedural design of the study.

3.3. Participants

Thirty-five participants were recruited from Østfold University College, comprising both staff (48.6%) and students (45.7%) across multiple faculties. Recruitment was voluntary and open, resulting in diversity in age, gender, academic background, and prior experience with social robots. The sample size is consistent with exploratory human–robot interaction studies, where the emphasis is on depth of user feedback and contextual understanding rather than statistical inference [29].

3.4. System Implementation

The study was conducted using an ARI social robot programmed to function as a university receptionist. As shown in Figure 2, the robot was deployed in a stationary configuration within a public reception area. It delivered bilingual (English and Norwegian) information about study programmes and campus life through web-based slides, videos, and interactive presentations.
A custom virtual keyboard was integrated into the robot’s interface, allowing participants to optionally enter their name and email address. This feature was designed to evaluate perceptions of transparency and trust in data collection practices. All entries were stored locally on the robot, deleted upon request, and excluded from the research analysis.
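The data-handling policy described above (optional fields, local storage, deletion on request, exclusion from analysis) can be sketched as a minimal in-memory store. This is an illustrative sketch only; the class and method names are hypothetical and do not reflect the actual repository code.

```python
class LocalEntryStore:
    """Hypothetical sketch of the robot's local, deletable entry store."""

    def __init__(self):
        self._entries = {}   # entry_id -> {"name": ..., "email": ...}
        self._next_id = 0

    def add_entry(self, name=None, email=None):
        # Both fields are optional: data minimisation means an entry
        # may be empty, and participation is never blocked by it.
        entry_id = self._next_id
        self._next_id += 1
        self._entries[entry_id] = {"name": name, "email": email}
        return entry_id

    def delete_entry(self, entry_id):
        # Deletion on request removes the record entirely rather than
        # flagging it, so no personal data lingers on the robot.
        return self._entries.pop(entry_id, None) is not None

    def export_for_analysis(self):
        # Personal entries are excluded from the research analysis;
        # only an anonymous count is ever exported.
        return {"entry_count": len(self._entries)}
```

The key design choice this models is that deletion is destructive and export is aggregate-only, so the store can never leak name or email values into the analysis pipeline.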
The application was implemented using standard web technologies (HTML, CSS, and JavaScript) with a ROS-based backend. The complete software stack is made publicly available through a dedicated GitHub repository [35]. This repository provides reusable templates and example implementations intended to lower entry barriers for researchers and educators new to proprietary robotic platforms such as ARI. In this sense, the repository constitutes a methodological contribution, enabling reproducibility and supporting reliable deployment practices beyond the present study.
Technical safeguards were applied in accordance with security-by-design principles, including VPN-enabled communication, restricted execution environments, and firewall protections [15]. These measures contributed not only to security and privacy but also to system reliability by reducing unplanned behaviour, limiting error propagation, and supporting stable operation throughout the deployment.

3.5. Procedure

The study followed four sequential phases:
  • Phase 1: Informed Consent. Participants accessed a digital Participant Information Sheet via a QR code and provided informed consent using the Nettskjema platform (see Appendix A).
  • Phase 2: Demographic Data Collection. Participants completed a demographic questionnaire covering age, gender, faculty affiliation, and prior experience with social robots via a second secure Nettskjema form (see Appendix B).
  • Phase 3: Interaction with the ARI Robot. Participants interacted with the robot as it delivered institutional information using multiple modalities, including web-based presentations, image slideshows, videos, and PowerPoint-style content. Participants were optionally invited to test the virtual keyboard and data-entry functionality, following the interaction guidelines provided in Appendix C.
  • Phase 4: Post-Interaction Assessment. Participants completed a structured feedback survey via Nettskjema, assessing perceptions of physical safety, data privacy, physical and cybersecurity protections, interface usability, accessibility, ethical transparency, and GDPR compliance (see Appendix D).

3.6. Data Management and Analysis

All consent forms, demographic data, and survey responses were stored securely using Nettskjema and Østfold University College’s Office 365 infrastructure. Optional personal data entered during interaction were stored locally on the robot and excluded from analysis. All datasets were anonymised prior to processing.
Given the exploratory nature of the pilot study and the modest sample size, quantitative responses were analysed descriptively to identify overall trends in user perceptions. This approach is appropriate for early-stage investigations aimed at informing design and deployment practices rather than testing causal relationships. Qualitative feedback was analysed thematically to identify recurring patterns related to trust, usability, accessibility, ethical transparency, and perceived reliability.
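The descriptive analysis described above reduces to computing per-criterion endorsement rates (the share of positive answers, with blanks ignored). The following sketch shows one plausible way to do this; the function name and the example data are purely illustrative and are not the study's dataset.

```python
from collections import Counter

def endorsement_rates(responses):
    """Share of positive ("yes") answers per criterion, in percent.

    `responses` maps each criterion to a list of "yes"/"no"/None
    answers; None entries (blank answers) are ignored.
    """
    rates = {}
    for criterion, answers in responses.items():
        counts = Counter(a for a in answers if a is not None)
        total = counts["yes"] + counts["no"]
        rates[criterion] = round(100 * counts["yes"] / total, 1) if total else None
    return rates

# Purely illustrative responses -- not the study's data.
example = {
    "physical_safety": ["yes"] * 8 + ["no"] * 2,
    "accessibility":   ["yes"] * 6 + ["no"] * 4 + [None],
}
print(endorsement_rates(example))
# {'physical_safety': 80.0, 'accessibility': 60.0}
```

Treating blank answers as missing rather than negative keeps the denominator honest, which matters at this sample size, where a single response shifts a rate by roughly three percentage points.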

4. Results

4.1. Participant Demographics

The study involved 35 participants, comprising a balanced gender distribution (51% male, 49% female) and a broad age range. Participants included university staff (48.6%) and students (45.7%), with representation across faculties such as computer science, teacher education, and health sciences. Most participants (62.9%) reported no prior interaction with social robots, and 68.6% had never used robots for educational purposes. This diversity enabled the study to capture perspectives from both novice and moderately experienced users interacting with a public-facing robot for the first time.

4.2. Quantitative Findings

Table 1 summarises participant responses across key evaluation criteria, while Figure 3 visualises the overall distribution of positive and negative assessments.
Overall, participants expressed high levels of confidence in the robot’s physical safety, data privacy, and regulatory compliance. In particular, GDPR compliance (90.0%) and physical security (89.7%) received the strongest endorsements, indicating that the implemented safeguards and transparency measures were effective in fostering trust. Cybersecurity protections were also rated positively by the majority of participants (84.8%).
Usability and ethical behaviour were generally well received, although to a lesser extent than security-related dimensions. Accessibility received the lowest overall endorsement (60.6%), marking it as the most prominent area of concern across quantitative measures.

4.3. Exploratory Comparisons

Given the exploratory nature of the pilot study and the modest sample size, additional analyses were conducted at a descriptive level to examine potential patterns in user responses. Participants with prior experience interacting with social robots tended to report slightly higher trust and usability ratings than first-time users, although these differences were not statistically tested. Similarly, staff participants expressed marginally higher confidence in data privacy and regulatory compliance compared to students, while students were more critical of interaction pacing and interface responsiveness.
These exploratory patterns suggest that familiarity with robots and institutional context may influence user perceptions, supporting the need for adaptive interaction strategies in public deployments. However, these observations are indicative rather than conclusive and are interpreted cautiously.

4.4. Qualitative Insights

Qualitative feedback from open-ended survey responses provided additional depth and contextual understanding. Four recurring themes were identified:
  • Trust and Perceived Safety: Many participants described the robot as “safe,” “friendly,” and “non-threatening.” The robot’s appearance, stationary deployment, and bilingual interaction were frequently cited as factors contributing to these perceptions. These findings reinforce the importance of physical design cues and multimodal communication in establishing trust.
  • Usability and Interaction Flow: While the interface was generally perceived as intuitive, participants suggested improvements such as clearer navigation cues, a larger on-screen keyboard, and a more natural voice. Several respondents recommended incorporating motion or voice control to enhance engagement and reduce the perception of static interaction.
  • Accessibility and Inclusivity: Accessibility concerns emerged consistently. Participants highlighted challenges related to screen height for wheelchair users and the absence of alternative input modalities such as speech input or sign language support. These comments indicate that accessibility limitations were not incidental but systematically experienced by certain user groups.
  • Transparency in Data Practices: Although participants acknowledged GDPR compliance, several suggested that privacy assurances should be communicated more explicitly during interaction (e.g., through verbal explanations or on-screen prompts). This underscores the importance of visible and ongoing transparency rather than implicit compliance alone.
Together, these qualitative insights complement the quantitative findings by explaining why certain dimensions (particularly accessibility and interaction dynamics) received lower ratings despite strong security and privacy endorsements.

4.5. Results in Relation to the Research Questions

The findings provide empirical answers to the research questions introduced in Section 1. With respect to RQ1, participants generally perceived the robot as safe, privacy-preserving, and ethically transparent, indicating that SecuRoPS-aligned safeguards were effective from an end-user perspective. Addressing RQ2, accessibility emerged as the least supported dimension, revealing persistent inclusivity challenges in public-space robot deployment. In relation to RQ3, trust and perceived reliability were strongly associated with visible security measures, regulatory compliance, and transparent data practices, while limitations in interaction dynamics negatively affected perceptions for some users. Finally, RQ4 highlights that reliability in public deployments is multifaceted, encompassing not only technical robustness but also consistent usability and accessibility across diverse users.

4.6. Mapping Pilot Study Activities to the SecuRoPS Framework

The pilot study conducted at Østfold University College aligns with all fourteen phases of the SecuRoPS framework, as summarised in Table 2 [7]. The mapping demonstrates that the deployment addressed activities spanning analysis, design, implementation, testing, deployment, and retrospective learning. This comprehensive coverage supports the framework’s applicability as a practical guide for real-world, public-facing social robot deployments rather than a purely conceptual model.

5. Discussion

This study examined an in situ deployment of a public-facing social robot and evaluated end-user perceptions of safety, privacy, usability, accessibility, ethical transparency, and perceived reliability. The results provide empirical answers to the research questions stated in Section 1 and extend prior work on SecuRoPS by moving from conceptual guidance to real-world operationalisation [7]. Overall, participants expressed strong confidence in the robot’s physical safety, data protection, and regulatory compliance, while identifying persistent limitations related to accessibility and interaction dynamics. These findings reinforce that trustworthy public deployments are sociotechnical achievements: they depend on both technical safeguards and user-facing interaction qualities that shape trust and perceived reliability [22,30].
Importantly, the scientific contribution of this paper is not only that it validates a framework in practice, but that it (i) demonstrates a reproducible method for operationalising lifecycle governance in a real public setting, (ii) surfaces concrete empirical tensions between security assurances and inclusive access, and (iii) provides a reusable open-source deployment pattern for researchers working with proprietary social robot platforms [35]. Together, these contributions advance the state of the art by clarifying what “secure and ethical deployment” means when experienced by end users in context.

5.1. Trust, Safety, and Ethical Transparency

Trust is a cornerstone of social robot acceptance in uncontrolled public environments [30]. In this pilot, participants rated the robot positively for physical safety (80%) and ethical transparency (82.4%), suggesting that design cues (friendly appearance, bilingual communication) and protective measures (camera masking, local data handling) supported user confidence. These findings align with evidence that multimodal interfaces and approachable social cues can reduce perceived risk and increase acceptance [36,37].
However, participants’ qualitative feedback also clarifies that ethical transparency is experienced not merely as the absence of harm, but as the visibility of assurances during interaction. Even when participants acknowledged GDPR compliance, several requested explicit communication of privacy safeguards (e.g., the robot stating compliance or displaying data-use prompts). This supports calls for real-time, user-facing transparency mechanisms that complement legal compliance with interaction-level explanations [27,38]. From a systems perspective, transparency is therefore not only a governance requirement but also a functional component shaping perceived trustworthiness.

5.2. Accessibility and Inclusivity as a Persistent Sociotechnical Challenge

Accessibility received the lowest endorsement (60.6%), with participants highlighting barriers for wheelchair users, visually impaired individuals, and users needing alternative communication modalities such as speech input or sign language. These concerns suggest that accessibility is not an isolated design oversight; rather, it reflects an ongoing challenge in deploying robots as public-facing service systems. While compliance-driven measures (e.g., readable text, bilingual support) are necessary, they are insufficient for ensuring equitable access. Inclusive design requires anticipatory decisions that accommodate heterogeneous abilities and interaction preferences from the outset [39,40].
This finding has broader theoretical implications for responsible AI and public infrastructure. When robots are introduced into shared spaces such as hospitals, libraries, or transport hubs, accessibility becomes a question of system legitimacy: if some users cannot effectively engage, the deployment risks reinforcing digital exclusion. Therefore, accessibility should be treated as a lifecycle property (similar to safety and security) and evaluated continuously as environments and user groups change. The results also suggest that accessibility shortcomings can degrade perceived reliability, because system performance becomes inconsistent across different user conditions.

5.3. Privacy, GDPR, and “Privacy-by-Interaction”

The high endorsement of GDPR compliance (90%) indicates that participants appreciated data minimisation and privacy-preserving design choices (e.g., masked cameras and optional data entry). Yet, the qualitative feedback reinforces that compliance does not automatically translate into trust; participants sought clearer, more visible explanations of what data are collected, how they are stored, and why. This supports the notion of privacy-by-interaction, where privacy assurances are embedded into the robot’s interaction flow rather than communicated only through pre-study protocols [22,27].
In public deployments, where anxieties about surveillance are widespread, visible privacy practices may be essential for long-term acceptance and perceived reliability. From a governance standpoint, this suggests that privacy mechanisms should be evaluated not only as technical safeguards but also as communicative features that influence user perceptions.
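The privacy-by-interaction idea can be illustrated with a minimal sketch. None of the names below come from the deployed system or the PAL Robotics ARI SDK; they are hypothetical, showing only how a consent request and an end-of-session review can be embedded in the interaction flow itself rather than left to pre-study protocols.

```python
from dataclasses import dataclass, field

@dataclass
class SessionData:
    """Data collected during one interaction; discarded unless the user consents."""
    fields: dict = field(default_factory=dict)

def request_personal_data(session: SessionData, key: str, value: str, accepted: bool) -> None:
    # The robot asks before storing anything; declining leaves the session unchanged.
    if accepted:
        session.fields[key] = value

def end_of_session_review(session: SessionData, keep: bool) -> dict:
    """Review collected data with the user and honour a final keep/discard choice."""
    if not keep:
        session.fields.clear()  # deletion is itself part of the dialogue
    return dict(session.fields)

# Example flow: user gives a name, declines email, then discards everything.
s = SessionData()
request_personal_data(s, "name", "Alex", accepted=True)
request_personal_data(s, "email", "alex@example.org", accepted=False)
print(end_of_session_review(s, keep=False))  # {}
```

The design choice mirrors the deployment described above: data entry is optional, and the final keep-or-discard prompt makes the privacy safeguard visible at the moment it matters.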

5.4. User Expectations, Bias, and Cultural Narratives

Despite limited prior exposure to robots, several participants expressed concerns about surveillance and misuse of data. This indicates that user perceptions are shaped not only by direct experience but also by broader cultural narratives and media portrayals of robots and AI. Such narratives can generate baseline scepticism even when safeguards are present [41]. Addressing these expectations requires more than technical protection; it also requires public-facing communication strategies that demystify robot capabilities and clarify boundaries of data use. For responsible deployment in public spaces, trust is therefore partly a cultural and institutional phenomenon.

5.5. Interaction Dynamics and the Attention Economy

User impatience emerged as a recurring behaviour, with some participants skipping content and ignoring timers or navigation prompts. This reflects the attention economy of public environments, where interactions compete for users’ limited time and attention [42,43]. Even when a system operates securely and reliably, perceived usefulness may diminish if interactions are not concise, adaptive, and immediately relevant.
This finding suggests a design implication: public-space robots should prioritise critical information early and incorporate adaptive pacing (e.g., optional “quick paths” or interaction shortening when disengagement is detected). Such design strategies may improve perceived reliability by ensuring a consistent user experience even under limited attention conditions.
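The adaptive-pacing strategy can be sketched in a few lines. The segment names and the skip threshold below are hypothetical, not taken from the deployed application; the sketch only illustrates collapsing the remaining script onto a “quick path” once disengagement is detected.

```python
# Full interaction script and a shortened "quick path" with critical content first.
FULL_SCRIPT = ["welcome", "campus_overview", "departments", "courses", "local_events", "closing"]
QUICK_PATH = ["welcome", "campus_overview", "closing"]

def next_segments(remaining: list, skips: int, skip_threshold: int = 2) -> list:
    """Switch to the quick path once the user has skipped enough segments."""
    if skips >= skip_threshold:
        # Disengagement detected: keep only segments that also appear on the quick path.
        return [seg for seg in remaining if seg in QUICK_PATH]
    return remaining

# After two skips, the remaining plan collapses to quick-path segments only.
print(next_segments(FULL_SCRIPT[1:], skips=2))  # ['campus_overview', 'closing']
```

A real system would feed the skip counter from touchscreen events or gaze tracking; the point is that pacing adapts within the session instead of forcing every user through the full script.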

5.6. Contribution to Responsible Robotics and HRI

Compared with many HRI studies that rely primarily on lab-based simulations or expert evaluations, this work demonstrates the value of in situ, end-user validation. The deployment captured authentic behaviours, contextual expectations, and sociocultural concerns that are difficult to reproduce in controlled environments. Moreover, the study provides a reproducible operational pattern for public deployments aligned with lifecycle governance, contributing methodological clarity for future research.
A notable technical contribution is the publicly available GitHub repository containing reusable templates and implementation resources for the ARI platform [35]. This resource reduces entry barriers for researchers and educators, supports reproducibility, and enables adaptation across institutions. In this sense, the contribution extends beyond reporting results: it provides actionable artefacts that support reliable and responsible deployment practices in the broader community.

5.7. Comparison with Established Security Frameworks

Traditional frameworks such as NIST, MITRE ATT&CK, and the Cyber Kill Chain provide robust cybersecurity models for digital infrastructures [11,12,24]. However, they do not directly address embodied risks (physical safety, accessibility, public trust) that are central to robots operating in open environments. SecuRoPS extends these models by integrating stakeholder dialogue, user testing, ethical scrutiny, and iterative learning into a lifecycle approach [7]. The present study strengthens this positioning by showing how these lifecycle considerations are experienced by end users, thereby linking sociotechnical governance to real-world perceptions.

5.8. Reliability and Lessons Learned

Reliability in intelligent environments concerns not only fault-free technical operation but also consistent and dependable performance across diverse users and contexts [3,44]. In this study, technical safeguards such as containerised execution, firewall protection, VPN-enabled communication, and local data storage supported stable operation and reduced the likelihood of unplanned behaviours. These measures exemplify reliability-by-design and fault containment within the system architecture [26,33].
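The fault-containment aspect of this reliability-by-design approach can be illustrated with a minimal sketch. This is not code from the deployed system; the step names and supervisor logic are hypothetical, showing only the general pattern of isolating each interaction step so that a failure is logged and contained rather than allowed to end the whole session.

```python
import logging

def run_with_containment(steps, max_retries: int = 1):
    """Run steps in order; a failing step is retried, then skipped, never fatal."""
    completed, failed = [], []
    for name, fn in steps:
        for attempt in range(max_retries + 1):
            try:
                fn()
                completed.append(name)
                break
            except Exception as exc:  # contain the fault to this step
                logging.warning("step %s failed (attempt %d): %s", name, attempt + 1, exc)
        else:
            failed.append(name)  # skipped after exhausting retries

    return completed, failed

def greet():
    pass  # stand-in for a step that succeeds

def camera():
    raise RuntimeError("sensor timeout")  # stand-in for a faulty step

print(run_with_containment([("greet", greet), ("camera", camera), ("info", greet)]))
# (['greet', 'info'], ['camera'])
```

Containerised execution applies the same principle one level up: the application runs in an isolated environment that can be restarted cleanly without affecting the host or stored data.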
At the same time, reliability gaps emerged through the interaction context. Stationary deployment reduced physical risk but limited adaptivity, and some participants perceived the robot as “too static.” Accessibility barriers revealed that system performance was not experienced uniformly, which, from a reliability perspective, represents a consistency failure rather than a functional failure [18,20]. Furthermore, user impatience showed that perceived reliability can decline when interaction pacing does not match real-world attention constraints [27,29]. These lessons emphasise that reliability must be considered alongside inclusivity and interaction design in public deployments.

5.9. Limitations

This study has several limitations that bound interpretation and motivate future work. First, the study was conducted in a single institutional context (Østfold University College), which limits generalisability to other public environments such as hospitals, transport hubs, or retail settings. Second, the sample size (n = 35) is appropriate for exploratory HRI research [29], but it limits statistical inference and motivates cautious interpretation of subgroup patterns. Third, interactions were short-term and task-limited, preventing assessment of long-term trust evolution and repeated-use behaviour. Fourth, the robot was deployed as a stationary unit without autonomous navigation or expressive gestures, constraining interaction richness. Fifth, familiarity with university content may have influenced engagement, particularly for staff participants. Finally, reliance on self-reported measures introduces potential response bias. To mitigate this, responses were anonymised and triangulated with qualitative feedback; however, future studies should incorporate behavioural logs and longer deployments to strengthen evidence on sustained acceptance.

5.10. Threats to Validity and Mitigation Measures

Like all empirical studies, this work is subject to threats to validity. Anticipating and mitigating these threats is essential for responsible interpretation [45,46].
  • Internal Validity: Social desirability bias may have influenced responses, particularly because the study took place within the participants’ institution, and the researcher was present. This was mitigated through anonymous Nettskjema data collection and reminders that negative feedback was valuable.
  • Construct Validity: Concepts such as cybersecurity, privacy, and transparency may be interpreted differently by participants. Survey questions were written in plain language with clarifications distinguishing key constructs, and responses were triangulated with open-ended feedback.
  • External Validity: Results are context-dependent and based on a single deployment setting. Participant diversity improved representativeness, but replication across additional public contexts and larger samples is needed.
  • Technical Validity: The robot’s limited mobility and expressiveness may have affected perceptions of engagement and utility. These constraints were transparently reported, and participant suggestions were captured as priorities for future iterations. Technical safeguards were consistently enforced, ensuring that perceptions of privacy and security were grounded in actual system behaviour.

6. Conclusions

This paper presented an in situ, end-user study of a public-facing social robot deployed in a university reception environment, examining how secure, ethical, and user-centred design principles are experienced in practice. By operationalising selected phases of the SecuRoPS framework in a real-world setting, the study demonstrated that social robots can be perceived as safe, privacy-preserving, and ethically transparent when lifecycle governance, usability, and regulatory compliance are explicitly integrated into deployment. At the same time, the findings reveal persistent challenges related to accessibility, inclusivity, and interaction dynamics, underscoring that responsible deployment extends beyond technical safeguards to encompass equitable and consistent user experience.
A key contribution of this work lies in advancing the understanding of reliability in intelligent environments. The results show that reliability is not limited to fault-free technical operation, but is co-produced through interaction design, accessibility accommodations, and visible transparency practices. While technical measures such as containerised execution, firewall protection, and local data storage supported robust system operation, perceived reliability was diminished when interaction pacing or accessibility failed to meet the needs of diverse users. This highlights reliability as a sociotechnical property that must be evaluated across users, contexts, and conditions rather than solely through system performance metrics.
In addition to its empirical insights, the study makes a practical methodological contribution to the robotics community through a publicly available GitHub repository containing reusable templates and implementation resources for the ARI platform. By lowering technical entry barriers and supporting reproducibility, this resource enables researchers, educators, and institutions to replicate, adapt, and extend public-space robot deployments. In doing so, the paper moves beyond reporting results to providing actionable artefacts that support responsible and reliable system design in practice.
Taken together, these contributions bridge the gap between abstract ethical frameworks and lived public experience. They provide evidence-based guidance for designers, roboticists, and policymakers seeking to integrate social robots into public environments in ways that are trustworthy, inclusive, and dependable. Future research should extend this work through longitudinal deployments, richer behavioural measurements, and studies across multiple public domains such as healthcare facilities, libraries, and transport hubs. Such efforts will be essential for understanding how trust, accessibility, and reliability evolve over time, and for ensuring that social robots function not only as technical systems, but as socially embedded public agents.

Author Contributions

Conceptualization, S.O.O. and R.C.-P.; methodology, S.O.O. and R.C.-P.; software, S.O.O.; validation, S.O.O., R.C.-P. and V.G.; formal analysis, S.O.O.; investigation, S.O.O.; resources, S.O.O. and R.C.-P.; data curation, S.O.O.; writing—original draft preparation, S.O.O.; writing—review and editing, R.C.-P. and V.G.; visualization, S.O.O.; supervision, R.C.-P.; project administration, R.C.-P.; funding acquisition, R.C.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Research Council of Norway under the SecuRoPS project “User-centred Security Framework for Social Robots in Public Spaces” with project code 321324.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Norwegian Agency for Shared Services in Education and Research (Sikt), under the Norwegian Ministry of Education and Research (protocol code 328081, approved on 23 February 2025).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study. The authors affirm that human research participants provided informed consent for the publication of their feedback and comments.

Data Availability Statement

The system implementation code is available on GitHub at https://github.com/samsonoo/ARI---Ostfold-University-College-Robot-Receptionist (accessed on 8 June 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Participant Information Sheet

  • Title: Real-World Validation of a Human-Centred Social Robot: A University Receptionist Case Study
  • Invitation to Participate
You are invited to participate in a research study to validate the SecuRoPS framework from the end-users’ perspective. Your feedback will help us evaluate some phases of the framework, focusing on security, safety, usability, user experience, and ethical considerations of social robots in public spaces.
  • About the Study
In this study, you will interact with a social robot that provides information about Østfold University College, its campuses, departments, and courses, as well as updates on local events and attractions in Halden. Following your interaction with the ARI social robot, you will be asked to provide feedback on your experience, helping us test the SecuRoPS framework.
  • Your Role in the Study
You will interact with the robot and complete a feedback survey afterwards, evaluating its security, usability, ethical considerations, and user experience.
  • Voluntary Participation
Your participation is entirely voluntary. You may withdraw at any point without penalty, and your decision will not affect your relationship with Østfold University College or any affiliated institutions.
  • Duration of the Study
The total time commitment is approximately 20 min.
  • Data Collection and Confidentiality
No personal data will be collected during your interaction with the robot unless you choose to provide it. The robot will not collect any data from users through its camera or other sensors while providing information. However, the robot will ask users for their names and email addresses. You will have the option to either accept or decline this request. At the end of the interaction, the robot will review the information collected during the session and ask for your consent to save or discard this data. Please note that names and email addresses are not considered sensitive information under the data protection laws in Norway.
This data collection process will be used to assess the transparency of the social robot during interactions with users. After the interaction, the only data collected for research purposes will be your responses in the feedback survey. All collected data will be anonymized and stored securely in Nettskjema and Microsoft Office OneDrive of Østfold University College. These data will be used exclusively for the purposes of this study.
  • Benefits and Risks
Your participation will advance knowledge in the emerging field of social robotics in public spaces. There are no significant risks associated with this study.
  • Funding
This research is part of a PhD project partially funded by the Research Council of Norway under “User-centred Security Framework for Social Robots in Public Spaces (SecuRoPS)” with project code 321324.
  • Further Information and Contact
Please get in touch with any of the following if you have any questions or need additional information.
PhD Researcher: Samson Oruma; samsonoo@hiof.no
Principal Investigator: Prof. Dr. Ricardo Colomo-Palacios; ricardo.colomo-palacios@hiof.no
Chief Information Security Officer (CISO): Ted Magnus Sørlie; ted.m.sorlie@hiof.no
Thank you for considering this opportunity to shape the future of social robotics.

Appendix B. Participant Demographic Information

1. Age:
□ Under 18, □ 15–25, □ 26–35, □ 36–45, □ 46–55, □ Over 55
2. Gender:
□ Male, □ Female, □ Non-binary, □ Prefer not to say
3. Faculty/Unit:
□ Faculty of Health, Welfare and Organisation
□ Faculty of Computer Science, Engineering and Economics
□ Faculty of Teacher Education and Languages
□ Norwegian Theatre Academy
□ The Norwegian National Centre for English and other Foreign Languages in Education
□ Library
□ Other (Please specify)___________________________________________________
4. Previous Experience with Social Robots (if any):
□ None, □ Some experience, □ Extensive experience
5. Have you interacted with robots for educational purposes before?
□ Yes, □ No

Appendix C. Research Guideline

  • Title: Real-World Validation of a Human-Centred Social Robot: A University Receptionist Case Study
  • Overview of Research
This study evaluates the SecuRoPS framework from the perspective of end-users, focusing on the framework’s ability to address concerns related to security, usability, user experience, and ethical considerations in social robots deployed in public spaces.
  • Research Procedure
1. Introduction:
  • Greet the participant and provide the Participant Information Sheet digitally via QR code. Obtain informed consent through Nettskjema.
  • Provide a second QR code that links to the Participant Demographic Information form on Nettskjema, which participants will complete before interacting with the robot.
2. Robot Interaction: The participant will interact with the robot, which provides information about the university and local events.
3. Post-Interaction Assessment: The participant will complete the feedback form, focusing on selected aspects of the SecuRoPS framework.
4. Feedback Collection: Collect and anonymize the data.
5. Data Handling: All data will be securely stored in Nettskjema and Østfold University Office 365 OneDrive, and will be used exclusively for research purposes.
  • Ethical Considerations
  • Maintain participant confidentiality.
  • Ensure transparency in the robot’s functionalities and data collection practices.

Appendix D. Post-Interaction Assessment Survey

1. Based on your experience and interaction with the robot ARI, do you perceive physical safety as one of the main characteristics of the robot? □ Yes, □ No
If so, where?_________________________________________________________
If not, how can the physical safety of users be improved?__________________
____________________________________________________________________
2. Based on your interaction and experience with the robot and the information provided, do you feel that your privacy is at risk due to ARI’s data handling? □ Yes, □ No
If so, why?__________________________________________________________
If not, how do you think the robot’s data handling could be better managed?
____________________________________________________________________
3. Assume that ARI’s design prevents access through WiFi or any other wireless means. Do you think the robot’s implementors adopted measures to protect it against physical tampering? Take a look at ARI, including from behind. □ Yes, □ No
If so, how?__________________________________________________________
If not, how can the robot’s resistance to physical tampering be enhanced?______
____________________________________________________________________
4. Based on your interaction with ARI, can you see any perceived cybersecurity vulnerabilities in the robot design that could be exploited during interaction with users? □ Yes, □ No
If so, how?__________________________________________________________
If not, which cybersecurity vulnerabilities should be addressed?___________
____________________________________________________________________
5. Based on your experience and interaction with ARI, do you think the robot has acceptable interface usability? □ Yes, □ No
If so, how?__________________________________________________________
If not, how can the usability of this robot be improved?___________________
____________________________________________________________________
6. Based on your interaction with the robot, do you think it accommodates diverse user needs in terms of accessibility? □ Yes, □ No
If so, how?__________________________________________________________
If not, how can the accessibility of this robot be improved?___________
____________________________________________________________________
7. Based on your experience with the robot today, do you think ARI demonstrates ethical behaviour (e.g., avoiding misleading or biased information) and transparency in data collection? □ Yes, □ No
If so, how?__________________________________________________________
If not, how can compliance with ethical guidelines and transparency in data collection be improved?________________________________________________________
____________________________________________________________________
8. Based on your interaction with the robot, do you think the robot is complying with applicable privacy laws like GDPR? □ Yes, □ No
If so, how?__________________________________________________________
If not, how can legal compliance be improved?__________________________
____________________________________________________________________

Note

1. https://sikt.no/ (accessed on 23 February 2025).

References

  1. Khan, M.U.; Erden, Z. A Systematic Review of Social Robots in Shopping Environments. Int. J. Hum.–Comput. Interact. 2024, 41, 9565–9586. [Google Scholar] [CrossRef]
  2. Lin, W.; Yeo, S.J.I. Airport Robots: Automation, Everyday Life and the Futures of Urbanism. In Artificial Intelligence and the City; Routledge: New York, NY, USA, 2023. [Google Scholar]
  3. Ragno, L.; Borboni, A.; Vannetti, F.; Amici, C.; Cusano, N. Application of Social Robots in Healthcare: Review on Characteristics, Requirements, Technical Solutions. Sensors 2023, 23, 6820. [Google Scholar] [CrossRef]
  4. Lampropoulos, G. Social Robots in Education: Current Trends and Future Perspectives. Information 2025, 16, 29. [Google Scholar] [CrossRef]
  5. Rødsethol, H.K.; Ayele, Y.Z. Social Robots in Public Space; Use Case Development. In Proceedings of the Book of Extended Abstracts for the 32nd European Safety and Reliability Conference, Dublin, Ireland, 28 August–1 September 2022; Research Publishing Services: Singapore, 2022; pp. 256–264. [Google Scholar] [CrossRef]
  6. Oruma, S.O.; Ayele, Y.Z.; Sechi, F.; Rødsethol, H. Security Aspects of Social Robots in Public Spaces: A Systematic Mapping Study. Sensors 2023, 23, 8056. [Google Scholar] [CrossRef]
  7. Oruma, S.O.; Sánchez-Gordón, M.; Gkioulos, V. Enhancing Security, Privacy, and Usability in Social Robots: A Software Development Framework. Comput. Stand. Interfaces 2026, 96, 104052. [Google Scholar] [CrossRef]
  8. Blaurock, M.; Čaić, M.; Okan, M.; Henkel, A.P. A Transdisciplinary Review and Framework of Consumer Interactions with Embodied Social Robots: Design, Delegate, and Deploy. Int. J. Consum. Stud. 2022, 46, 1877–1899. [Google Scholar] [CrossRef]
  9. Callari, T.C.; Vecellio Segate, R.; Hubbard, E.M.; Daly, A.; Lohse, N. An Ethical Framework for Human-Robot Collaboration for the Future People-Centric Manufacturing: A Collaborative Endeavour with European Subject-Matter Experts in Ethics. Technol. Soc. 2024, 78, 102680. [Google Scholar] [CrossRef]
  10. Shahriari, K.; Shahriari, M. IEEE Standard Review—Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems. In Proceedings of the 2017 IEEE Canada International Humanitarian Technology Conference (IHTC), Toronto, ON, Canada, 20–22 July 2017; IEEE: New York, NY, USA, 2017; pp. 197–201. [Google Scholar] [CrossRef]
  11. National Institute of Standards and Technology. The NIST Cybersecurity Framework 2.0; Technical Report NIST CSWP 29 ipd; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2023. [Google Scholar] [CrossRef]
  12. MITRE. MITRE ATT&CK. 2025. Available online: https://attack.mitre.org/ (accessed on 8 December 2025).
  13. Jiang, D.; Wang, H.; Lu, Y. Mastering the Complex Assembly Task with a Dual-Arm Robot: A Novel Reinforcement Learning Method. IEEE Robot. Autom. Mag. 2023, 30, 57–66. [Google Scholar] [CrossRef]
  14. Oruma, S.O. Towards a User-centred Security Framework for Social Robots in Public Spaces. In Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering, Oulu, Finland, 14–16 June 2023; ACM: New York, NY, USA, 2023; pp. 292–297. [Google Scholar] [CrossRef]
  15. Oruma, S.; Colomo-Palacios, R.; Gkioulos, V. Architectural Views for Social Robots in Public Spaces: Business, System, and Security Strategies. Int. J. Inf. Secur. 2025, 24, 12. [Google Scholar] [CrossRef]
  16. Deveci, M.; Pamucar, D.; Gokasar, I.; Zaidan, B.B.; Martinez, L.; Pedrycz, W. Assessing Alternatives of Including Social Robots in Urban Transport Using Fuzzy Trigonometric Operators Based Decision-Making Model. Technol. Forecast. Soc. Change 2023, 194, 122743. [Google Scholar] [CrossRef]
  17. Subero-Navarro, Á.; Pelegrín-Borondo, J.; Reinares-Lara, E.; Olarte-Pascual, C. Proposal for Modeling Social Robot Acceptance by Retail Customers: CAN Model + Technophobia. J. Retail. Consum. Serv. 2022, 64, 102813. [Google Scholar] [CrossRef]
  18. Conti, D.; Cirasa, C.; Høgsdal, H.; Di Nuovo, S.F. The Use of Social Robots in Educational Settings: Acceptance and Usability. In Social Robots in Education: How to Effectively Introduce Social Robots into Classrooms; Lampropoulos, G., Papadakis, S., Eds.; Springer: Cham, Switzerland, 2025; pp. 205–220. [Google Scholar] [CrossRef]
  19. Torras, C. Ethics of Social Robotics: Individual and Societal Concerns and Opportunities. Annu. Rev. Control Robot. Auton. Syst. 2024, 7, 1–18. [Google Scholar] [CrossRef]
  20. Stock-Homburg, R.M.; Kegel, M.M. Ethical Considerations in Customer–Robot Service Interactions: Scoping Review, Network Analysis, and Future Research Agenda. Int. J. Soc. Robot. 2025, 17, 1129–1159. [Google Scholar] [CrossRef]
  21. Hung, L.; Zhao, Y.; Alfares, H.; Shafiekhani, P. Ethical Considerations in the Use of Social Robots for Supporting Mental Health and Wellbeing in Older Adults in Long-Term Care. Front. Robot. AI 2025, 12, 1560214. [Google Scholar] [CrossRef]
  22. Callander, N.; Ramírez-Duque, A.A.; Foster, M.E. Navigating the Human-Robot Interaction Landscape. Practical Guidelines for Privacy-Conscious Social Robots. In Proceedings of the Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 11–15 March 2024; ACM: New York, NY, USA, 2024; pp. 283–287. [Google Scholar] [CrossRef]
  23. Staab, R.; Jovanović, N.; Balunović, M.; Vechev, M. From Principle to Practice: Vertical Data Minimization for Machine Learning. In Proceedings of the 2024 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–23 May 2024; IEEE: New York, NY, USA, 2024; pp. 4733–4752. [Google Scholar] [CrossRef]
  24. Lockheed Martin Corporation. Cyber Kill Chain™. Online Resource. 2025. Available online: https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html (accessed on 8 December 2025).
  25. Möller, D.P.F. NIST Cybersecurity Framework and MITRE Cybersecurity Criteria. In Guide to Cybersecurity in Digital Transformation: Trends, Methods, Technologies, Applications and Best Practices; Möller, D.P., Ed.; Springer: Cham, Switzerland, 2023; pp. 231–271. [Google Scholar] [CrossRef]
  26. Shi, H.; Ying, L.; Chen, L.; Duan, H.; Liu, M.; Xue, Z. Dr. Docker: A Large-Scale Security Measurement of Docker Image Ecosystem. In Proceedings of the ACM on Web Conference, Sydney, Australia, 28 April–2 May 2025; ACM: New York, NY, USA, 2025; pp. 2813–2823. [Google Scholar] [CrossRef]
  27. Song, X.; Gu, H.; Ling, X.; Ye, W.; Li, X.; Zhu, Z. Understanding Trust and Rapport in Hotel Service Encounters: Extending the Service Robot Acceptance Model. J. Hosp. Tour. Technol. 2024, 15, 842–861. [Google Scholar] [CrossRef]
  28. Ding, B.; Li, Y.; Miah, S.; Liu, W. Customer Acceptance of Frontline Social Robots—Human-robot Interaction as Boundary Condition. Technol. Forecast. Soc. Change 2024, 199, 123035. [Google Scholar] [CrossRef]
  29. Pekçetin, T.N.; Evsen, S.; Pekçetin, S.; Acarturk, C.; Urgen, B.A. Real-World Implicit Association Task for Studying Mind Perception: Insights for Social Robotics. In Proceedings of the Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 11–15 March 2024; ACM: New York, NY, USA, 2024; pp. 837–841. [Google Scholar] [CrossRef]
  30. Massaguer Gómez, G. Should We Trust Social Robots? Trust without Trustworthiness in Human-Robot Interaction. Philos. Technol. 2025, 38, 24. [Google Scholar] [CrossRef]
  31. Barfield, J. Designing Social Robots to Accommodate Diversity, Equity, and Inclusion in Human-Robot Interaction. In Proceedings of the 2023 Conference on Human Information Interaction and Retrieval, Austin, TX, USA, 19–23 March 2023; ACM: New York, NY, USA, 2023; pp. 463–466. [Google Scholar] [CrossRef]
  32. Oruma, S.O.; Sánchez-Gordón, M.; Colomo-Palacios, R.; Gkioulos, V.; Hansen, J.K. A Systematic Review on Social Robots in Public Spaces: Threat Landscape and Attack Surface. Computers 2022, 11, 181. [Google Scholar] [CrossRef]
  33. Joseph, L.; Johny, A. Robot Operating System (ROS) for Absolute Beginners: Robotics Programming Made Easy; Apress: Berkeley, CA, USA, 2022. [Google Scholar] [CrossRef]
  34. Radley-Gardner, O.; Beale, H.G.; Zimmermann, R. (Eds.) Fundamental Texts on European Private Law, 2nd ed.; Hart Publishing: London, UK, 2020. [Google Scholar] [CrossRef]
  35. Oruma, S. ARI—Ostfold-University-College-Robot-Receptionist; Østfold University College: Halden, Norway, 2025. [Google Scholar]
Figure 1. Overview of the research methodology followed in the study.
Figure 2. Branded ARI social robot utilised in this study: (a) front view, (b) back view.
Figure 3. Radar chart showing positive and negative participant responses (%) across key post-interaction assessment criteria.
Table 1. Participant responses (%) to key post-interaction assessment questions.
Evaluation Criteria                 | Positive (%) | Negative (%)
Physical Safety                     | 80.0         | 20.0
Privacy Concern                     | 85.3         | 14.7
Physical Security                   | 89.7         | 10.3
Cybersecurity                       | 84.8         | 15.2
Interface Usability                 | 79.4         | 20.6
Accessibility                       | 60.6         | 39.4
Ethical Behaviour & Transparency    | 82.4         | 17.6
GDPR Compliance                     | 90.0         | 10.0
Table 2. Mapping of Pilot Study Activities to the SecuRoPS Framework Phases and Functions.
SecuRoPS Phase | Pilot Study Activity
Business Needs Assessment | Identified the need to evaluate the secure, ethical, and inclusive deployment of a social robot in an educational setting.
Evaluation of SRPS Application Context | Østfold University College was selected as a realistic and relevant public space for deployment.
Robot Type Selection | The ARI robot was chosen based on its capabilities, compatibility with Docker, and development environment constraints.
Stakeholder Engagement and Dialogue | Collaborated with the university communication department and the ethics committee (Sikt) to ensure alignment and compliance.
Feasibility and Impact Analysis | Considered accessibility needs, user privacy concerns, and potential institutional benefits.
Requirement Specification | Developed application requirements for multilingual support, audio/visual interaction, secure data handling, and usability.
Risk Assessment & Threat Modelling | Mitigated security risks through covered ports, a VPN/firewall, and stationary operation; enforced data minimisation by design.
Proven Methodology-Driven Design | Followed PAL Robotics SDK implementation methods and guidelines for structured application design.
Implementation of Security, Safety, and User-Centredness Measures | Built a secure HTML/CSS/JS-based UI; implemented overlay navigation, visual/audio aids, and a GDPR-aligned data collection process.
User Experience, Usability, and Security Testing | Post-interaction surveys and direct observation provided feedback on trust, safety, and accessibility.
Ethical, Legal, and Regulatory Scrutiny | Received formal approval from Sikt; data handling complied with Norwegian data protection laws.
Strategic Deployment | Deployed the robot in a public-facing location at the university with pre-programmed interactive content.
Continuous Monitoring and Iterative Improvement | Recorded on-site observations and usability feedback for future design improvements.
Retrospective and Lessons Learned | Incorporated reflections into the recommendations and design templates published with the study.
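The "data minimisation enforced through design" activity above can be illustrated with a minimal sketch. The function name, field names, and response shape below are hypothetical illustrations, not part of the published GitHub templates: the idea is simply that only a whitelist of categorical survey fields survives to storage, so incidental personal data never enters the dataset.

```javascript
// Hypothetical sketch of data minimisation by design: survey responses are
// reduced to a fixed whitelist of categorical fields before storage, so any
// incidental identifiers (emails, free text, timestamps) are dropped.

const ALLOWED_FIELDS = ["physicalSafety", "privacy", "usability", "accessibility"];

function minimiseResponse(rawResponse) {
  // Copy only whitelisted fields; everything else is silently discarded.
  const minimal = {};
  for (const field of ALLOWED_FIELDS) {
    if (field in rawResponse) {
      minimal[field] = rawResponse[field];
    }
  }
  return minimal;
}

// Example: a raw response containing incidental personal data.
const raw = {
  physicalSafety: "positive",
  privacy: "positive",
  email: "student@example.com",          // must never be stored
  freeText: "I met the robot at 10:15",  // free text may identify the user
};

console.log(minimiseResponse(raw)); // logs only the whitelisted fields
```

A whitelist (rather than a blacklist of known identifiers) is the safer default here, since new fields added to the survey UI cannot accidentally leak into storage.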