Article

BCI-Inspired Adaptive Agents in Human–Robot Interaction: A Structural Framework for Coordinated Interaction Design

by
Ionica Oncioiu
1,2,*,
Iustin Priescu
1,
Daniela Joița
1,
Geanina Silviana Banu
1 and
Cătălina-Mihaela Priescu
3
1
Department of Informatics, Faculty of Informatics, Titu Maiorescu University, 040051 Bucharest, Romania
2
Academy of Romanian Scientists, 3 Ilfov, 050044 Bucharest, Romania
3
Faculty of Automatic Control and Computers, Politehnica University of Bucharest, 060042 Bucharest, Romania
*
Author to whom correspondence should be addressed.
Electronics 2026, 15(6), 1206; https://doi.org/10.3390/electronics15061206
Submission received: 23 February 2026 / Revised: 10 March 2026 / Accepted: 12 March 2026 / Published: 13 March 2026
(This article belongs to the Special Issue Human Robot Interaction: Techniques, Applications, and Future Trends)

Abstract

The accelerated integration of intelligent agents in user-centered digital environments has intensified research in the field of Human–Robot Interaction, especially regarding mechanisms for adaptive, intuitive, and cognitively aligned communication. The present study develops and empirically examines a structural model of BCI-inspired adaptive agents designed to support coordinated interaction in HRI contexts. The study analyzes users’ perceptions of standardized hypothetical interaction scenarios involving BCI-inspired adaptive digital agents, where BCI inspiration is conceptual and refers to adaptive architectures interpreting behavioral cues rather than direct neural signal acquisition. The proposed model integrates four main constructs—perceived technological innovation, user involvement, agent adaptivity, and digital synergy—and examines their associations with user satisfaction in digital collaborative environments. Data were collected through an anonymous questionnaire (N = 268) and analyzed using structural equation modeling with the PLS-SEM method. The structural model demonstrates substantial explanatory power, accounting for 66.8% of the variance in user satisfaction (R2 = 0.668). The study contributes by empirically supporting a scenario-based structural evaluation framework suitable for early-stage adaptive HRI system design. The results highlight the role of digital synergy in aligning innovation, engagement, and adaptive behavior in BCI-inspired adaptive HRI systems, providing directions for the design of adaptive robotic agents oriented toward coordinated interaction, user-centered integration, and responsible use in collaborative digital ecosystems.

1. Introduction

The rapid evolution of intelligent systems and their integration into complex digital ecosystems have fundamentally transformed the paradigm of human–machine interaction [1,2,3]. In the field of Human–Robot Interaction (HRI), contemporary research goes beyond approaches focused exclusively on control and command, moving towards models of adaptive collaboration, co-regulation, and assisted autonomy [1,4,5]. The development of virtual environments, mixed reality, and distributed intelligent agents requires the design of entities capable of operating in a regime of contextual adaptation and behavioral anticipation. In this framework, agents conceptually inspired by brain–computer interface principles (BCI) are emerging as a significant direction in the design of HRI systems oriented towards cognitive coordination and dynamic human–system integration [6,7]. In this study, the notion of BCI-inspired agents does not refer to the use of neural signal acquisition technologies such as EEG or invasive neural interfaces. Instead, the concept reflects an architectural analogy in which digital agents emulate intention interpretation and adaptive feedback loops using behavioral and contextual interaction signals. In contemporary HRI architectures, digital agents are conceptualized as collaborative entities integrated into complex socio-technical systems [8,9]. Human–agent interaction becomes a bidirectional process of information exchange, characterized by continuous adaptation, contextual feedback, and mutual adjustment [10,11]. The HRI literature emphasizes the importance of perceived adaptability, behavioral coherence, and systemic integration in building trust and acceptance of technology [3,12,13]. Research in social robotics and adaptive autonomy also highlights the role of the balance between human control and system autonomy in building trust and sustainable cooperation [14,15,16]. 
The quality of interaction is no longer evaluated exclusively by functional efficiency but by the system’s ability to support coordination, predictability, and transparency in dynamic contexts.
The interest in adaptive agents is driven by the need to develop systems capable of operating in a human–AI collaborative regime, characterized by partial autonomy and dynamic adjustment [17,18,19]. The frequency of interactions with intelligent technologies requires the design of communication mechanisms that allow for contextual tuning, behavioral synchronization, and cognitive customization. In modern HRI architectures, agents can fulfill multiple roles, from cognitive assistants to facilitators of collaboration in virtual environments, contributing to optimizing performance and improving the interactive experience [20,21,22].
The analysis of human–agent interaction requires the integration of cognitive, behavioral, and social perspectives on decision-making processes in complex technological environments. Theoretical models in the field of Human–Robot Interaction highlight the role of user engagement, perception of control, system transparency, and adaptive assessment in determining the acceptance and continued use of intelligent agents [1,3,11,13]. In this context, the design of BCI-inspired adaptive agents must be based on principles of cognitive compatibility, balance between autonomy and human control, and behavioral coherence to support sustainable human–system collaboration.
In the conceptual framework of this research, agents are defined as systems architecturally inspired by the principles of BCI, emulating the mechanisms of intention interpretation and contextual adjustment. The study involves neither neurophysiological measurements nor the direct use of BCI devices. BCI inspiration is used here in a conceptual and architectural sense to describe systems capable of cognitive alignment and adaptive recalibration based on behavioral and contextual cues. In the present framework, references to intention interpretation denote perceived behavioral alignment inferred from contextual and interactional signals rather than direct neurophysiological decoding. The BCI inspiration is therefore architectural and metaphorical, not biological or signal-based.
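As a purely conceptual illustration of the architectural analogy described above (the study itself involves no software implementation or neural signals), such an agent can be imagined as a loop that infers a coarse engagement state from behavioral cues and recalibrates its own interaction style accordingly. All class names, signals, and thresholds below are hypothetical, chosen only to make the idea concrete:

```python
from dataclasses import dataclass


@dataclass
class BehavioralCues:
    """Hypothetical behavioral signals observed during one interaction turn."""
    response_latency_s: float  # how long the user took to respond
    corrections: int           # how often the user corrected the agent


class AdaptiveAgent:
    """Conceptual sketch only: infers an engagement state from behavioral
    cues (no neural acquisition) and adjusts its verbosity in response."""

    def __init__(self) -> None:
        self.verbosity = 0.5  # 0 = terse, 1 = verbose

    def infer_engagement(self, cues: BehavioralCues) -> str:
        # Illustrative heuristic: slow, correction-heavy turns read as
        # disengagement; fast, correction-free turns read as engagement.
        if cues.response_latency_s > 10 or cues.corrections >= 3:
            return "disengaged"
        if cues.response_latency_s < 2 and cues.corrections == 0:
            return "engaged"
        return "neutral"

    def adapt(self, cues: BehavioralCues) -> str:
        state = self.infer_engagement(cues)
        if state == "disengaged":
            self.verbosity = max(0.0, self.verbosity - 0.1)  # shorten replies
        elif state == "engaged":
            self.verbosity = min(1.0, self.verbosity + 0.1)  # allow more detail
        return state
```

The closed loop of observation, inference, and behavioral recalibration is the architectural feature the paper abstracts from BCI systems; the specific cues and rules here are stand-ins for whatever signals a concrete system would use.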
Beyond traditional EEG-based interfaces, recent research has explored alternative non-invasive neuro-modulation approaches, including ultrasound-based neural stimulation and emerging techniques such as sonogenetics, which aim to enable targeted neural activation without invasive procedures. Although these technologies remain primarily within experimental neuroscience, they illustrate the broader trajectory of human–machine communication research toward more direct forms of cognitive interaction. In this context, the concept of BCI-inspired adaptive agents represents an architectural abstraction of these developments, translating neuro-adaptive principles into digital interaction design.
Despite the growing body of research on brain–computer interfaces, adaptive AI systems, and human–robot interaction, these streams of literature largely evolve in parallel rather than within an integrated explanatory framework. Existing studies typically examine BCI architectures from a neurotechnological perspective, adaptive agents from an artificial intelligence standpoint, and HRI from a behavioral or interactional viewpoint, without structurally connecting these dimensions in a unified predictive model. Moreover, there is a lack of empirically validated structural models that examine how BCI-inspired adaptive mechanisms translate into evaluative outcomes within collaborative digital ecosystems. This theoretical and empirical fragmentation limits our understanding of how BCI-inspired adaptive coordination mechanisms influence user satisfaction in advanced HRI environments.
The main objective of the research is to develop and empirically examine a conceptual model explaining user satisfaction in interactions with BCI-inspired adaptive agents in HRI contexts. To achieve this objective, the following research questions are formulated:
  • RQ1: How can BCI-inspired adaptive agents be conceptually defined and integrated within existing models of human–robot interaction?
  • RQ2: How do perceived innovation, user engagement, and agent adaptivity influence the formation of digital synergy in HRI contexts?
  • RQ3: To what extent does digital synergy mediate the relationship between agent characteristics and user satisfaction in adaptive collaborative environments?
The originality of the research lies in the articulation of an integrated structural model that connects the principles inspired by brain–computer interfaces (BCIs) with the theory of adaptive collaboration in HRI. While the existing literature treats BCI systems, adaptive intelligent agents, and human–robot interaction separately, the present study proposes an integrative approach that models the relationships between the perceived characteristics of the agent and the quality of digital collaboration. The major theoretical contribution consists of the operationalization of digital synergy as an explanatory mechanism of adaptive coordination in HRI and the empirical validation of a model that relates perceived innovation, user engagement, and agent adaptivity to satisfaction in collaborative interactions. Thus, the research extends the HRI literature by introducing a BCI-inspired adaptive coordination perspective applied to digital agents.
The BCI-inspired adaptive agent is defined not only as an advanced digital interface but also as a semi-autonomous collaborative entity integrated into an HRI ecosystem. The proposed conceptualization goes beyond the paradigm of traditional reactive interfaces and positions the agent as a cognitive mediator between the user and the interconnected digital infrastructure. Through the capacity for contextual adaptation and functional coordination, the agent functions as an active node in the collaborative human–system architecture, facilitating processes of co-adaptation and mutual regulation. The theoretical perspective thus formulated is relevant for the evolution of virtual robotic systems and mixed reality environments, where effective interaction involves the dynamic integration of human behavior and adaptive algorithmic mechanisms.
This study also contributes to the HRI literature in three complementary directions. First, it extends the theoretical framework of HRI by introducing the notion of a BCI-inspired adaptive agent, integrated into a coherent relational model. Second, it provides a rigorous empirical validation using structural equation modeling (PLS-SEM), highlighting the structural relationships between the proposed constructs. Third, it provides implications for the design of user-centered adaptive systems, highlighting the importance of contextual coordination, systemic integration, and functional transparency in the development of HRI agents.
The structure of the article is as follows: Section 2 describes the theoretical foundations and research hypotheses, Section 3 details the methodology and data analysis, Section 4 presents the results, and Section 5 discusses theoretical and managerial implications, followed by conclusions and future research directions.

2. Theoretical Background and Hypothesis Development

The integration of BCI-inspired adaptive digital agents indicates a significant transition in the architecture of human–system interaction in digital environments [6,23,24]. Through the perceived ability to interpret contextual signals associated with user intentions and states, such systems configure forms of interaction characterized by contextual adaptation and dynamic personalization, going beyond the paradigm of conventional interfaces based on explicit input. The analysis of the adoption and use of such technological configurations requires an interdisciplinary approach that integrates perspectives from human–robot interaction, cognitive sciences, and artificial intelligence to explain how digital agents become functional components of contemporary socio-technical ecosystems [8,16,25]. In conventional implementations, brain–computer interfaces establish communication channels between neural activity and digital systems by converting electrophysiological patterns into algorithmically processable signals. In BCI-inspired configurations, digital agents are designed to interpret cues associated with intention and trigger actions without traditional mechanical intermediation [6,26]. Compared to voice- or gesture-based interfaces, BCI-inspired models suggest a more direct integration into the human–system cognitive loop, facilitating rapid coordination and contextual adjustment.
A distinctive dimension of BCI-inspired architectures is the integration of parameters associated with algorithmically inferred affective states. In neurotechnological applications, this allows for the adjustment of the agent’s digital expressiveness, conversational tone, and contextual behavior [7,27]. From an HRI perspective, affective adaptation supports the perception of interactional coherence and promotes the social acceptance of digital agents in collaborative roles [1,21].
Interaction with agents inspired by brain–computer interfaces involves cognitive, affective, and computational processes in a dynamic interdependent relationship [28]. The coordination between user intent, algorithmic interpretation mechanisms, and the agent’s behavioral adjustment generates an interactional framework that goes beyond the logic of simple reaction to a command. The literature on human–AI collaboration suggests that the user experience emerging from such configurations is the result of a productive tension between perceived technological innovation, active user engagement, and the level of functional integration into the digital ecosystem [26,29,30]. The intersection of these dimensions outlines the evaluative architecture through which users attribute value to the interaction, and the proposed model is based on this conceptual structure.
The research model integrates complementary theoretical frameworks to explain the mechanisms by which BCI-inspired adaptive digital agents are adopted, evaluated, and integrated into human–system interaction architectures. The Diffusion of Innovation (DOI) theory [31] provides an analytical framework for understanding variations in the adoption of emerging technologies through the constructs of relative advantage, compatibility, and perceived complexity. In parallel, the Social Exchange Theory [32] underpins the relational dimension of interaction, explaining how digital agents can be perceived as social entities in contexts of reciprocity and technologically mediated interpersonal evaluation. Together, these perspectives allow modeling the interaction between the agent’s technological characteristics and user evaluations within a coherent framework specific to the field of Human–Robot Interaction.
Perceived Digital Agent Innovation (PDAI) and perceived digital agent adaptive capacity (DAAC) represent analytically distinct dimensions within the proposed framework. Perceived innovation captures users’ evaluation of technological advancement, architectural sophistication, and systemic novelty. In contrast, adaptive capacity reflects the perceived behavioral recalibration of the agent across interaction episodes in response to evolving contextual cues [8,13]. Innovation signals technological progress, whereas adaptivity signals dynamic alignment over time [19,33]. This distinction reduces the risk of conceptual overlap between structural advancement and behavioral adjustment mechanisms.
BCI-inspired adaptive digital agents do not operate exclusively on the basis of static response rules but integrate behavioral update mechanisms derived from interaction history [34]. Algorithmic learning processes allow for the progressive adjustment of conversational and functional strategies, generating forms of contextual personalization and interactional coherence [25]. As a result, user perception can shift from evaluating the agent as a simple technological interface to recognizing a collaborative entity capable of adaptive regulation in real time. In BCI-inspired configurations, the integration of signals associated with the user’s intention and state amplifies this dynamic, strengthening co-adaptation processes and stabilizing the human–system relationship within a continuous interactive loop [35,36].
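The behavioral update mechanism described above, in which each interaction episode progressively refines the agent's model of the user, can be sketched minimally as an exponentially weighted estimate. This is an illustrative sketch under assumed parameters, not the authors' implementation; the preference being tracked (e.g., preferred response length) and the learning rate are hypothetical:

```python
class PreferenceModel:
    """Illustrative sketch: the agent maintains an exponentially weighted
    estimate of a user preference and recalibrates it after every
    interaction episode, so recent behavior outweighs older history."""

    def __init__(self, initial: float, learning_rate: float = 0.3) -> None:
        self.estimate = initial
        self.lr = learning_rate  # how strongly recent episodes weigh

    def update(self, observed: float) -> float:
        # Move the current estimate toward the newly observed behavior;
        # repeated updates yield progressive contextual personalization.
        self.estimate += self.lr * (self.estimate - self.estimate + observed - self.estimate)
        return self.estimate
```

With a learning rate of 0.5, an estimate initialized at 100 moves to 150 and then 175 after two observations of 200, illustrating how the agent converges on stable user behavior while remaining responsive to change.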
The Technology Acceptance Model (TAM) provides an explanatory framework for understanding the cognitive determinants of usage intention, in particular through the constructs of perceived usefulness and ease of use [37]. In the context of BCI-inspired adaptive digital agents, these dimensions contribute to shaping initial evaluations and readiness for ongoing interaction. In addition, the distributed intelligence perspective [38] supports the idea that systems can modify their behavior through iterative learning and algorithmic optimization. In adaptive architectures, the accumulation and integration of data from successive interactions leads to refinement of agent performance and increased functional coherence within the human–system loop.
By integrating these theoretical perspectives, the proposed conceptual model articulates a coherent explanatory framework regarding the integration of neuro-adaptive digital agents into collaborative human–system interactions. The structure of the model highlights both the determinants of user acceptance and involvement and the mechanisms through which these variables contribute to the configuration of an adaptive digital experience.

2.1. Perceived Digital Agent Innovation (PDAI)

Technological innovation constitutes a central determinant of user perception in human–system interaction contexts [26,39]. According to DOI theory, the adoption of emerging technologies depends on evaluations of relative advantage, compatibility with existing practices, and perceived complexity [40]. Within advanced HRI architectures, digital agents integrating artificial intelligence, natural language processing, and BCI-inspired mechanisms are interpreted as technologically sophisticated systems characterized by architectural advancement and systemic novelty. In this framework, perceived innovation refers to the evaluation of technological progression and structural sophistication rather than to the system’s behavioral recalibration over time.
Extant literature on technology acceptance indicates that perceived innovation significantly influences continuance intention and overall evaluation of interactive experiences [15,41]. Digital agents perceived as innovative expand the interactional repertoire by signaling technological advancement, integrative functionality, and forward-oriented system design [11,25,42]. Under such conditions, users associate innovation with enhanced efficiency, operational flexibility, and superior functional value, which collectively strengthen satisfaction judgments.
Research in human–machine interaction indicates that users evaluate favorably systems that exhibit technological sophistication and integrative functionality within the interaction environment [43]. Agents embedding advanced data-processing architectures and algorithmic personalization capabilities are perceived as structurally coherent and operationally robust [11,16]. This perceived technological congruence enhances predictability and reduces perceived interaction effort, thereby reinforcing positive system evaluations and increasing user satisfaction.
Empirical evidence confirms a significant association between perceived innovation and user satisfaction [30,44]. When digital agents are evaluated as technologically advanced and architecturally sophisticated [45], users attribute greater functional and experiential value to the interaction. Conversely, systems perceived as technologically conventional or structurally limited tend to generate weaker evaluative outcomes, resulting in lower satisfaction levels.
Beyond functional performance, perceived innovation shapes how users cognitively frame and interpret their interaction with a digital agent. Architectures characterized by advanced processing capabilities and systemic integration influence not only response sophistication but also perceptions of usability and cognitive economy [24,43]. Agents perceived as technologically coherent and structurally integrated are evaluated as more intuitive and accessible, thereby reducing mental load and minimizing experiential fragmentation relative to traditional interface models [19,22,46].
When technological advancement is perceived as fluidly embedded within the interaction architecture, overall system evaluation is strengthened, leading to higher satisfaction judgments. Consequently, the perceived level of innovation of digital agents is theorized as a direct determinant of user satisfaction in HRI contexts, as technological sophistication enhances both functional utility and experiential appraisal.
Hypothesis 1 (H1). 
Perceived digital agent innovation is positively associated with user satisfaction.
Technological innovation, although important for the differentiation and evolution of intelligent systems, does not automatically guarantee a satisfactory digital experience [15,17]. The real impact of innovation on the user depends on how advanced functionalities are integrated into a coherent interactive architecture, capable of supporting continuity of experience and contextual coordination. The DOI theory also emphasizes that the adoption of a technology is influenced not only by its relative advantage but also by its compatibility with the user’s practices and the degree to which it naturally integrates into the flow of everyday activities. Research in the field of human–machine interaction indicates that the perception of innovation is strengthened when the system facilitates functional connectivity, conversational coherence, and systemic integration [21]. In such a framework, digital synergy can be understood as an emergent process resulting from the harmonization between the technological capabilities of the agent and the interactional dynamics of the user. It is not the mere presence of advanced technology that determines satisfaction, but the way in which it supports a fluid, predictable, and mutually adaptive exchange [47]. Therefore, digital synergy configures the context in which innovation is transformed from technical potential into experiential value.
Therefore, the relationship between perceived innovation and user satisfaction is influenced by the way in which technological capabilities are integrated into a coherent interactive framework [10,18]. Digital synergy reflects the level of coordination and functional interconnection between the user and the digital agent, configuring the context in which innovation is translated into perceived experience [33,48]. Digital agents that integrate advanced functionalities and efficient connectivity mechanisms are more likely to generate a high level of digital synergy. Based on the theoretical argumentation presented, the following hypothesis is formulated:
Hypothesis 2 (H2). 
Perceived digital agent innovation positively contributes to digital synergy.

2.2. User Engagement (UED)

The degree of user engagement in a digital interaction process is a central dimension in explaining the evaluation of the experience and the continuation of technology use. In the TAM, usage behavior is influenced by the perception of usefulness and ease of use, but subsequent literature has highlighted that active involvement in the interaction reinforces operational familiarity and cognitive trust in the system. High involvement amplifies information processing and favors the internalization of the experience, contributing to more stable and positive evaluations of technological performance.
The theoretical perspective on human–machine interaction highlights that active participation in the information exchange with a digital agent intensifies the perception of value and strengthens the intention to continue use [9,21]. Engagement is not limited to the frequency of interaction but reflects the depth of cognitive and affective involvement in the collaborative process. Digital agents capable of supporting coherent dialogue, adaptive response, and contextual continuity stimulate deeper engagement, which favors higher evaluations of the experience and, implicitly, an increased level of satisfaction.
Active involvement in the interactional process contributes to the consolidation of the digital experience and reduces the likelihood of discontinuation of use [34,42]. High levels of cognitive and behavioral engagement are associated with more favorable evaluations of the system and with an increased intention to continue use. Empirical evidence indicates that user engagement is a robust predictor of continued use and loyalty to the technology [25,45]. In HRI contexts, active interaction with adaptive digital agents favors the internalization of the experience and the consolidation of overall satisfaction. Based on the discussion, the following hypothesis is developed:
Hypothesis 3 (H3). 
User engagement is positively associated with user satisfaction.
The degree of user engagement is also a central variable in explaining the mechanisms by which digital interaction is transformed into experiential evaluation. In HRI contexts, engagement is not limited to frequency of use but reflects the level of cognitive involvement, sustained attention, and availability to co-construct the interaction [3,5]. However, the relationship between engagement and satisfaction does not operate in a structural vacuum. The efficiency of transforming engagement into positive evaluation depends on the quality of the interactive environment and the digital agent’s ability to support mutual coordination, contextual adaptation, and conversational coherence [43].
Approaches in the field of conversational interfaces and adaptive systems show that engagement becomes experientially meaningful when users can actively intervene in the configuration of the interaction, express their preferences, and observe systemic adjustments congruent with the input provided [46,49]. Under such conditions, engagement is no longer just a usage behavior but a process of human–system co-regulation, which reinforces the perception of control and interactional coherence, favoring the emergence of satisfaction.
Functional connectivity between the user and the digital agent influences the structure of the interactive experience through the way it supports the continuity of information exchange and the integration of contextual feedback [45]. In HRI architectures, the mere presence of communication channels is not enough; experiential relevance depends on the degree to which the system facilitates interactional alignment and mutual adjustment of behaviors [28,36]. In this sense, connectivity takes on an operational dimension, configuring the framework in which user engagement can be transformed into a process of effective human–system coordination.
Digital synergy can thus be conceptualized as an expression of the dynamic integration between user engagement and the system’s capacity to support adaptive reciprocity. When the interactive environment allows for the expression of intentions, the progressive integration of preferences, and behavioral recalibration in real time, engagement does not remain a simple indicator of usage but evolves into a mechanism of behavioral co-evolution. In such a framework, engagement directly contributes to the consolidation of digital synergy, reflected in operational coherence and increased functional coordination. Considering the discussion, the following hypothesis is established:
Hypothesis 4 (H4). 
User engagement positively contributes to digital synergy.

2.3. Digital Agent Adaptive Capacity (DAAC)

Adaptivity is a fundamental dimension of intelligent systems integrated in HRI contexts, as it allows behavioral adjustment according to interactional dynamics and user characteristics. From the perspective of Distributed Intelligence Theory [38], the performance of a system is not determined exclusively by its internal capabilities but by the way it integrates information from interactions to support continuous coordination and optimization. Digital agents capable of incorporating user feedback and recalibrating their behavior in real time are evaluated as more efficient and relevant to situational needs.
Contemporary approaches in human–robot interaction highlight that the perception of technological efficiency is strengthened when users observe systemic adjustments congruent with their preferences and behaviors [24,45]. The capacity for continuous learning and contextual personalization reduces interactional discontinuities and favors the formation of a coherent experience. To the extent that adaptation is perceived as predictable, transparent, and focused on optimizing collaboration, the overall evaluation of the system tends to be more favorable. In light of the discussion, the following hypothesis is developed:
Hypothesis 5 (H5). 
Perceived digital agent adaptive capacity is positively associated with user satisfaction.
Studies on adaptive systems indicate that sustained use of a technology is associated with its ability to provide contextual personalization and behavioral adjustment based on learning from previous interactions [8,12,17]. Digital agents that can anticipate user preferences and recalibrate their behavior based on interactional dynamics are evaluated as more intuitive and efficient from the perspective of human–system coordination [21]. However, the value of adaptivity does not derive exclusively from algorithmic performance but from the way in which adjustments are perceived as relevant and congruent with user intentions [43]. Adaptation becomes experientially meaningful only to the extent that it supports perceptions of control, predictability, and interactional coherence.
Furthermore, integrating adaptivity into a framework characterized by functional connectivity and interactive continuity fosters deep cognitive engagement [22,25]. In the absence of an architecture that allows for the smooth integration of systemic adjustments, adaptive mechanisms may remain opaque or underutilized, diminishing their impact on the overall evaluation of the experience [34,39,48]. Therefore, the effectiveness of adaptivity is closely linked to the system’s ability to support mutual coordination and coherent operational integration. On the other hand, BCI-inspired digital agents, characterized by contextual adjustment and continuous behavioral recalibration, can support advanced forms of human–system coordination when adaptive mechanisms are integrated into a stable and predictable interactive infrastructure [6,23]. In the absence of such integration, adaptivity remains a latent technological attribute, without systemic impact on collaborative dynamics. In contrast, when system adjustments are perceived as facilitating interactional continuity and experiential coherence, the level of functional integration between the user and the digital ecosystem is strengthened. In this logic, adaptivity becomes a structural antecedent of digital synergy, as it supports the emergence of fluid and mutually adjusted collaboration in HRI contexts.
Hypothesis 6 (H6). 
Perceived digital agent adaptive capacity positively contributes to digital synergy.

2.4. The Role of Digital Synergy in Generating User Satisfaction

The conceptualization of digital synergy has evolved with the acceleration of digitalization and the integration of intelligent systems into complex socio-technical architectures [5,14]. In the early stages of digital transformation, synergy was predominantly associated with technological interoperability and the ability of platforms to facilitate the efficient exchange of data between systems and users [17]. Early research on digital interaction mainly addressed the operational dimension of technological coordination, focusing on how functional integration could optimize the performance and efficiency of processes [15,16].
In contemporary digital ecosystems, characterized by distributed infrastructures, real-time processing, and algorithmic intelligence, digital synergy goes beyond simple technical interconnectivity [20,25]. The integration of technologies such as cloud computing, the Internet of Things, blockchain, or augmented reality configures adaptive environments in which data, intelligent agents, and users are connected in a dynamic network of information co-regulation. In such architectures, value does not derive exclusively from the transfer of information but from the system’s ability to support contextual coordination and mutual adjustment between human and algorithmic components.
In this framework, interactive digital agents function as integration interfaces between the user and the extended digital infrastructure [50]. Digital synergy reflects the degree to which the interaction is perceived as coherent, continuous, and adaptive, facilitating a unified experience in which the user’s cognitive processes and the system’s decision-making mechanisms are aligned. As algorithmic sophistication increases, digital agents are evaluated less as tools and more as collaborative entities capable of supporting everyday activities and decision-making processes in a contextual and integrated manner.
Conceptually, digital synergy is distinct from satisfaction. While satisfaction represents an overall evaluative judgment of the interaction experience, digital synergy captures the perceived structural alignment between user behavior, adaptive system responses, and infrastructural integration. The former reflects affective evaluation, whereas the latter reflects systemic coordination.
User experience studies indicate that systems capable of supporting a coherent, continuous, and contextualized conversational flow are evaluated as more efficient, credible, and interactively engaging [8,43]. Digital agents that integrate context-maintaining mechanisms, intention anticipation, and adaptive personalization are perceived as offering a higher level of functional utility and situational relevance. Research on conversational artificial intelligence highlights that positive perceptions are amplified when the system goes beyond reacting to explicit requests and demonstrates the ability to anticipate needs and dynamically adjust the interaction [45].
In this conceptual framework, digital synergy describes the degree of alignment between the user’s cognitive processes and the operational dynamics of the system. An interaction characterized by coherence, continuity, and contextual adaptation reduces experiential dissonance and reinforces positive evaluations of system performance. Empirically, the distinctiveness of the digital synergy construct was verified through discriminant validity analysis using both the HTMT ratio and the Fornell–Larcker criterion, confirming that the construct captures a distinct coordination dimension rather than overlapping with engagement or adaptive capability.
The literature on trust and acceptance of technology suggests that the perception of smooth coordination between the user and the system favors the formation of a higher overall evaluation, reflected in increased satisfaction and intention to continue use [42]. Based on the preceding discussion, the following hypothesis is introduced:
Hypothesis 7 (H7). 
Digital synergy is positively associated with user satisfaction.
The hypothesized relationships are structured as direct effects of the predictive variables on both user satisfaction and digital synergy, with digital synergy modeled as a direct determinant of satisfaction. Such a structural architecture, shown in Figure 1, allows for the simultaneous assessment of technological and behavioral effects on user experience, as well as for the examination of digital synergy as an explanatory mechanism within human–system interaction.

3. Research Methodology

The methodological framework of the research was designed to ensure rigorous alignment between the theoretical foundations of the conceptual model and its empirical validation. The methodological structure integrates theoretical reasoning with quantitative analysis, allowing for the systematic assessment of the relationships between perceived innovation, user engagement, the adaptive capacity of digital agents, digital synergy, and user satisfaction in BCI-inspired human–system interaction contexts. The research design combines conceptual modeling, operationalization of latent constructs, data collection through a standardized instrument, and statistical validation using the PLS-SEM technique, which is suitable for the analysis of complex structural explanatory models.

3.1. Methodological Framework and Conceptual Model Development

The main methodological objective of the research is to develop and validate an integrated conceptual model that explains the relationships between the perceived characteristics of BCI-inspired digital agents and user satisfaction in human–robot interaction contexts. The conceptual model includes three technological and behavioral antecedents (PDAI, UED, and DAAC) as well as the integrative construct of DS, which contributes to explaining US. The analysis aims to highlight how these dimensions structurally interact in configuring the evaluation of the digital experience.
To ensure the relevance and consistency of the evaluation context, participants were presented with standardized scenarios describing hypothetical interactions with BCI-inspired digital agents capable of contextual adaptation and behavioral adjustment. The scenarios were built to simulate realistic use cases, such as virtual assistance in support services, communication in virtual reality environments, digital counseling based on artificial intelligence, and collaboration in augmented educational spaces. The approach used allows the evaluation of user perceptions without the involvement of physiological devices or experimental interventions.
Scenario-based research designs are frequently employed in the study of emerging technologies when direct system implementation is impractical, technologically premature, or ethically constrained. Such an approach enables controlled exposure to standardized interaction configurations, thereby minimizing extraneous variance while preserving conceptual realism. In early-stage theoretical modeling, scenario-based methodologies provide a structured environment for evaluating perceptual and evaluative mechanisms without introducing the confounding effects associated with fully operational experimental systems.
The selection of the PLS-SEM method was based on the complexity of the conceptual model, which includes direct relationships between latent variables and allows the examination of structural configurations compatible with subsequent interpretations of indirect effects. The PLS-SEM method is suitable for moderate-sized samples and exploratory designs, providing stable estimates even under conditions of non-normal data distributions [51]. Preliminary diagnostics indicated deviations from multivariate normality, further supporting the suitability of PLS-SEM.
Compared to covariance-based models (CB-SEM), this technique allows for the simultaneous analysis of measurement structure and structural relationships, making it suitable for testing hypotheses regarding the influence of perceived characteristics of digital agents and interactional integration mechanisms on user satisfaction. The method also provides predictive relevance indicators (Q2) and allows for the verification of mediated effects, which are difficult to estimate through other statistical procedures.
The interaction scenarios were developed based on recent literature in the field of human–robot interaction and human–AI interaction [4,21,24], which highlights that the evaluation of autonomous digital agents is not limited to the appreciation of immediate functionality but involves complex evaluative processes regarding perceived usefulness, behavioral coherence, adaptability, and contextual integration into the flow of user activities. In such configurations, users formulate synthetic judgments on the system’s performance by referring to the degree of interactional coordination, dialogue continuity, and congruence between their intentions and the agent’s responses. Such a structuring allows for the investigation of relationships between variables in a context realistic enough to support external validity, without involving the actual use of BCI devices or experimental interventions on participants.
The methodology relies on empirically validated tools and theoretical frameworks established in human–robot interaction and digital technology acceptance research, ensuring coherence between the conceptual foundations of the model and the operationalization of latent constructs. The methodological structure allows for the systematic examination of the relationships between perceived innovation, user engagement, digital agent adaptability, digital synergy, and user satisfaction within a unified and rigorous analytical framework.
The testing of the conceptual model focused on digital interaction scenarios relevant to contemporary users, such as intelligent assistance, automated support, collaboration in virtual environments, and the use of digital agents in educational or informational contexts. The selection of these contexts is justified by the increased frequency of interactions with intelligent systems and their potential to simultaneously activate assessments of technological innovation, the level of interactional involvement, the perception of adaptability, and the degree of functional integration into the digital ecosystem. In this way, the analysis remains focused on the explanatory mechanisms proposed in the model, without involving the actual use of BCI devices or experimental interventions on participants.

3.2. Instrument Construction and Definition of Latent Variables

The theoretical variables included in the proposed model were defined and operationalized in accordance with the literature in the field of human–robot interaction and the acceptance of digital technologies in interactive environments. The selection of constructs was based on empirical and conceptual research in the fields of artificial intelligence, adaptive systems, and user behavior in digital contexts, which highlights the relationships between the perceived characteristics of intelligent systems and the evaluation of the interaction experience. The theoretical model includes five latent variables (PDAI, UED, DAAC, DS, US) which reflect the main explanatory dimensions of the interactive experience in contexts inspired by BCI.
Each variable was operationalized through measurement items adapted from scales previously validated in the literature on technology acceptance, human–computer interaction, and adaptive intelligent systems, with item wording adjusted to reflect the specific context of BCI-inspired adaptive digital agents. The adaptation process involved a semantic consistency review performed by three experts in human–technology interaction and digital communication, who evaluated the conceptual alignment and clarity of each item. Based on their feedback, minor wording adjustments were implemented to improve clarity and contextual relevance before the pilot testing phase.
PDAI refers to users’ assessment of the degree of novelty, technological sophistication, and functional integration of the digital agent, including elements such as machine learning, algorithmic personalization, and contextual adjustment. UED reflects the level of cognitive engagement and active participation in the interaction process, describing the perceived intensity, relevance, and continuity of the interaction with the digital agent. DAAC expresses the extent to which the system is perceived as being able to learn from interactions and adjust its behavior according to the context of use, contributing to a coherent and predictable interactive experience.
DS refers to the degree of functional integration of the digital agent into a coherent interactive ecosystem, in which the user, the intelligent system, and the digital infrastructure are operationally aligned to support a fluid and coordinated experience. This dimension highlights the level of connectivity, interactional coherence, and harmonization between human and technological components, configuring a digital collaboration framework oriented towards efficiency and operational continuity. US reflects the global evaluation of the interaction, including the perception of the usefulness, performance, experiential quality, and functional value of the system.
The defined dimensions provide an integrated perspective on the evaluative processes involved in the interaction with BCI-inspired digital agents, allowing the analysis of the relationships between the perceived characteristics of the system and the evaluation of the user experience. The conceptual model assumes that the variables of the perceived level of innovation, user engagement, adaptive capacity, and digital synergy act as predictors of user satisfaction, analyzing both the direct effects on satisfaction and the structural relationships between the antecedents of the model and digital synergy. In this way, the research offers a coherent explanatory framework regarding the factors that influence the quality of the digital experience in human–robot interaction contexts, providing a solid methodological basis for testing the formulated hypotheses.

3.3. Selection Strategy and Participant Profile

The target population of the research was made up of active users of digital technologies familiar with interactions with virtual agents, intelligent assistants, or conversational AI systems. The selection aimed to ensure diversity in terms of age, gender, education level, and field of activity. Given the online nature of data collection, a non-probabilistic convenience sampling strategy was used, suitable for exploratory research and structural explanatory models tested through PLS-SEM. The minimum sample size was determined through a statistical power analysis conducted using the G*Power 3.1 software. The calculation was based on a multiple linear regression model with three predictor constructs corresponding to the structural paths leading to the endogenous variable. The analysis assumed a small effect size (f2 = 0.05), a significance level of α = 0.05, and a statistical power of 0.80. Under these parameters, the minimum recommended sample size was approximately 220–240 respondents. The final sample of 268 participants therefore exceeds the minimum requirement for reliable PLS-SEM estimation.
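The reported G*Power calculation can be approximated in code. The sketch below is illustrative only (the authors used G*Power 3.1, not this script): it solves for the minimum N of the overall F test in a multiple regression with k = 3 predictors, f² = 0.05, α = 0.05, and target power 0.80, using SciPy's noncentral F distribution with the conventional noncentrality parameter λ = f²·N.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, k, f2, alpha=0.05):
    """Power of the overall F test for a multiple regression with k predictors."""
    df1, df2 = k, n - k - 1
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    ncp = f2 * n  # noncentrality parameter, Cohen's convention
    return 1 - ncf.cdf(f_crit, df1, df2, ncp)

def min_sample_size(k=3, f2=0.05, alpha=0.05, target=0.80):
    """Smallest N reaching the target power, found by simple iteration."""
    n = k + 2
    while regression_power(n, k, f2, alpha) < target:
        n += 1
    return n

n_min = min_sample_size()
```

Under these assumptions, the iteration converges in the low 220s, consistent with the 220–240 range reported above, and the final sample of 268 comfortably exceeds this minimum.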
Participants were recruited through online platforms and professional communities focused on digital technology and the use of interactive AI systems, such as Prolific, and relevant academic and professional networks. These channels were selected to ensure access to respondents familiar with advanced digital interactions, without assuming direct experience with BCI devices. The questionnaire was developed and administered by the research team, being distributed online via a secure link to individuals who met the pre-established eligibility criteria: age over 18 years and previous experience in using digital agents or intelligent assistants. Participation was voluntary and anonymous, and respondents were informed in advance about the exclusively scientific purpose of the study, the confidentiality of the data, and the possibility to withdraw at any time, without consequences.
Data collection was conducted between August 2025 and December 2025 through a structured online questionnaire designed to operationalize the constructs included in the conceptual model (PDAI, DS, UED, US, DAAC). The questionnaire also included demographic items regarding age, gender, education level, and professional experience. All latent constructs were measured through reflective items rated on a five-point Likert scale (1 = “totally disagree,” 5 = “totally agree”), an approach frequently used in research on the acceptance and evaluation of digital technologies, allowing for comparable quantification of respondents’ perceptions and attitudes.
The research instrument was pretested on a pilot sample of 25 respondents, with the aim of assessing semantic clarity, wording coherence, and item adequacy at a conceptual level. Feedback analysis led to the reformulation of some ambiguous items and the elimination of two redundant statements, contributing to improving the internal consistency and readability of the instrument. Content validity and face validity were assessed with the support of three experts in the field of human–technology interaction and digital communication, who analyzed the theoretical relevance of each item and the degree of adequate coverage of the latent constructs included in the model.
By combining online recruitment from relevant professional sources, expert validation of the instrument, and the application of quality control procedures for the responses, the research ensures the robustness and credibility of the data collected. The study involved exclusively the evaluation of hypothetical digital interaction scenarios, without the collection of sensitive data and without experimental manipulations or interventions on participants. Participation was voluntary, anonymous, and based on informed consent, in accordance with generally accepted ethical principles in online behavioral research.
The demographic distribution of the 268 participants included in this study is presented in Table 1. The dataset provides an overview of respondents’ gender, age group, education level, and professional background. The distribution reflects a heterogeneous sample of digitally active individuals with varying levels of exposure to emerging technologies, providing sufficient variability for testing the structural relationships proposed in the model.
The evaluation of the measurement model aimed to verify the reliability and validity of the latent constructs included in the analysis. Internal consistency was examined by Cronbach’s Alpha coefficient and Composite Reliability (CR), and convergent validity was assessed by Average Variance Extracted (AVE). In order to identify possible collinearity problems between indicators and constructs, the Variance Inflation Factor (VIF) was analyzed, in accordance with the methodological recommendations in the PLS-SEM literature [51].
The structural model (inner model) was estimated using the PLS-SEM method, implemented in SmartPLS 4. The justification for using PLS-SEM derives from the explanatory structural design of the research and the complexity of the relationships between the latent variables included in the model [51]. The method is suitable for samples of moderate size and does not impose the assumption of normal data distribution. The significance of the regression coefficients was assessed using the bootstrapping procedure with 5000 re-samplings, and the explanatory power of the model was analyzed using the coefficient of determination (R2) and predictive relevance (Q2). Effect sizes (f2) were also examined to assess the relative contribution of each predictor construct to the endogenous variables.

4. Results

Following the methodological framework outlined previously, this section reports the empirical assessment of the proposed conceptual model. To ensure the internal coherence of the latent constructs and the robustness of the empirical findings, the analysis was conducted in two consecutive stages using PLS-SEM. The first stage involved the evaluation of the measurement model, focusing on reliability and validity indicators, whereas the second stage examined the structural model and tested the hypothesized relationships among constructs. This sequential analytical procedure ensures a rigorous distinction between construct validation and structural path estimation, thereby enhancing methodological transparency and inferential consistency.
The results of the measurement model evaluation are presented in Table 2. The reported indicators—including Cronbach’s alpha, CR, AVE, and VIF—demonstrate satisfactory levels of internal consistency, convergent validity, and absence of multicollinearity across all latent constructs. These findings confirm the adequacy of the measurement model and provide a robust statistical foundation for proceeding to the assessment of the structural relationships and hypothesis testing.
Among all constructs analyzed, US recorded the highest Cronbach’s Alpha value (α = 0.902), indicating excellent internal consistency in the evaluation of satisfaction associated with interaction with BCI-inspired digital agents. UED (α = 0.873) and DS (α = 0.860) also demonstrate strong internal reliability, reflecting coherent measurement of their respective dimensions. PDAI achieved α = 0.841, indicating good reliability. The lowest value was observed for DAAC (α = 0.821); however, this exceeds the recommended threshold of 0.70, confirming adequate internal consistency.
To further strengthen the reliability assessment, composite reliability (CR) was computed as a more comprehensive indicator of internal consistency, as it accounts for the actual outer loadings of indicators rather than assuming equal contribution across items [52]. All constructs exceed the recommended threshold of 0.70, with CR values ranging between 0.835 and 0.910, indicating stable and internally consistent measurement of the latent dimensions.
Among the constructs, US demonstrates the highest composite reliability (CR = 0.910), reflecting a highly coherent measurement structure that captures the evaluative dimension of digital agent interaction. PDAI (CR = 0.850), UED (CR = 0.880), and DS (CR = 0.870) also exhibit strong internal consistency. Although DAAC presents the lowest CR value (CR = 0.835), it remains well above the recommended threshold, confirming the adequacy of its operationalization within the model. Overall, the CR results reinforce the robustness of the measurement model and provide additional support for proceeding to structural analysis.
The assessment of convergent validity, evaluated through the AVE, indicates that all constructs exceed the recommended minimum threshold of 0.50, confirming adequate convergence of indicators on their respective latent dimensions. The highest AVE values were recorded for US (0.679) and UED (0.654), suggesting a strong capacity of these constructs to explain the variance of their associated items. Similarly, PDAI (0.621) and DS (0.633) demonstrate robust levels of convergent validity, reflecting coherent structural operationalization. Although DAAC presents the lowest AVE value (0.592), it remains comfortably above the theoretical threshold, supporting both convergent validity and the adequacy of construct operationalization.
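The CR and AVE indicators discussed above follow the standard PLS-SEM formulas: CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = mean of squared standardized loadings. The sketch below computes both from a hypothetical set of outer loadings; the loading values are illustrative and are not the study's data.

```python
def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    s = sum(loadings) ** 2
    e = sum(1 - l ** 2 for l in loadings)
    return s / (s + e)

def average_variance_extracted(loadings):
    # AVE = mean of squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

# Illustrative loadings for a five-item reflective construct (not the study's data)
lams = [0.78, 0.81, 0.75, 0.80, 0.77]
cr = composite_reliability(lams)
ave = average_variance_extracted(lams)
```

For loadings of this magnitude, CR falls near 0.89 and AVE near 0.61, i.e., above the 0.70 and 0.50 thresholds applied in the text.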
Potential multicollinearity issues were examined using the Variance Inflation Factor (VIF). All reported values fall below the conservative threshold of 5.00, indicating the absence of severe collinearity and supporting the stability of the measurement model estimations [53]. The highest VIF values were observed for US7 (2.79), US5 (2.78), and PDAI5 (2.75); however, these remain within acceptable limits and do not compromise statistical independence among indicators. Conversely, lower values recorded for DAAC1 (1.65) and PDAI1 (1.85) confirm appropriate variance distribution and the absence of excessive inter-item dependency.
To assess potential common method bias (CMB), two diagnostic procedures were employed. First, Harman’s single-factor test indicated that the first unrotated factor accounted for less than 50% of the total variance, suggesting that no single factor dominated the covariance structure. Second, full collinearity VIF values were examined following [54], with all latent constructs presenting VIF values below the conservative threshold of 3.3. These results suggest that common method bias is unlikely to pose a significant threat, although it cannot be fully ruled out due to the single-source design.
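The (full-collinearity) VIF diagnostics used above rest on the formula VIF_j = 1 / (1 − R²_j), where R²_j comes from regressing variable j on all the others. The sketch below demonstrates this on synthetic, moderately correlated scores; the data are simulated and do not reproduce the study's indicators.

```python
import numpy as np

def vif(X):
    """VIF for each column of X: regress column j on the remaining columns."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])      # design matrix with intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1 / (1 - r2))                       # VIF_j = 1 / (1 - R^2_j)
    return out

# Five synthetic scores sharing a common factor (pairwise r around 0.5)
rng = np.random.default_rng(0)
z = rng.normal(size=(268, 1))
X = z + rng.normal(scale=1.0, size=(268, 5))
vifs = vif(X)
```

With this level of shared variance the VIFs land well below the 3.3 full-collinearity threshold cited above, illustrating why values in that range are read as unproblematic.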
Discriminant validity was assessed using the Heterotrait–Monotrait Ratio (HTMT), a stringent criterion recommended for evaluating construct distinctiveness in PLS-SEM models [51]. HTMT estimates the ratio of between-construct correlations relative to within-construct correlations, thereby providing a sensitive diagnostic of potential conceptual overlap. This procedure is particularly important in models integrating theoretically adjacent constructs, as it ensures that each latent variable captures a distinct conceptual domain within the structural framework.
The results presented in Table 3 indicate that all HTMT values remain below the conservative threshold of 0.90, supporting adequate discriminant validity across the constructs included in the model. The highest HTMT value is observed between DS and US (HTMT = 0.86). Although these constructs are theoretically related within the proposed mediation framework, the coefficient remains below the critical threshold, confirming their empirical distinctiveness. This finding aligns with prior research suggesting that coordinated digital interaction enhances perceived value and technology acceptance while remaining conceptually distinguishable from overall evaluative satisfaction [41,42]. Importantly, the magnitude of the association supports theoretical proximity without indicating construct redundancy.
Similarly, the relationship between DAAC and DS (HTMT = 0.81) indicates a substantial yet acceptable level of association. This finding suggests that perceived adaptive system behavior contributes meaningfully to the perception of coordinated digital interaction, without compromising construct independence. The result is consistent with research on adaptive AI systems, which emphasizes that personalization enhances interaction depth while remaining theoretically distinguishable from broader ecosystem integration [3,8,16,45].
Additionally, PDAI and DS present an HTMT value of 0.84, indicating a strong but acceptable conceptual proximity. This suggests that perceived technological advancement contributes to enhanced digital integration while remaining empirically distinguishable as a separate antecedent construct. The finding reinforces the interpretation of innovation as a driver of interaction quality rather than a redundant dimension of digital synergy.
In contrast, the relatively low HTMT value between UED and DAAC (HTMT = 0.33) confirms a clear conceptual distinction between engagement and perceived system adaptability. This result indicates that although adaptive features may facilitate engagement, engagement represents a broader experiential state that cannot be reduced to system learning capabilities alone.
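The HTMT ratio interpreted in the paragraphs above can be computed directly from raw item scores: the mean of the between-construct item correlations divided by the geometric mean of the average within-construct correlations. The sketch below uses simulated items loading on two distinct but correlated factors (not the study's data), producing an HTMT comfortably below 0.90.

```python
import numpy as np

def htmt(items_a, items_b):
    """Heterotrait-Monotrait ratio from raw item scores (columns = items)."""
    A, B = np.asarray(items_a, float), np.asarray(items_b, float)
    R = np.corrcoef(np.column_stack([A, B]), rowvar=False)
    pa = A.shape[1]
    hetero = R[:pa, pa:].mean()                        # between-construct correlations
    tri = lambda M: M[np.triu_indices_from(M, k=1)].mean()
    mono_a, mono_b = tri(R[:pa, :pa]), tri(R[pa:, pa:])
    return hetero / np.sqrt(mono_a * mono_b)

# Two distinct latent factors correlated at roughly 0.6 (simulated, illustrative)
rng = np.random.default_rng(1)
n = 268
f1 = rng.normal(size=(n, 1))
f2 = 0.6 * f1 + 0.8 * rng.normal(size=(n, 1))
A = 0.8 * f1 + rng.normal(scale=0.6, size=(n, 3))      # three items of construct A
B = 0.8 * f2 + rng.normal(scale=0.6, size=(n, 3))      # three items of construct B
value = htmt(A, B)
```

Because the two factors here correlate at about 0.6 with equally reliable items, the HTMT estimate lands near 0.6, well under the conservative 0.90 cutoff used in Table 3.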
Discriminant validity was further examined using the Fornell–Larcker criterion, which compares the square root of the average variance extracted (AVE) for each construct with the corresponding inter-construct correlations. The results presented in Table 4 indicate that this condition is satisfied for all constructs, as the diagonal values (PDAI = 0.788; UED = 0.809; DAAC = 0.770; DS = 0.796; US = 0.824) exceed the corresponding inter-construct correlations. These findings confirm both the conceptual and empirical distinctiveness of the latent dimensions included in the model.
The correlation between PDAI and US indicates a substantial positive association between perceived technological innovation and the overall evaluation of the interaction experience. Although the relationship is strong, it remains below the square root of AVE values for both constructs (PDAI = 0.788; US = 0.824), thereby confirming the preservation of discriminant validity. This result supports the hypothesis that perceived innovation contributes significantly to user satisfaction, in line with technology acceptance models [41] and the literature on adaptive AI systems [45]. The convergence of both HTMT and Fornell–Larcker criteria provides complementary evidence for construct separation within the structural model.
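The Fornell–Larcker check described above reduces to a simple comparison: each pairwise construct correlation must fall below the square roots of both constructs' AVE values. The sketch below uses the AVE values reported in the text; the pairwise correlations are placeholders for illustration, not the actual Table 4 entries.

```python
import math

# AVE values reported in the text; the Fornell-Larcker diagonal is sqrt(AVE)
ave = {"PDAI": 0.621, "UED": 0.654, "DAAC": 0.592, "DS": 0.633, "US": 0.679}
sqrt_ave = {k: math.sqrt(v) for k, v in ave.items()}

def fornell_larcker_ok(sqrt_ave, corr):
    """corr: dict of pairwise construct correlations {('A', 'B'): r}."""
    return all(r < min(sqrt_ave[a], sqrt_ave[b]) for (a, b), r in corr.items())

# Illustrative correlations (placeholders, not the study's Table 4 entries)
corr = {("PDAI", "US"): 0.70, ("DS", "US"): 0.76, ("DAAC", "DS"): 0.72,
        ("UED", "DAAC"): 0.30, ("PDAI", "DS"): 0.74}
ok = fornell_larcker_ok(sqrt_ave, corr)
```

Note that the square roots of the reported AVE values reproduce the diagonal entries quoted from Table 4 (e.g., sqrt(0.679) ≈ 0.824 for US), confirming internal consistency between the two tables.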
The coefficient of determination (R2) was used to evaluate the explanatory power of the structural model, indicating the proportion of variance in endogenous constructs explained by their predictors [51]. To avoid overestimation in the presence of multiple predictors, adjusted R2 values were also examined, providing a more conservative estimate of model performance.
The results indicate that US registers an R2 value of 0.668 (adjusted R2 = 0.661), suggesting that 66.8% of the variance in satisfaction is explained by PDAI, UED, DAAC, and DS. This represents substantial explanatory power, consistent with robust structural explanatory models in Human–Robot Interaction and Information Systems research, confirming the relevance of the constructs included in the model.
Similarly, DS exhibits an R2 value of 0.592 (adjusted R2 = 0.586), indicating that approximately 59.2% of its variance is explained by innovation, engagement, and adaptive capacity. This can be interpreted as moderate-to-substantial explanatory power, suggesting that technological and behavioral antecedents substantially contribute to the configuration of functional integration within the digital ecosystem.
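The adjusted R² values cited above follow the standard correction 1 − (1 − R²)(n − 1)/(n − k − 1). The sketch below applies it with n = 268, assuming k = 4 predictors for US (PDAI, UED, DAAC, DS) and k = 3 for DS, which yields values within rounding distance of those reported.

```python
def adjusted_r2(r2, n, k):
    # Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1)
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

adj_us = adjusted_r2(0.668, n=268, k=4)  # US predicted by PDAI, UED, DAAC, DS
adj_ds = adjusted_r2(0.592, n=268, k=3)  # DS predicted by PDAI, UED, DAAC
```

With n = 268 the penalty is small (roughly 0.005 of R²), which is why the R² and adjusted R² values reported above differ only marginally.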
Predictive relevance was assessed using the Stone–Geisser Q2 indicator, where positive values confirm the predictive capability of endogenous constructs [55,56]. According to PLS-SEM guidelines, Q2 values above 0.25 indicate moderate predictive relevance, while values above 0.50 indicate high predictive capability.
Figure 2 illustrates the estimated structural model, including standardized path coefficients and indicator loadings obtained through the PLS-SEM analysis. The diagram provides a visual representation of the structural relationships among the latent constructs and confirms the magnitude and direction of the hypothesized effects within the proposed coordination-based HRI framework.
The results show that DS records a Q2 value of 0.431, indicating solid predictive relevance of its antecedents (PDAI, UED, and DAAC). At the same time, US presents a Q2 value of 0.514, suggesting high predictive capability in explaining user experience. The moderate difference between R2 and Q2 values confirms the stability of the estimates and the robustness of the predictive structure, without indicating overfitting.
To complement the predictive assessment, global model fit was evaluated using the Standardized Root Mean Square Residual (SRMR), a widely recommended goodness-of-fit indicator in PLS-SEM models. The SRMR value obtained for the proposed model is 0.062, which is below the commonly accepted threshold of 0.08, indicating an adequate overall model fit and supporting the appropriateness of the structural specification.
To evaluate the strength and statistical significance of the hypothesized relationships, the structural model was assessed using bootstrapping with 5000 resamples. The standardized path coefficients (β), corresponding t-values, and p-values are reported in Table 5. The pattern of results confirms the structural relationships specified in the proposed coordination-based HRI model.
Effect size (f2) was calculated to assess the individual contribution of each predictor to its corresponding endogenous construct, quantifying the change in R2 when a specific predictor is omitted from the model, relative to the variance left unexplained. Based on recommended thresholds (0.02 = small, 0.15 = moderate, 0.35 = large), the results presented in Table 6 reveal meaningful variation in predictor impact.
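Cohen's f2 can be computed directly from the R2 values of the model with and without the focal predictor. The sketch below uses the reported R2 for US (0.668); the excluded-model value of 0.548 is an illustrative back-derivation consistent with the reported DS → US effect, not a figure taken from the paper.

```python
# Cohen's f^2 for a single predictor:
#   f2 = (R2_included - R2_excluded) / (1 - R2_included)
# R2_included = 0.668 is the reported R2 for US; R2_excluded = 0.548
# is a hypothetical back-derived value consistent with f2 ~ 0.36.

def effect_size_f2(r2_included: float, r2_excluded: float) -> float:
    return (r2_included - r2_excluded) / (1.0 - r2_included)

f2 = effect_size_f2(0.668, 0.548)
label = ("large" if f2 >= 0.35 else
         "moderate" if f2 >= 0.15 else
         "small" if f2 >= 0.02 else "negligible")
print(round(f2, 3), label)  # 0.361 large
```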
The strongest structural effect is observed for the relationship DS → US (f2 = 0.360), indicating a large effect and confirming the central role of digital synergy in explaining user satisfaction. This finding suggests that functional integration and interactional coordination constitute the primary determinants of evaluative outcomes in BCI-inspired adaptive HRI contexts.
With regard to the antecedents of DS, the relationships DAAC → DS (f2 = 0.230) and PDAI → DS (f2 = 0.190) exhibit moderate effect sizes, indicating that both perceived adaptive capacity and perceived innovation substantially contribute to the configuration of functional integration between the user and the digital ecosystem. The relationship UED → DS (f2 = 0.140) falls just below the moderate threshold of 0.15, suggesting that user engagement contributes to synergy formation, albeit to a lesser extent than adaptive capacity and innovation.
Regarding US, the direct effect of PDAI → US (f2 = 0.170) is moderate, whereas UED → US (f2 = 0.110) and DAAC → US (f2 = 0.130) indicate small-to-moderate effects. This structural pattern suggests that, although perceived innovation, engagement, and adaptive capacity exert direct influences on satisfaction, their impact becomes substantially amplified when mediated through digital synergy, which operates as the principal explanatory mechanism within the model.
To examine the mediating role of DS between the antecedent constructs (PDAI, UED, DAAC) and US, bootstrapped indirect effects were estimated using 5000 resamples. The results indicate that the indirect paths PDAI → DS → US, UED → DS → US, and DAAC → DS → US are positive and statistically significant (p < 0.05), confirming the presence of partial mediation through digital synergy.
The estimated indirect effect values are β = 0.143 for PDAI → DS → US, β = 0.117 for UED → DS → US, and β = 0.164 for DAAC → DS → US. The corresponding bootstrapped standard errors are 0.034, 0.039, and 0.041, respectively, with 95% confidence intervals that do not include zero. These results further confirm the mediating role of digital synergy in translating perceived innovation, user engagement, and adaptive capacity into overall user satisfaction.
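The percentile-bootstrap logic behind such indirect-effect tests can be sketched as follows. This is an illustrative approximation on synthetic data with a built-in mediation structure, not the study's data or software; in particular, the b path is estimated here by a simple regression of Y on M, without partialling out X as a full mediation model would.

```python
# Percentile-bootstrap test of an indirect effect a*b (X -> M -> Y),
# mirroring the 5000-resample procedure reported above.
# Synthetic data only; path names a, b and all parameters are
# hypothetical. Simplification: the b path ignores X.
import random

random.seed(7)
n = 268
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]                          # a ~ 0.5
y = [0.4 * mi + 0.2 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]  # b ~ 0.4

def slope(u, v):
    """OLS slope of v regressed on u (simple regression)."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    den = sum((ui - mu) ** 2 for ui in u)
    return num / den

boot = []
for _ in range(5000):
    idx = [random.randrange(n) for _ in range(n)]          # resample with replacement
    xs, ms, ys = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
    boot.append(slope(xs, ms) * slope(ms, ys))             # a_hat * b_hat per resample

boot.sort()
lo, hi = boot[int(0.025 * 5000)], boot[int(0.975 * 5000) - 1]
print(f"95% CI for a*b: [{lo:.3f}, {hi:.3f}]")  # interval excluding zero -> mediation supported
```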

5. Discussion

The present analysis confirms the empirical robustness of the proposed structural model and advances a coordination-based interpretative framework explaining how perceived technological attributes of BCI-inspired digital agents translate into user satisfaction. Rather than operating as isolated predictors, perceived innovation, user engagement, and adaptive capacity converge within an integrated structural configuration in which digital synergy mediates their combined influence on evaluative outcomes.
The findings indicate that satisfaction in advanced human–system interaction contexts is strongly associated with the perceived coherence of technological attributes within a unified digital ecosystem. Technological sophistication or adaptive functionality considered in isolation is insufficient to generate sustained evaluative approval. Instead, satisfaction is strongly associated with the perceived structural alignment of innovation, engagement, and adaptive capacity as components of a coordinated interactional architecture, which transforms discrete technological properties into integrated experiential value.
From a systems engineering perspective, the findings suggest that adaptive HRI architectures should be evaluated not only in terms of performance metrics but also through structural coherence indicators that capture integration across user, algorithmic, and infrastructural layers.
Adaptive interaction environments rarely operate under perfectly stable conditions. In real-world HRI deployments, digital agents may encounter latency, incomplete contextual information, or incorrect interpretation of user intentions. Under such circumstances, the perceived level of digital synergy may decrease, as interactional coherence and coordination between the user and the system become disrupted. These conditions may also influence satisfaction judgments, as delayed responses or misaligned behavioral adjustments can reduce the perception of system reliability and responsiveness. Future research could therefore investigate how interactional noise, latency, or ambiguous user signals influence the stability of digital synergy and its mediating role in shaping user satisfaction.
This coordination-based mechanism extends classical technology acceptance frameworks, such as the Technology Acceptance Model (TAM) and UTAUT, in which perceived usefulness and ease of use are conceptualized as primary antecedents of satisfaction. While these constructs remain relevant in conventional interface-based systems, the present findings demonstrate that in BCI-inspired adaptive interaction environments, evaluation processes are structurally mediated. Innovation and adaptive capacity acquire explanatory relevance insofar as they contribute to systemic coherence within the interaction architecture rather than functioning as standalone determinants.
The research questions addressed in this study respond to a central issue in contemporary HRI: how evaluative structures evolve when digital agents transition from reactive interfaces to neuro-adaptive, context-sensitive collaborators embedded in distributed digital ecosystems. In interaction environments characterized by adaptive AI systems, distributed intelligence, and context-aware infrastructures, user experience can no longer be reduced to functional efficiency or usability metrics alone. Interaction quality emerges from the structural alignment among perceived technological sophistication, user engagement, adaptive recalibration, and infrastructural integration within the surrounding digital environment.
Empirically, the validated structural model demonstrates that PDAI, UED, and DAAC jointly shape DS, which in turn plays a decisive role in explaining US. The results therefore highlight the systemic character of digital collaboration in BCI-inspired adaptive HRI contexts, indicating that evaluative outcomes reflect interactional integration rather than the isolated performance of technological features.
Addressing RQ1, the study advances a conceptual clarification of the BCI-inspired adaptive agent within the HRI framework. Importantly, the agent is not conceptualized as a literal neurophysiological interface but as a cognitively aligned digital system capable of interpreting user intentions, dynamically recalibrating behavior, and sustaining contextual coherence over time. Users evaluate such agents not merely through functional outputs but through the perceived integration of adaptive behaviors within a broader digital ecosystem. BCI-inspired adaptivity is thus interpreted as an interactional property that reshapes expectations regarding responsiveness, coordination, and collaborative continuity.
In relation to RQ2, the structural results confirm that perceived innovation, user engagement, and adaptive capacity significantly contribute to the formation of digital synergy. Innovation shapes anticipatory expectations regarding systemic advancement and integration; engagement deepens cognitive and behavioral participation within the interaction process; and adaptive capacity stabilizes continuity across successive interaction episodes. Digital synergy consequently emerges as a coordination outcome generated through the convergence of technological sophistication and user participation, rather than as a superficial attribute of interface design.
Addressing RQ3, digital synergy exerts the strongest structural effect on user satisfaction. Although innovation, engagement, and adaptability display direct influences, their explanatory capacity is substantially amplified when mediated through systemic integration. Satisfaction in BCI-inspired adaptive HRI environments therefore reflects structural congruence within the interaction architecture rather than discrete technological attributes considered independently. Digital synergy operates as a coordination mechanism that aligns user intentions with system responses, integrates adaptive recalibration into a stable collaborative rhythm, and consolidates interaction into a coherent experiential structure.
From a broader theoretical standpoint, these findings indicate that satisfaction in advanced human–system collaboration contexts is best understood as a structural outcome of coordinated adaptive mechanisms. The agent becomes satisfactory not merely because it is innovative, adaptive, or engaging in isolation, but because it sustains a coherent interactional ecosystem characterized by functional alignment, contextual stability, and integrative coordination across technological and cognitive dimensions.
Adaptive robotic control systems integrating perception, decision, and execution layers can incorporate the proposed framework as a coordination-aware validation module. In practical HRI implementations, perception modules (e.g., vision, speech recognition, and physiological sensing) feed adaptive decision layers responsible for behavioral recalibration. The digital synergy construct may serve as an evaluative coordination indicator assessing alignment between user inputs, adaptive control logic, and execution modules. The framework may thus function as a validation layer prior to real-world robotic deployment, supporting iterative tuning of adaptive behaviors before full system integration.
Importantly, the present study does not seek to substitute experimental human–robot interaction trials or hardware-level validation. Rather, it advances an early-stage structural evaluation framework designed to inform and guide subsequent robotic implementation and controlled HRI deployment scenarios. By modeling the perceptual and coordination mechanisms underlying adaptive interaction, the proposed framework functions as a conceptual validation layer that can precede real-world experimentation. In this sense, the model contributes to the iterative development cycle of adaptive robotic systems, supporting the systematic alignment of technological capabilities with human-centered coordination requirements prior to physical integration and empirical testing.
At the same time, the applicability of the proposed model may depend on contextual factors such as users’ technological literacy, prior experience with intelligent systems, and the specific interaction domain in which adaptive agents are deployed. These boundary conditions are discussed further in the limitations section and represent important directions for future empirical investigation.

6. Conclusions

The rapid integration of adaptive digital agents into complex socio-technical ecosystems requires structured validation frameworks capable of informing subsequent robotic implementation and deployment. Within this context, the present study contributes a structurally grounded and empirically validated coordination model explaining how BCI-inspired adaptive agents are associated with user satisfaction through systemic integration mechanisms rather than isolated technological features.
By conceptualizing BCI-inspired adaptivity as a cognitively aligned coordination architecture, rather than a literal neurophysiological interface, the research advances a refined understanding of how users evaluate adaptive digital systems. The findings demonstrate that perceived innovation, user engagement, and adaptive capacity do not operate as independent predictors of evaluative outcomes. Instead, their explanatory power converges through digital synergy—a construct formalized here as the perceived structural alignment between user intentions, algorithmic recalibration, and infrastructural integration.
Innovation contributes to satisfaction not merely by signaling technological advancement but by framing expectations of coherence and forward-oriented integration. Engagement deepens participatory alignment within the interaction process, while adaptive capacity stabilizes continuity across iterative exchanges. However, it is their coordinated integration—operationalized through digital synergy—that ultimately transforms technological sophistication into sustained experiential value.
The study thus supports a theoretical transition from feature-centric paradigms of technology acceptance toward integration-centric frameworks of adaptive digital collaboration. In contrast to conventional models emphasizing usefulness or usability as primary determinants, the present findings suggest that in BCI-inspired adaptive HRI contexts, evaluation processes are structurally mediated by the perceived coherence of the interaction ecosystem. Satisfaction is positively associated with perceived alignment, predictability, and contextual continuity across technological and cognitive dimensions.
Beyond its theoretical contributions, the research offers a scalable analytical foundation for understanding next-generation HRI techniques involving adaptive robotics, distributed AI systems, immersive virtual agents, and cognitive augmentation architectures. By empirically formalizing digital synergy as a system-level construct, the study provides a conceptual bridge between BCI-inspired design principles and collaborative digital ecosystems characterized by distributed intelligence and dynamic recalibration.
From a robotics engineering perspective, the proposed coordination-based framework can be embedded within multi-layer robotic architectures integrating perception modules, adaptive decision layers, and execution control systems. In social robots and industrial cobots, digital synergy can function as an evaluative design criterion for ensuring alignment between adaptive AI components and human-centered operational workflows. Thus, the model contributes not only to theoretical HRI discourse but also to deployment strategies in real-world robotic systems requiring dynamic human–machine coordination.
From a deployment standpoint, the framework becomes particularly relevant in concrete robotic application domains. In collaborative manufacturing environments, where cobots operate alongside human workers, digital synergy may represent a structural criterion for aligning adaptive control mechanisms with human operational rhythms and task sequencing. In healthcare robotics, neuro-adaptive coordination mechanisms could enhance patient–robot interaction coherence in assistive contexts, supporting adaptive responsiveness to cognitive and emotional states. In educational and social robotics, sustained interaction coherence enabled by adaptive recalibration may foster long-term engagement and trust formation across iterative human–robot exchanges. These scenarios illustrate the practical transferability of the proposed model across heterogeneous robotic ecosystems.
Importantly, the findings suggest that perceived advancement of BCI-inspired adaptive technologies may depend less on algorithmic complexity per se and more on architectural integration capable of sustaining coherent interactional rhythms. As digital agents increasingly operate in multi-layered environments—spanning cloud infrastructures, augmented interfaces, and autonomous decision systems—the capacity to align adaptive behavior with user intentionality becomes a defining criterion of experiential quality.
In this sense, the future trajectory of Human–Robot Interaction is likely to be shaped not solely by advances in artificial intelligence or neuro-inspired computation, but by the ability to architect coordinated ecosystems in which adaptive systems function as integrative nodes within distributed cognitive networks. The present study contributes to this trajectory by empirically examining the structural associations through which adaptive digital agents evolve from reactive tools into collaborative partners embedded in coherent digital constellations.
Ultimately, the research contributes to the ongoing conceptual development of BCI-inspired adaptive HRI as a domain in which interaction quality is not reducible to performance efficiency but must be understood as a manifestation of systemic alignment, contextual stability, and integrative coordination. Through this perspective, BCI-inspired digital agents emerge as coordination modules within adaptive robotic architectures, shaping the next generation of intelligent human–system ecosystems.

6.1. Theoretical Contributions and Conceptual Implications

The present study contributes to the theoretical consolidation of BCI-inspired adaptive HRI by formalizing a structurally mediated explanation of user satisfaction within complex digital ecosystems. Rather than conceptualizing satisfaction as a direct outcome of perceived usefulness or technological advancement, the findings support a coordination-based interpretative framework in which systemic integration functions as the primary explanatory mechanism.
First, the results extend classical acceptance models such as the TAM and UTAUT, which predominantly frame satisfaction as a function of perceived usefulness and ease of use. While these constructs remain relevant in conventional interface-based systems, the present model demonstrates that in BCI-inspired adaptive interaction contexts, evaluation processes become structurally mediated. Perceived innovation and adaptive capacity do not translate into satisfaction directly; instead, they acquire explanatory power through their contribution to digital synergy. This suggests a necessary theoretical shift from feature-centric models toward integration-centric architectures.
Second, the findings enrich DOI theory by clarifying that novelty alone does not secure evaluative endorsement in advanced AI-driven environments. Innovation influences satisfaction not merely as perceived technological advancement but as a signal of systemic coherence and forward-oriented integration. Thus, innovation becomes cognitively reframed as a precursor to coordination rather than an isolated attribute.
Third, by empirically positioning digital synergy as a mediating construct, the study aligns with Distributed Cognition and Adaptive Systems perspectives, which conceptualize intelligence as emerging from interactional networks rather than isolated agents. In this view, BCI-inspired digital agents operate as coordination nodes within broader cognitive–technological constellations. Satisfaction arises when users perceive alignment between their intentional structures and the system’s adaptive recalibration within an integrated environment.
Collectively, the study proposes a theoretical reconfiguration of neuro-adaptive HRI: interaction quality is not reducible to performance metrics but must be understood as a structural outcome of coordinated adaptive mechanisms. This mediation-based architecture contributes to the formalization of digital synergy as a theoretically distinct construct within advanced human–system collaboration research.

6.2. Implications for System Architecture and Interaction Design

The structural findings carry important implications for the design and engineering of BCI-inspired adaptive digital agents embedded in distributed ecosystems. First, design strategies should prioritize coordination architecture over isolated performance optimization. Enhancing innovation or adaptive capabilities in isolation is insufficient to maximize user satisfaction. Instead, engineering efforts should focus on ensuring interoperability, temporal continuity, and procedural alignment across interaction layers. System components must function as structurally integrated modules rather than as independent intelligent features.
Second, adaptive capacity should be implemented through mechanisms that preserve perceptual coherence. Users interpret system recalibration as alignment when adaptive behaviors demonstrate contextual consistency and predictable evolution across interaction episodes. This implies that adaptive algorithms should incorporate continuity modeling, memory integration, and transparent recalibration pathways rather than episodic reactive adjustments.
Third, digital agents inspired by BCI principles should be conceptualized not as standalone AI entities but as embedded coordination mechanisms within broader infrastructural ecologies. Engineering architectures must therefore support data synchronization, cross-platform harmonization, and stable interaction rhythms to sustain perceived synergy. The experiential value of BCI-inspired adaptive systems depends less on algorithmic complexity and more on how seamlessly adaptive processes integrate into the user’s operational environment.
This entails the implementation of modular interoperability layers that enable seamless coordination across heterogeneous system components and prevent functional fragmentation within the interaction ecosystem. Architectures should integrate context-memory buffers capable of preserving interaction histories across sessions, thereby supporting continuity, anticipation of user intentions, and sustained behavioral coherence over time. In parallel, adaptive feedback loops must be embedded to enable continuous recalibration of system parameters in response to evolving user patterns, functioning as integrative alignment mechanisms rather than isolated reactive adjustments.
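The context-memory buffer and adaptive feedback loop named above can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's system: the class name, the `pace` parameter, and the smoothing constant are all invented for illustration. The design point is that recalibration moves gradually toward buffered interaction history rather than reacting episodically to the latest signal.

```python
# Illustrative sketch (hypothetical, not the study's implementation):
# a context-memory buffer preserving recent interaction history, and
# an adaptive feedback loop recalibrating a response parameter
# smoothly rather than through abrupt episodic adjustments.
from collections import deque

class AdaptiveAgent:
    def __init__(self, buffer_size: int = 50, learning_rate: float = 0.2):
        self.context = deque(maxlen=buffer_size)  # context-memory buffer
        self.pace = 1.0                           # adaptive response parameter
        self.learning_rate = learning_rate

    def observe(self, user_signal: float) -> None:
        """Store a behavioral cue (e.g., a normalized response tempo)."""
        self.context.append(user_signal)

    def recalibrate(self) -> float:
        """Move the agent's pace gradually toward the buffered average,
        preserving continuity across interaction episodes."""
        if self.context:
            target = sum(self.context) / len(self.context)
            self.pace += self.learning_rate * (target - self.pace)
        return self.pace

agent = AdaptiveAgent()
for signal in [1.4, 1.5, 1.6, 1.5]:  # user gradually speeds up
    agent.observe(signal)
    agent.recalibrate()
print(round(agent.pace, 3))  # pace drifts toward ~1.5 without abrupt jumps
```

The bounded `deque` plays the role of the context-memory buffer (old history is forgotten at a fixed horizon), while the exponential-smoothing update is one simple realization of an adaptive feedback loop with the continuity property the text describes.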
Furthermore, system design should incorporate multi-layer synchronization mechanisms that ensure temporal and procedural alignment between user input streams, algorithmic processing modules, and infrastructural data flows. Such coordinated synchronization transforms technical responsiveness into systemic coherence, enabling adaptive agents to sustain stable, contextually congruent interaction rhythms across complex digital environments. The findings indicate that experiential satisfaction in advanced HRI contexts is optimized when technological sophistication is subordinated to systemic coherence. Design maturity is achieved not through feature accumulation but through architectural integration.
The results suggest several implementation directions for the design of adaptive HRI systems. First, developers of intelligent agents should prioritize mechanisms that enable real-time behavioral adjustment based on user interaction patterns, allowing the system to dynamically recalibrate responses and maintain interaction continuity. Second, designers of collaborative robotic platforms and virtual assistants should focus on improving the systemic integration of digital agents within broader digital ecosystems, ensuring alignment between user actions, system responses, and infrastructural processes. Third, the findings indicate that enhancing user engagement through transparent feedback mechanisms and adaptive interaction cues can significantly strengthen perceived digital synergy and ultimately improve user satisfaction. These insights provide actionable guidance for the development of adaptive digital agents capable of supporting coordinated human–system interaction in complex digital environments.

6.3. Limitations and Future Research

Despite its theoretical and empirical contributions, the study presents several limitations that delineate avenues for further investigation. The use of convenience sampling limits statistical generalizability; although the predictive orientation of PLS-SEM prioritizes the examination of structural relationships over population-level inference, and the sample includes participants from diverse technological backgrounds such as artificial intelligence, human–computer interaction, and software development, the distribution indicates a population with relatively high exposure to digital technologies that may not fully represent the broader population of potential HRI users. Future research could test the proposed model using more heterogeneous samples and domain-specific user groups.
First, the cross-sectional research design restricts causal inference and does not capture the temporal evolution of digital synergy. Future longitudinal research could examine how repeated interaction cycles influence the stabilization of systemic coherence and whether synergy strengthens or attenuates over extended engagement periods.
Second, the operationalization of BCI-inspired adaptivity relied on perceptual assessments derived from scenario-based exposure rather than real-time physiological or behavioral data. Although this approach ensured conceptual control, experimental studies incorporating live interaction environments, biometric feedback, or behavioral performance metrics could refine the explanatory precision of the mediation mechanism. Future research could integrate behavioral trace data or experimental prototypes to complement perceptual modeling and increase ecological validity.
Third, the model was tested within a specific interactional framing inspired by BCI architectures. Future research may explore boundary conditions by examining how technological literacy, trust in AI, cultural factors, or domain specificity (e.g., healthcare robotics, educational agents, immersive collaboration platforms) moderate the structural relationships identified here.
Fourth, while digital synergy was conceptualized as a coordination construct, its micro-dynamics remain theoretically open. Further work could decompose synergy into subdimensions—such as temporal synchronization, semantic alignment, and infrastructural interoperability—to deepen construct granularity and measurement refinement.

Author Contributions

Conceptualization, I.O. and I.P.; methodology, I.O.; software, I.O., I.P., D.J., G.S.B. and C.-M.P.; validation, I.O., I.P., D.J., G.S.B. and C.-M.P.; formal analysis, I.O., I.P., D.J., G.S.B. and C.-M.P.; investigation, I.O., I.P., D.J., G.S.B. and C.-M.P.; resources, I.O., I.P. and C.-M.P.; data curation, I.O.; writing—original draft preparation, I.O. and C.-M.P.; writing—review and editing, I.O.; visualization, I.O., I.P. and D.J.; supervision, I.O. and I.P.; project administration, I.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the internal research projects competition of Titu Maiorescu University (UTM) under the projects PCI-183–RobColab–Development of a collaborative robot system for medical applications and PCI-184–GENPROMED–Prompt Engineering applied in AI-assisted biochemical analysis.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HRI  Human–Robot Interaction
HCI  Human–Computer Interaction
BCI  Brain–Computer Interface
AI  Artificial Intelligence
PDAI  Perceived Digital Agent Innovation
UED  User Engagement
DAAC  Digital Agent Adaptive Capacity
DS  Digital Synergy
US  User Satisfaction
PLS-SEM  Partial Least Squares Structural Equation Modeling
TAM  Technology Acceptance Model
UTAUT  Unified Theory of Acceptance and Use of Technology
DOI  Diffusion of Innovation
CR  Composite Reliability
AVE  Average Variance Extracted
VIF  Variance Inflation Factor
HTMT  Heterotrait–Monotrait Ratio
CMB  Common Method Bias
CB-SEM  Covariance-Based Structural Equation Modeling
G*Power  Statistical Power Analysis Program G*Power
R2  Coefficient of Determination
Q2  Stone–Geisser Predictive Relevance
f2  Effect Size
SRMR  Standardized Root Mean Square Residual

References

  1. Adawy, M.; Abualese, H.; El-Omari, N.K.T.; Alawadhi, A. Human–Robot Interaction (HRI) using Machine Learning (ML): A Survey and Taxonomy. Int. J. Adv. Soft Comput. Its Appl. 2024, 16, 183–213. [Google Scholar] [CrossRef]
  2. Dafarra, S.; Pattacini, U.; Romualdi, G.; Rapetti, L.; Grieco, R.; Darvish, K.; Milani, G.; Valli, E.; Sorrentino, I.; Viceconte, P.M.; et al. icub3 Avatar System: Enabling Remote Fully Immersive Embodiment of Humanoid Robots. Sci. Robot. 2024, 9, eadh3834. [Google Scholar] [CrossRef]
  3. Di Tecco, A.; Leonardis, D.; Ragusa, E.; Frisoli, A.; Loconsole, C. Bio-Adaptive Robot Control: Integrating Biometric Feedback and Gesture-Based Interfaces for Intuitive Human–Robot Interaction (HRI). Robotics 2026, 15, 45. [Google Scholar] [CrossRef]
  4. Rodriguez-Guerra, D.; Sorrosal, G.; Cabanes, I.; Calleja, C. Human-robot interaction review: Challenges and solutions for modern industrial environments. IEEE Access 2021, 9, 108557–108578. [Google Scholar] [CrossRef]
  5. Jahanmahin, R.; Masoud, S.; Rickli, J.; Djuric, A. Human-robot interactions in manufacturing: A survey of human behavior modeling. Robot. Comput.-Integr. Manuf. 2022, 78, 102404. [Google Scholar] [CrossRef]
  6. Wang, Y.; Ge, M.; Xu, S. Advances in Brain–Computer Interfaces (BCI): Challenges and Opportunities. Biomimetics 2026, 11, 157. [Google Scholar] [CrossRef]
  7. Zhang, H.; Jiao, L.; Yang, S.; Li, H.; Jiang, X.; Feng, J.; Zou, S.; Xu, Q.; Gu, J.; Wang, X.; et al. Brain–Computer Interfaces: The Innovative Key to Unlocking Neurological Conditions. Int. J. Surg. 2024, 110, 5745–5762. [Google Scholar] [CrossRef]
  8. Goodrich, M.A.; Schultz, A.C. Human–robot interaction: A survey. Found. Trends Hum.-Comput. Interact. 2008, 1, 203–275. [Google Scholar] [CrossRef]
  9. Podder, K.K.; Dutta, P.; Zhang, J. Replay-Based Domain Incremental Learning for Cross-User Gesture Recognition in Robot Task Allocation. Electronics 2025, 14, 3946. [Google Scholar] [CrossRef]
  10. Taulli, T.; Deshmukh, G. CrewAI. In Building Generative AI Agents: Using LangGraph, AutoGen, and CrewAI; Apress: New York, NY, USA, 2025; pp. 99–114. [Google Scholar]
  11. Sun, J.; Kou, J.; Shi, W.; Hou, W. A multi-agent collaborative algorithm for task-oriented dialogue systems. Int. J. Mach. Learn. Cybern. 2025, 16, 2009–2022. [Google Scholar] [CrossRef]
  12. Roveda, L.; Maskani, J.; Franceschi, P.; Abdi, A.; Braghin, F.; Molinari Tosatti, L.; Pedrocchi, N. Model-based reinforcement learning variable impedance control for human-robot collaboration. J. Intell. Robot. Syst. 2020, 100, 417–433. [Google Scholar] [CrossRef]
  13. Lu, W.; Hu, Z.; Pan, J. Human-robot collaboration using variable admittance control and human intention prediction. In Proceedings of the 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE); IEEE: New York, NY, USA, 2020; pp. 1116–1121. [Google Scholar]
  14. Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 2019, 267, 1–38. [Google Scholar] [CrossRef]
  15. Sharkawy, A.N.; Koustoumpardis, P.N. Human-robot interaction: A review and analysis on variable admittance control, safety, and perspectives. Machines 2022, 10, 591. [Google Scholar] [CrossRef]
  16. Stone, P.; Veloso, M. Multiagent systems: A survey from a machine learning perspective. Auton. Robot. 2000, 8, 345–383. [Google Scholar] [CrossRef]
  17. Zahabi, M.; Abdul Razak, A.M. Adaptive virtual reality-based training: A systematic literature review and framework. Virtual Real. 2020, 24, 725–752. [Google Scholar] [CrossRef]
  18. Sharifi, M.; Zakerimanesh, A.; Mehr, J.K.; Torabi, A.; Mushahwar, V.K.; Tavakoli, M. Impedance variation and learning strategies in human-robot interaction. IEEE Trans. Cybern. 2021, 52, 6462–6475. [Google Scholar] [CrossRef]
  19. Hoffman, G.; Bhattacharjee, T.; Nikolaidis, S. Inferring human intent and predicting human action in human-robot collaboration. Annu. Rev. Control Robot. Auton. Syst. 2024, 7, 73–95. [Google Scholar] [CrossRef]
  20. Bouzón, I.; Pascual, J.; Costales, C.; Crespo, A.; Cima, C.; Melendi, D. Design, Implementation and Evaluation of an Immersive Teleoperation Interface for Human-Centered Autonomous Driving. Sensors 2025, 25, 4679. [Google Scholar] [CrossRef] [PubMed]
  21. Loizaga, E.; Eyam, A.T.; Bastida, L.; Lastra, J.I.M. Comprehensive Study of Human Factors, Sensory Principles, and Commercial Solutions for Future Human-Centered Working Operations in Industry 5.0. IEEE Access 2023, 11, 53806–53829. [Google Scholar] [CrossRef]
  22. Liu, Y.; Sun, Q.; Kapadia, D.R. Integrating Large Language Models into Robotic Autonomy: A Review of Motion, Voice, and Training Pipelines. AI 2025, 6, 158. [Google Scholar] [CrossRef]
  23. He, B.; Liu, C.; Qi, Z.; Xue, N.; Yao, L. NeuroGator: A Low-Power Gating System for Asynchronous BCI Based on LFP Brain State Estimation. Brain Sci. 2026, 16, 141. [Google Scholar] [CrossRef] [PubMed]
  24. Hassouna, A.B.; Chaari, H.; Belhaj, I. LLM-Agent-UMF: LLM-based Agent Unified Modeling Framework for Seamless Design of Multi Active/Passive Core-Agent Architectures. Inf. Fusion 2026, 127, 103865. [Google Scholar] [CrossRef]
  25. Brohi, S.; Mastoi, Q.; Jhanjhi, N.Z.; Pillai, T.R. A Research Landscape of Agentic AI and Large Language Models: Applications, Challenges and Future Directions. Algorithms 2025, 18, 499. [Google Scholar] [CrossRef]
  26. Baqapuri, H.I.; Roes, L.D.; Zvyagintsev, M.; Ramadan, S.; Kell, M.; Roecher, E.; Zweerings, J.; Klasen, M.; Gur, R.C.; Mathiak, K. A Novel Brain–Computer Interface Virtual Environment for Neurofeedback During Functional MRI. Front. Neurosci. 2021, 14, 593854. [Google Scholar] [CrossRef]
  27. Gao, X.; Wang, Y.; Chen, X.; Gao, S. Interface, Interaction, and Intelligence in Generalized Brain–Computer Interfaces. Trends Cogn. Sci. 2021, 25, 671–684. [Google Scholar] [CrossRef]
  28. Piszcz, A.; Rojek, I.; Mikołajewski, D. Impact of Virtual Reality on Brain–Computer Interface Performance in IoT Control—Review of Current State of Knowledge. Appl. Sci. 2024, 14, 10541. [Google Scholar] [CrossRef]
  29. Leeb, R.; Pérez-Marcos, D. Brain-computer interfaces and virtual reality for neurorehabilitation. Handb. Clin. Neurol. 2020, 168, 183–197. [Google Scholar] [CrossRef]
  30. Fouad, M.M.; Amin, K.M.; El-Bendary, N.; Hassanien, A.E. Brain computer interface: A review. In Brain-Computer Interfaces: Current Trends and Applications; Springer: Berlin/Heidelberg, Germany, 2015; Volume 74, pp. 3–30. [Google Scholar]
  31. Rogers, E.M. Diffusion of Innovations, 5th ed.; Free Press: New York, NY, USA, 2003. [Google Scholar]
  32. Blau, P.M. Exchange and Power in Social Life; Wiley: New York, NY, USA, 1964. [Google Scholar]
  33. Mishra, S.; Priyanka, B. A Survey on Brain-Computer Interaction. arXiv 2022, arXiv:2201.00997v3. [Google Scholar] [CrossRef]
  34. Elashmawi, W.H.; Ayman, A.; Antoun, M.; Mohamed, H.; Mohamed, S.E.; Amr, H.; Talaat, Y.; Ali, A. A Comprehensive Review on Brain–Computer Interface (BCI)-Based Machine and Deep Learning Algorithms for Stroke Rehabilitation. Appl. Sci. 2024, 14, 6347. [Google Scholar] [CrossRef]
  35. Chen, D.; Liu, K.; Guo, J.; Bi, L.; Xiang, J. Editorial: Brain-computer interface and its applications. Front. Neurorobot. 2023, 17, 1140508. [Google Scholar] [CrossRef] [PubMed]
  36. Davis, K.R. Brain-Computer Interfaces: The Technology of Our Future. UC Merced Undergrad. Res. J. 2022, 14, 1–28. [Google Scholar] [CrossRef]
  37. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  38. Malone, T.W. How human–computer “superminds” are redefining the future of work. MIT Sloan Manag. Rev. 2018, 59, 34–41. [Google Scholar]
  39. Kim, S.; Lee, S.; Kang, H.; Kim, S.; Ahn, M. P300 Brain–Computer Interface-Based Drone Control in Virtual and Augmented Reality. Sensors 2021, 21, 5765. [Google Scholar] [CrossRef]
  40. Beshdeleh, M.; Angel, A.; Bolour, L. Adoption of EBET Agency’s Cloud Casino Software by Using TOE and DOI Theory as a Solution for Gambling Website. J. Innov. Bus. Res. 2020, 116, 100–119. [Google Scholar]
  41. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  42. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in online shopping: An integrated model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  43. Montañez-Jacquez, S.; Clippinger, J.H.; Moroney, M. Agentic Finance: An Adaptive Inference Framework for Bounded-Rational Investing Agents. Entropy 2026, 28, 321. [Google Scholar] [CrossRef]
  44. Amin, M.; Rezaei, S.; Abolghasemi, M. User satisfaction with mobile websites: The impact of perceived usefulness (PU), perceived ease of use (PEOU) and trust. Nankai Bus. Rev. Int. 2014, 5, 258–274. [Google Scholar] [CrossRef]
  45. Huang, M.-H.; Rust, R.T. A strategic framework for artificial intelligence in marketing. J. Acad. Mark. Sci. 2021, 49, 30–50. [Google Scholar] [CrossRef]
  46. Peksa, J.; Mamchur, D. State-of-the-Art on Brain-Computer Interface Technology. Sensors 2023, 23, 6001. [Google Scholar] [CrossRef] [PubMed]
  47. Yuste, R.; Goering, S.; Agüera y Arcas, B.; Bi, G.; Carmena, J.M.; Carter, A.; Fins, J.J.; Fries, P.; Illes, J.; Kellmeyer, P.; et al. Ethical Issues in Brain–Computer Interface Research. Nature 2017, 551, 159–162. [Google Scholar] [CrossRef] [PubMed]
  48. Grumbach, S. Cognitive Assemblages: Living with Algorithms. Big Data Cogn. Comput. 2026, 10, 63. [Google Scholar] [CrossRef]
  49. Venkatesh, V.; Davis, F.D. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Manag. Sci. 2000, 46, 186–204. [Google Scholar] [CrossRef]
  50. Sharkawy, A.N.; Koustoumpardis, P.N.; Aspragathos, N. A recurrent neural network for variable admittance control in human-robot cooperation: Simultaneously and online adjustment of the virtual damping and Inertia parameters. Int. J. Intell. Robot. Appl. 2020, 4, 441–464. [Google Scholar] [CrossRef]
  51. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  52. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  53. Kline, R.B. Principles and Practice of Structural Equation Modeling, 4th ed.; Guilford Press: New York, NY, USA, 2015. [Google Scholar]
  54. Kock, N. Common method bias in PLS-SEM: A full collinearity assessment approach. Int. J. e-Collab. 2015, 11, 1–10. [Google Scholar] [CrossRef]
  55. Geisser, S. A predictive approach to the random effect model. Biometrika 1974, 61, 101–107. [Google Scholar] [CrossRef]
  56. Stone, M. Cross-validatory choice and assessment of statistical predictions. J. R. Stat. Soc. Ser. B 1974, 36, 111–147. [Google Scholar] [CrossRef]
Figure 1. Proposed research model.
Figure 2. Structural model with standardized path coefficients and indicator loadings.
Table 1. Demographic characteristics of the respondents.

| Demographic Variable | Frequency | Percentage (%) |
|---|---|---|
| **Gender** | | |
| Male | 156 | 58.20 |
| Female | 112 | 41.80 |
| Total | 268 | 100 |
| **Age Group** | | |
| 18–24 years | 48 | 17.90 |
| 25–34 years | 96 | 35.80 |
| 35–44 years | 79 | 29.50 |
| 45 years and above | 45 | 16.80 |
| Total | 268 | 100 |
| **Education Level** | | |
| High School | 18 | 6.70 |
| Bachelor’s Degree | 104 | 38.80 |
| Master’s Degree | 102 | 38.10 |
| Doctorate | 44 | 16.40 |
| Total | 268 | 100 |
| **Field of Expertise** | | |
| Human–Computer Interaction (HCI) | 40 | 14.90 |
| Artificial Intelligence (AI) | 54 | 20.10 |
| Neuroscience | 28 | 10.40 |
| Emerging Neurotechnologies/BCI-related Fields | 24 | 9.00 |
| Virtual & Augmented Reality | 32 | 11.90 |
| Information Technology & Software Development | 48 | 17.90 |
| Business, Digital Marketing & Management | 28 | 10.40 |
| Other related fields | 14 | 5.20 |
| Total | 268 | 100 |
Table 2. Construct reliability.

| Variable | Item | Outer Loading | VIF | Cronbach’s Alpha | CR | AVE |
|---|---|---|---|---|---|---|
| Perceived Digital Agent Innovation (PDAI) | PDAI1—The digital agent offers a different experience compared to other similar technologies. | 0.706 | 1.85 | 0.841 | 0.850 | 0.621 |
| | PDAI2—The way the digital agent reacts to signals associated with the user’s intent is perceived as natural and sophisticated. | 0.824 | 2.02 | | | |
| | PDAI3—The BCI-inspired interface improves the way I communicate with the digital agent. | 0.806 | 2.14 | | | |
| | PDAI4—The digital agent integrates advanced technologies in an original way. | 0.850 | 2.42 | | | |
| | PDAI5—The interaction with the digital agent demonstrates a high degree of technological innovation. | 0.726 | 2.75 | | | |
| User Engagement (UED) | UED1—Attention remains fully focused during interaction with the digital agent. | 0.807 | 1.92 | 0.873 | 0.880 | 0.654 |
| | UED2—The system sustains interest throughout the interaction experience. | 0.755 | 2.15 | | | |
| | UED3—Engagement with the agent stimulates further exploration of the digital environment. | 0.899 | 2.36 | | | |
| | UED4—Active involvement characterizes the interaction process with the digital agent. | 0.766 | 2.48 | | | |
| Perceived Digital Agent Adaptive Capacity (DAAC) | DAAC1—The digital agent adjusts its behavior in response to user reactions. | 0.835 | 1.65 | 0.821 | 0.835 | 0.592 |
| | DAAC2—The digital agent demonstrates learning across previous interaction episodes. | 0.805 | 1.89 | | | |
| | DAAC3—The precision of the agent’s responses improves with repeated use. | 0.703 | 2.05 | | | |
| | DAAC4—The agent’s response patterns vary according to contextual conditions of use. | 0.709 | 2.22 | | | |
| Digital Synergy (DS) | DS1—The digital agent is functionally integrated within the operational flow of the digital ecosystem. | 0.791 | 1.98 | 0.860 | 0.870 | 0.633 |
| | DS2—The agent’s responses are procedurally aligned with user actions. | 0.885 | 2.11 | | | |
| | DS3—The agent operates coherently within the broader digital environment. | 0.779 | 2.31 | | | |
| | DS4—The agent supports continuous coordination between user activity and digital infrastructure. | 0.713 | 2.47 | | | |
| User Satisfaction (US) | US1—The interaction with the digital agent is perceived as natural and enjoyable. | 0.706 | 2.62 | 0.902 | 0.910 | 0.679 |
| | US2—The agent’s responses correspond well to my expectations. | 0.940 | 2.41 | | | |
| | US3—The agent’s performance meets the practical requirements of my interaction tasks. | 0.701 | 2.53 | | | |
| | US4—The experience provided by the digital agent is evaluated as high quality. | 0.705 | 2.66 | | | |
| | US5—The use of a similar digital agent would be recommended to others. | 0.928 | 2.78 | | | |
| | US6—Use of the digital agent provides an intuitive and efficient experience. | 0.930 | 2.48 | | | |
| | US7—The overall interaction experience generates sustained satisfaction. | 0.810 | 2.79 | | | |
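The composite reliability (CR) and average variance extracted (AVE) values reported in Table 2 follow the conventional PLS-SEM formulas computed from standardized outer loadings [52]. The sketch below is not the authors' code but a minimal illustration of those formulas; when applied to the rounded published loadings, the results can differ slightly from the reported CR and AVE.

```python
def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    with error variance 1 - loading^2 for standardized indicators."""
    s = sum(loadings)
    e = sum(1 - l * l for l in loadings)
    return s * s / (s * s + e)

# Illustrative call using the PDAI outer loadings from Table 2.
# Differences from the reported CR/AVE reflect rounding of published loadings.
pdai = [0.706, 0.824, 0.806, 0.850, 0.726]
print(round(ave(pdai), 3), round(composite_reliability(pdai), 3))  # → 0.615 0.888
```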
Table 3. Heterotrait–Monotrait ratio (HTMT).

| | PDAI | UED | DAAC | DS | US |
|---|---|---|---|---|---|
| PDAI | 1.000 | | | | |
| UED | 0.740 | 1.000 | | | |
| DAAC | 0.690 | 0.330 | 1.000 | | |
| DS | 0.840 | 0.570 | 0.810 | 1.000 | |
| US | 0.790 | 0.650 | 0.760 | 0.860 | 1.000 |
Table 4. Fornell–Larcker criterion.

| | PDAI | UED | DAAC | DS | US |
|---|---|---|---|---|---|
| PDAI | 0.788 | | | | |
| UED | 0.630 | 0.809 | | | |
| DAAC | 0.470 | 0.260 | 0.770 | | |
| DS | 0.680 | 0.530 | 0.640 | 0.796 | |
| US | 0.720 | 0.690 | 0.580 | 0.740 | 0.824 |
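The Fornell–Larcker criterion holds when the square root of each construct's AVE (the diagonal in Table 4) exceeds that construct's correlations with all other constructs [52]. A minimal sketch (not from the paper; the helper names are ours) of how this check can be made programmatic, with the matrix transcribed from Table 4:

```python
constructs = ["PDAI", "UED", "DAAC", "DS", "US"]

# Lower-triangular entries from Table 4; diagonal entries are sqrt(AVE).
M = {
    ("PDAI", "PDAI"): 0.788,
    ("UED", "PDAI"): 0.630, ("UED", "UED"): 0.809,
    ("DAAC", "PDAI"): 0.470, ("DAAC", "UED"): 0.260, ("DAAC", "DAAC"): 0.770,
    ("DS", "PDAI"): 0.680, ("DS", "UED"): 0.530, ("DS", "DAAC"): 0.640,
    ("DS", "DS"): 0.796,
    ("US", "PDAI"): 0.720, ("US", "UED"): 0.690, ("US", "DAAC"): 0.580,
    ("US", "DS"): 0.740, ("US", "US"): 0.824,
}

def corr(a, b):
    """Symmetric lookup into the lower-triangular matrix."""
    return M.get((a, b)) or M.get((b, a))

def fornell_larcker_ok(c):
    """True if sqrt(AVE) of c exceeds its correlation with every other construct."""
    return all(corr(c, c) > corr(c, other) for other in constructs if other != c)

print({c: fornell_larcker_ok(c) for c in constructs})  # all True for Table 4
```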
Table 5. Structural path coefficients and hypothesis testing.

| Relationship | β | t-Value | p-Value | 95% CI (LL) | 95% CI (UL) | Result |
|---|---|---|---|---|---|---|
| PDAI → DS | 0.294 | 4.213 | <0.001 | 0.158 | 0.413 | Supported |
| UED → DS | 0.241 | 3.587 | <0.001 | 0.105 | 0.368 | Supported |
| DAAC → DS | 0.338 | 5.021 | <0.001 | 0.208 | 0.462 | Supported |
| PDAI → US | 0.301 | 4.115 | <0.001 | 0.156 | 0.435 | Supported |
| UED → US | 0.212 | 2.984 | 0.003 | 0.070 | 0.347 | Supported |
| DAAC → US | 0.257 | 3.441 | <0.001 | 0.118 | 0.392 | Supported |
| DS → US | 0.486 | 6.732 | <0.001 | 0.346 | 0.612 | Supported |
Table 6. Effect sizes (f²).

| Relationship | f² | Interpretation |
|---|---|---|
| PDAI → DS | 0.190 | moderate |
| UED → DS | 0.140 | small-to-moderate |
| DAAC → DS | 0.230 | moderate |
| PDAI → US | 0.170 | moderate |
| UED → US | 0.110 | small |
| DAAC → US | 0.130 | small-to-moderate |
| DS → US | 0.360 | large |
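The f² values above correspond to Cohen's effect-size formula used in PLS-SEM reporting [51]: f² = (R²_included − R²_excluded) / (1 − R²_included). As a hedged illustration (not the authors' computation), the R²_excluded below is back-calculated from the reported R² = 0.668 for user satisfaction and the f² = 0.36 of the DS → US path; it is a derived quantity, not a value reported in the paper.

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f^2: change in explained variance when a predictor is removed,
    scaled by the unexplained variance of the full model."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# Back-calculation for the DS -> US path: with R^2 = 0.668 (full model) and
# f^2 = 0.36 (Table 6), the implied R^2 without DS is about 0.548.
r2_incl = 0.668
r2_excl = r2_incl - 0.36 * (1 - r2_incl)
print(round(f_squared(r2_incl, r2_excl), 2), round(r2_excl, 3))  # → 0.36 0.548
```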
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Oncioiu, I.; Priescu, I.; Joița, D.; Banu, G.S.; Priescu, C.-M. BCI-Inspired Adaptive Agents in Human–Robot Interaction: A Structural Framework for Coordinated Interaction Design. Electronics 2026, 15, 1206. https://doi.org/10.3390/electronics15061206
