An Interaction–Engagement–Intention Model: How Artificial Intelligence and Augmented Reality Transform the User–Platform Interaction Paradigm
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This manuscript offers a timely and relevant investigation into how AI and AR technologies influence user behaviour on mobile platforms, particularly within the e-commerce context. The development of the Interaction-Engagement-Intention (I-E-I) model, underpinned by the S-O-R framework, provides a solid theoretical and empirical contribution. The study is well-articulated and methodologically robust, with clear research questions and a comprehensive model structure. I appreciate the depth of the analysis and the effort to integrate both cognitive and emotional dimensions of user experience.
That said, a few areas would benefit from refinement before the manuscript is ready for publication. The following minor revisions are suggested to improve clarity, presentation, and overall impact.
- Ensure that all references and in-text citations are formatted according to the journal’s guidelines.
- While the I-E-I model is thoughtfully constructed, the manuscript would benefit from a clearer articulation of how it extends or differs from prior models (e.g., TAM, UTAUT, or CAC frameworks). The current discussion references these bodies of work, but the novelty could be stated more explicitly; a short paragraph comparing the I-E-I model against well-established models, perhaps in the introduction or model development section, would help position the contribution more strongly.
- The IKEA app is used as the primary platform for data collection, which is acceptable. However, the manuscript could better manage reader expectations regarding generalizability by briefly noting this limitation earlier, either in the abstract or in the introduction.
- The data collection process through Qualtrics and social media platforms in Australia is well-described. The sample size (N=880) is more than adequate for PLS-SEM and reflects good response engagement (78%). The choice of IKEA as a test platform is logical, but the rationale should be explained more explicitly. Why was IKEA selected? Is it due to its wide user base, existing AR functionality, or user familiarity?
- The use of PLS-SEM is methodologically appropriate, particularly given the model’s complexity and exploratory nature. The authors also correctly follow the two-step approach of evaluating the measurement model and then the structural model. However, the explanation of why PLS-SEM was chosen could be expanded. Currently, it’s implied that model complexity justifies it, but this reasoning could be more explicit.
- Figure 1 is helpful in visualizing the model, but it could be made more intuitive. At present, the roles of the constructs (stimuli, organism, and response) are not visually separated. If possible, differentiate the constructs using coloured blocks or labelled groupings (e.g., “Stimuli,” “Organism,” “Response”) to align more clearly with the S-O-R framework.
- The manuscript offers a solid foundation in terms of measurement item selection and construct design. The adaptation of validated items from prior literature (e.g., Sun et al., 2022; Yin & Qiu, 2021) is appreciated and strengthens the study’s construct validity. However, a few clarifications could improve transparency. Although Appendix A is comprehensive, it would help if the main text more clearly mapped each construct to its source study. For example, the descriptions of “Attitude” and “Trust” cite multiple references but lack a clear rationale for how the exact items were chosen.
- The statistical results are reported comprehensively. However, the interpretation could be strengthened, especially for key paths like Trust → Continuance Intention and Subjective Norm → Trust, which show relatively strong effects. Adding a brief narrative in the discussion section that interprets the comparative strength of these paths would add depth and practical relevance.
- R² values for the dependent variables fall in the 0.16–0.24 range, which is moderate. This is acceptable in social science contexts, but the authors should briefly acknowledge this and justify its adequacy, possibly with references that support R² thresholds in behavioural research.
- H11 (Trust → Continuance Intention) shows the strongest influence (t = 8.19), but the narrative treats it similarly to weaker paths. In contrast, H2 (Product Fit → Spatial Presence) has the smallest coefficient, and while statistically significant, its practical weight may be modest. Consider ranking or thematically grouping the strongest vs. weaker relationships to highlight what matters most in practice. You might add a sentence or two on what this implies for UX designers or marketers.
- The writing is generally clear, though some sections (particularly in the introduction and discussion) would benefit from minor rephrasing to improve flow and reduce repetition. For example: Instead of “Users engage with mobile platforms through multi-modal interactions, where interactivity can provide simulated information...” you may consider: “Users engage with mobile platforms through multi-modal interactions that simulate tailored information to meet their needs...”
- While the discussion section is comprehensive, it occasionally repeats earlier content from the results or theoretical background sections. Consider tightening the first few paragraphs in Section 6 to focus more on “what these findings mean” rather than “what was tested.”
Author Response
Response to Reviewer 1 Comments
1. Summary
Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted files.
2. Questions for General Evaluation
- Does the introduction provide sufficient background and include all relevant references? Can be improved. We have revisited the introduction section with sufficient background, page 2 (as highlighted in the manuscript).
- Are all the cited references relevant to the research? Yes.
- Is the research design appropriate? Can be improved. We have revisited the research design in detail in the methodology, pages 11-12 (as highlighted in the manuscript).
- Are the methods adequately described? Can be improved. We have reviewed the research methods in detail in the methodology, page 11 (as highlighted in the manuscript).
- Are the results clearly presented? Can be improved. We have revisited the results section with clear indications, pages 14-16 (as highlighted in the manuscript).
- Are the conclusions supported by the results? Yes.
- Are all figures and tables clear and well presented? Can be improved. We have improved the quality of the figures and tables for better visualization, page 11 (as highlighted in the manuscript).
3. Point-by-point response to Comments and Suggestions for Authors
Comments 1: Ensure that all references and in-text citations are formatted according to the journal’s guidelines.
Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have revisited all the references and in-text citations according to the journal’s guidelines (throughout the paper).
Comments 2: While the I-E-I model is thoughtfully constructed, the manuscript would benefit from a clearer articulation of how it extends or differs from prior models (e.g., TAM, UTAUT, or CAC frameworks). The current discussion references these bodies of work, but the novelty could be stated more explicitly; a short paragraph comparing the I-E-I model against well-established models, perhaps in the introduction or model development section, would help position the contribution more strongly.
Response 2: Agree. We have, accordingly, described how the I-E-I model differs from existing models to emphasize this point. We have added a paragraph on page 9 of the manuscript as below:
In the existing literature, authors have explained different models; for example, expectation confirmation theory (ECT) defines user confirmation through the fulfilment of expectations, which influences behavioural intentions [42]. Further, Goel et al. [27] described the consequences of visual, arousal, and haptic stimulus cues on emotional states, following the stimulus-organism-response (SOR) theory, and indicated positive outcomes toward impulsive buying intentions. In contrast, this study explains the development of users’ cognitive and emotional engagement through interactive, innovative, and immersive stimulus cues such as interactivity, product fit, AI-driven recommendations, and online reviews, which positively engage users’ attitude and trust development toward continuance intentions [45]. This study uniquely develops an interaction-engagement-intention model, following the SOR framework, to explain user-platform interactions and their influence on perceived user experience through cognitive and emotional involvement, particularly in the context of an AR environment that accommodates AI.
Comments 3: The IKEA app is used as the primary platform for data collection, which is acceptable. However, the manuscript could better manage reader expectations regarding generalizability by briefly noting this limitation earlier, either in the abstract or in the introduction.
Response 3: Agree. We have noted this in the introduction section to emphasize this point. We have added a paragraph on page 3 of the manuscript as below:
This study executed a descriptive quantitative method using partial least squares structural equation modelling (PLS-SEM). An online quantitative survey was designed to collect data from online users through a convenience snowball sampling technique. The generalizability of the study is limited by its use of IKEA as the AR mobile platform within Australia to investigate the influence of AR on users’ perceived cognitive and emotional engagement.
Comments 4: The data collection process through Qualtrics and social media platforms in Australia is well-described. The sample size (N=880) is more than adequate for PLS-SEM and reflects good response engagement (78%). The choice of IKEA as a test platform is logical, but the rationale should be explained more explicitly. Why was IKEA selected? Is it due to its wide user base, existing AR functionality, or user familiarity?
Response 4: Agree. We have explained in detail the reasons for choosing IKEA as an extreme research instrument in the methods and materials section to emphasize this point. We have added the following on page 12 of the manuscript:
In the study, IKEA was chosen as an augmented reality (AR) platform that offers an AR mobile app (the IKEA app), accommodating AI, which allows online users to place furniture virtually in their own location. IKEA, one of the pioneers in the retail industry, has incorporated AR as an immersive technology to support users’ preferences through virtual interactions in real-world scenarios. The IKEA mobile app is designed around a true-to-scale model that permits users to visualize a product’s texture, light, and contrast. Further, the IKEA AR platform supports users in obtaining a one-stop service by placing virtual products in a preferred space. The IKEA app was selected as an extreme research instrument to investigate the study in an AR environment. The major motives for using IKEA are that (a) IKEA allows users to interact with both a mobile app and a responsive website with AR functionalities, (b) IKEA introduced product fit and scaling features using AR into its mobile platform, (c) IKEA supports AR features in Australia, and (d) IKEA has acquired a large number of retail users across versatile age groups in Australia.
Comments 5: The use of PLS-SEM is methodologically appropriate, particularly given the model’s complexity and exploratory nature. The authors also correctly follow the two-step approach of evaluating the measurement model and then the structural model. However, the explanation of why PLS-SEM was chosen could be expanded. Currently, it’s implied that model complexity justifies it, but this reasoning could be more explicit.
Response 5: Agree. We have included an explanation in the data analysis section to emphasize this point. We have added a paragraph on page 14 of the manuscript as below:
PLS-SEM was applied as a second-generation, variance-based data analytical technique to analyse the survey-based quantitative data [9]. This study used PLS-SEM because of the exploratory nature of the research, its predictive focus, functional robustness, tolerance of non-normal data, ability to accommodate both formative and reflective constructs, suitability for complex model validation, and support for a larger number of hypotheses. In the PLS-SEM, a two-stage approach was applied: the measurement model was assessed as the outer model and the structural model was evaluated as the inner model to validate the proposed I-E-I model.
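For readers less familiar with how significance is obtained in this setting, the sketch below illustrates the bootstrap-resampling logic behind a single structural path's t-value, which is the same resampling principle PLS-SEM software applies; it runs on synthetic construct scores and is not the authors' SmartPLS workflow.

```python
# Minimal sketch (synthetic data, not the authors' SmartPLS workflow):
# bootstrapping one structural path (e.g., trust -> continuance intention)
# to obtain a t-value for significance testing.
import numpy as np

rng = np.random.default_rng(42)
n = 880                                   # sample size reported in the study
trust = rng.normal(size=n)                # illustrative construct scores
intention = 0.4 * trust + rng.normal(scale=0.9, size=n)

def path_coefficient(x, y):
    """Standardized simple-regression coefficient (correlation for one predictor)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.dot(x, y) / len(x))

original = path_coefficient(trust, intention)

boot = []
for _ in range(5000):                     # 5,000 resamples is a common default
    idx = rng.integers(0, n, size=n)      # resample respondents with replacement
    boot.append(path_coefficient(trust[idx], intention[idx]))

se = np.std(boot, ddof=1)
t_value = original / se                   # compare against 1.96 at the 5% level
print(f"beta = {original:.3f}, SE = {se:.3f}, t = {t_value:.2f}")
```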
Comments 6: Figure 1 is helpful in visualizing the model, but it could be made more intuitive. At present, the roles of the constructs (stimuli, organism, and response) are not visually separated. If possible, differentiate the constructs using coloured blocks or labelled groupings (e.g., “Stimuli,” “Organism,” “Response”) to align more clearly with the S-O-R framework.
Response 6: Agree. We have revisited and redesigned the figure to differentiate the constructs using different colours in the hypothesis development and research model section to emphasize this point. We have redesigned Figure 1 on page 11 of the manuscript.
Comments 7: The manuscript offers a solid foundation in terms of measurement item selection and construct design. The adaptation of validated items from prior literature (e.g., Sun et al., 2022; Yin & Qiu, 2021) is appreciated and strengthens the study’s construct validity. However, a few clarifications could improve transparency. Although Appendix A is comprehensive, it would help if the main text more clearly mapped each construct to its source study. For example, the descriptions of “Attitude” and “Trust” cite multiple references but lack a clear rationale for how the exact items were chosen.
Response 7: Agree. We have described this point in the methods and materials section. We have added a paragraph on page 12 of the manuscript as below:
A few items were adjusted in wording to better fit the AR-based research context. For instance, the items “I could no longer doubt that the product would fit my desired spaces”, “With the support of AI marketing technology, the AR mobile platform can arouse my shopping desire”, “I felt like the product meshed with the AR mobile platform”, “AI marketing recommendations arouse my platform usage”, and “I would prioritize the AR mobile platform over other alternative means” were reworded to reflect user-platform interactions in an AR mobile platform environment.
Comments 8: The statistical results are reported comprehensively. However, the interpretation could be strengthened, especially for key paths like Trust → Continuance Intention and Subjective Norm → Trust, which show relatively strong effects. Adding a brief narrative in the discussion section that interprets the comparative strength of these paths would add depth and practical relevance.
Response 8: Agree. We have described this point in the data analysis and results section to emphasize it. We have added a paragraph on page 17 of the manuscript as below:
The results showed that the effects of interactivity on spatial presence (t value = 3.667) and of AI-driven recommendation on subjective norm (t value = 4.469) are strongly significant. That is, interactivity as a stimulus source of information has a significant impact on users’ perceived spatial presence, while AI-driven recommendation significantly influences users’ perceived subjective norm. At the same time, the relationships of subjective norm with attitude (t = 5.893) and trust (t = 6.320) are even stronger. The results reveal that the role of perceived subjective norm in developing users’ attitude and trust is notable, particularly in the context of an AR environment. Also, the relationship of trust with continuance intention (t value = 8.186) is stronger than that of attitude with continuance intention (t = 3.245) (as shown in Table 5). Therefore, these critical research outcomes present the value proposition of developed trust for users’ continuance intention to use AR mobile platforms.
Comments 9: R² values for the dependent variables fall in the 0.16–0.24 range, which is moderate. This is acceptable in social science contexts, but the authors should briefly acknowledge this and justify its adequacy, possibly with references that support R² thresholds in behavioural research.
Response 9: Agree. We have explained this point in the data analysis and results section to emphasize it. We have added the following on page 15 of the manuscript:
Table 4 shows the R² values and collinearity statistics for all the constructs and associated relationships. The explanatory power of the proposed model was examined using the R² values, as suggested by Hsu et al. [5]. The R² values were examined for all the constructs of user-platform interaction in an AR environment, namely interactivity, product fit, AI-driven recommendation, and online review, on users’ cognitive and emotional engagement that subsequently impacts continuance intention; the values fall within the range of 0.11–0.25, which is within the recommended range. In particular, the R² values for all the constructs were greater than 0.19, the threshold suggested for augmented reality research by Hsu et al. [5].
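As a companion to this explanation, the sketch below shows how the R² of one endogenous construct can be computed and compared against the commonly cited 0.19/0.33/0.67 benchmarks (often attributed to Chin, 1998); the data are synthetic and the code is illustrative only, not the study's analysis.

```python
# Minimal sketch (synthetic data): R-squared of an endogenous construct from its
# predictors, flagged against benchmarks often attributed to Chin (1998).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 880
X = rng.normal(size=(n, 2))               # e.g., attitude and trust scores (illustrative)
y = 0.25 * X[:, 0] + 0.35 * X[:, 1] + rng.normal(scale=1.0, size=n)  # continuance intention

model = LinearRegression().fit(X, y)
r2 = r2_score(y, model.predict(X))

if r2 >= 0.67:
    label = "substantial"
elif r2 >= 0.33:
    label = "moderate"
elif r2 >= 0.19:
    label = "weak but acceptable"
else:
    label = "below the 0.19 benchmark"
print(f"R^2 = {r2:.2f} ({label})")
```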
Comments 10: H11 (Trust → Continuance Intention) shows the strongest influence (t = 8.19), but the narrative treats it similarly to weaker paths. In contrast, H2 (Product Fit → Spatial Presence) has the smallest coefficient, and while statistically significant, its practical weight may be modest. Consider ranking or thematically grouping the strongest vs. weaker relationships to highlight what matters most in practice. You might add a sentence or two on what this implies for UX designers or marketers.
Response 10: Agree. We have revisited this point in the data analysis and results section to emphasize it. We have added the following on pages 16 and 19 of the manuscript:
We have included two additional columns in Table 5 to explain the strength of, and concluding comments on, the hypothesized relationships. We revised the explanation of the results, considering the hypothesis testing results (page 16): The results showed that the effects of interactivity on spatial presence (t value = 3.667) and of AI-driven recommendation on subjective norm (t value = 4.469) are strongly significant. That is, interactivity as a stimulus source of information has a significant impact on users’ perceived spatial presence, while AI-driven recommendation significantly influences users’ perceived subjective norm. At the same time, the relationships of subjective norm with attitude (t = 5.893) and trust (t = 6.320) are strong and highly significant. The results reveal that the role of perceived subjective norm in developing users’ attitude and trust is notable and highly significant, particularly in the context of an AR environment. Also, the relationship of trust with continuance intention (t value = 8.186) is stronger than that of attitude with continuance intention (t = 3.245) (as shown in Table 5). Therefore, these critical research outcomes indicate strong and highly significant relationships and suggest value propositions in developing attitude and trust toward users’ continuance intention to use AR mobile platforms.
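To make the suggested ranking concrete, the short sketch below orders the paths by the t-values quoted in this response and groups them with an illustrative cut-off; the grouping threshold is an assumption for demonstration, not part of the manuscript.

```python
# Ranking the reported structural paths by bootstrap t-value; the "strength"
# grouping cut-off of t = 5 is illustrative only.
import pandas as pd

paths = pd.DataFrame(
    {
        "path": [
            "Trust -> Continuance Intention",
            "Subjective Norm -> Trust",
            "Subjective Norm -> Attitude",
            "AI-driven Recommendation -> Subjective Norm",
            "Interactivity -> Spatial Presence",
            "Attitude -> Continuance Intention",
        ],
        "t_value": [8.186, 6.320, 5.893, 4.469, 3.667, 3.245],
    }
)

paths["strength"] = pd.cut(
    paths["t_value"], bins=[1.96, 5.0, float("inf")], labels=["significant", "strong"]
)
print(paths.sort_values("t_value", ascending=False).to_string(index=False))
```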
We revised the explanation of the practical implications, considering the results (page 19): From a practical perspective, this study offers valuable insights for developers and designers in designing, developing, and optimizing AR mobile platforms to elevate UX and drive continued engagement. The research outcomes clearly show that UX designers and marketing professionals can emphasize users’ perceived subjective norms, which establish a strong relationship with the development of users’ attitude and trust. Further, there is a strong relationship between attitude and trust and users’ continuance intention to use AR mobile platforms that accommodate AI-driven user-platform interactions.
Comments 11: The writing is generally clear, though some sections (particularly in the introduction and discussion) would benefit from minor rephrasing to improve flow and reduce repetition. For example, instead of “Users engage with mobile platforms through multi-modal interactions, where interactivity can provide simulated information...” you may consider: “Users engage with mobile platforms through multi-modal interactions that simulate tailored information to meet their needs...”
Response 11: We revisited several sentences to improve the flow, especially in the introduction and discussion sections (as highlighted on pages 2-3 and 18-20 of the manuscript).
Comments 12: While the discussion section is comprehensive, it occasionally repeats earlier content from the results or theoretical background sections. Consider tightening the first few paragraphs in Section 6 to focus more on “what these findings mean” rather than “what was tested.”
Response 12: We revisited several paragraphs to tighten the discussion section (as highlighted on pages 18-20 of the manuscript).
4. Response to Comments on the Quality of English Language
Point 1:
Response 1: The English is fine and does not require any improvement.
5. Additional clarifications
N/A
Reviewer 2 Report
Comments and Suggestions for Authors
How does this study improve or differ from other models that use AI and AR in user interaction studies?
How are terms such as spatial presence, subjective norm, trust, and attitude defined in this context (in addition to the books cited)?
What sets the I-E-I model apart from earlier S-O-R-based frameworks, both theoretically and practically?
Why were interactivity, product fit, AI-driven recommendations, and online reviews selected as the sole stimulus cues?
Was there any pre-testing (for example, a pilot study or cognitive interviews) to validate the 18-item scale before deployment?
How was informed consent handled in the context of social media recruitment, and how were data protection rules (such as GDPR or Australian legislation) followed?
Author Response
Response to Reviewer 2 Comments
1. Summary
Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted files.
2. Questions for General Evaluation
- Does the introduction provide sufficient background and include all relevant references? Must be improved. We have revisited the introduction section with sufficient background, pages 2-3 (as highlighted in the manuscript).
- Are all the cited references relevant to the research? Yes.
- Is the research design appropriate? Must be improved. We have revisited the method section with a design approach, pages 11-12 (as highlighted in the manuscript).
- Are the methods adequately described? Must be improved. We have revisited the method section and included the detailed method, data collection method, and data analytical technique, pages 12-14 (as highlighted in the manuscript).
- Are the results clearly presented? Must be improved. We have revisited the results section with the findings, page 17 (as highlighted in the manuscript).
- Are the conclusions supported by the results? Must be improved. We have revisited the conclusion section with the support of the results, page 20 (as highlighted in the manuscript).
- Are all figures and tables clear and well presented? Must be improved. We have redesigned Figure 1, page 11 (as highlighted in the manuscript).
3. Point-by-point response to Comments and Suggestions for Authors
Comments 1: How does this study improve or differ from other models that use AI and AR in user interaction studies?
Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have included a paragraph in the related research section to explain how this study extends the inclusion of AI and AR and how it differs from or improves on other models/theories (see the highlighted text on page 9 of the manuscript):
In the existing literature, authors have explained different models; for example, expectation confirmation theory (ECT) defines user confirmation through the fulfilment of expectations, which influences behavioural intentions [42]. Further, Goel et al. [27] described the consequences of visual, arousal, and haptic stimulus cues on emotional states, following the stimulus-organism-response (SOR) theory, and indicated positive outcomes toward impulsive buying intentions. In contrast, this study explains the development of users’ cognitive and emotional engagement through interactive, innovative, and immersive stimulus cues such as interactivity, product fit, AI-driven recommendations, and online reviews, which positively engage users’ attitude and trust development toward continuance intentions [45]. This study uniquely develops an interaction-engagement-intention model, following the SOR framework, to explain user-platform interactions and their influence on perceived user experience through cognitive and emotional involvement, particularly in the context of an AR environment that accommodates AI.
Comments 2: How are terms such as spatial presence, subjective norm, trust, and attitude defined in this context (in addition to the books cited)?
Response 2: Agree. We have, accordingly, defined the key concepts of spatial presence, subjective norm, trust, and attitude in the context of AR to emphasize this point. We have added the definitions on pages 7-8 of the manuscript.
Comments 3: What sets the I-E-I model apart from earlier S-O-R-based frameworks, both theoretically and practically?
Response 3: Agree. We have explained the novelty of the I-E-I model relative to existing models/theories, including SOR, to emphasize this point. We have added the following on pages 9 and 11 of the manuscript:
In the existing literature, authors have explained different models; for example, expectation confirmation theory (ECT) defines user confirmation through the fulfilment of expectations, which influences behavioural intentions [42]. Further, Goel et al. [27] described the consequences of visual, arousal, and haptic stimulus cues on emotional states, following the stimulus-organism-response (SOR) theory, and indicated positive outcomes toward impulsive buying intentions. In contrast, this study explains the development of users’ cognitive and emotional engagement through interactive, innovative, and immersive stimulus cues such as interactivity, product fit, AI-driven recommendations, and online reviews, which positively engage users’ attitude and trust development toward continuance intentions [45]. This study uniquely develops an interaction-engagement-intention model, following the SOR framework, to explain user-platform interactions and their influence on perceived user experience through cognitive and emotional involvement, particularly in the context of an AR environment that accommodates AI.
The proposed model is assumed to be valid for AR mobile platforms and is based on the S-O-R framework. However, the proposed I-E-I model uniquely includes immersive, interactive, and innovative external sources of information in the context of the user-platform interaction paradigm and explains how perceived UX through cognitive and emotional engagement influences users’ continuance intention to use AR mobile platforms.
Comments 4: Why were interactivity, product fit, AI-driven recommendations, and online reviews selected as the sole stimulus cues?
Response 4: Agree. We have explained the reasons for investigating interactivity, product fit, AI-driven recommendations, and online reviews as the potential sources of information to emphasize this point. We have added a paragraph on page 6 of the manuscript as below:
Retailers are adopting AR technology in mobile platforms to provide immersive experiences for end-users. This study considers the effects of AI interactions in AR mobile platforms within the user-platform interaction paradigm. The inclusion of AI functionalities in AR dramatically changes the sources of information [14, 24, 29], and this study distinctively embraces those stimulus cues in assessing user experience, explaining virtual engagement through multi-modal interaction capabilities. Further, examining co-created values in AI and AR is vital for platform usage and the decision-making process, where interaction capabilities, virtual product placement, AI-driven recommendations, and online reviews can play a critical role in developing user experience [5, 12, 21, 36].
Comments 5: Was there any pre-testing (for example, a pilot study or cognitive interviews) to validate the 18-item scale before deployment?
Response 5: We internally validated the measurement items from the existing literature and modified them according to the research context. Although the items and scales are well established in the study area, we contextualized the items with the support of existing literature. We have explained the process of finalizing the measurement items to emphasize this point. We have added a paragraph on page 12 of the manuscript as below:
A few items were adjusted in wording to better fit the AR-based research context. For instance, the items “I could no longer doubt that the product would fit my desired spaces”, “With the support of AI marketing technology, the AR mobile platform can arouse my shopping desire”, “I felt like the product meshed with the AR mobile platform”, “AI marketing recommendations arouse my platform usage”, and “I would prioritize the AR mobile platform over other alternative means” were reworded to reflect user-platform interactions in an AR mobile platform environment.
Comments 6: How was informed consent handled in the context of social media recruitment, and how were data protection rules (such as GDPR or Australian legislation) followed?
Response 6: Agree. We have explained the informed consent and ethics approval process, following the Australian national statement on ethical conduct and the UTS ethical standards, to emphasize this point. We have added paragraphs on pages 11 and 13 of the manuscript as below:
A descriptive quantitative method was applied through an online questionnaire survey in this study, followed by partial least squares structural equation modelling (PLS-SEM) as the data analytical technique. Before the data analysis process, data were collected from respondents using different online platforms such as LinkedIn, Facebook, Twitter, IKEA web blogs, and Nextdoor. A few pre-requisite questions were used to obtain informed consent from respondents and verify their eligibility to participate in the survey. The quantitative survey was administered to online users who had interacted with AR mobile platforms, and a convenience snowball method was applied as the sampling technique.
We have maintained all the principles, relevant procedures, ethical regulations, and UTS guidelines in conducting the research, following the Australian national statement on ethical conduct and the UTS policy on ethical standards. Ethics approval was received from the University of Technology Sydney (UTS HREC ref no. ETH-22-7706) on 9 August 2023.
4. Response to Comments on the Quality of English Language
Point 1:
Response 1: The English is fine and does not require any improvement.
5. Additional clarifications
N/A
Reviewer 3 Report
Comments and Suggestions for Authors
This paper builds upon existing research on Interaction-Engagement-Intention and stimulus-organism-response theory to understand how AI and AR can be used to assess and enhance user experience in mobile applications.
Strengths:
- Good background research
- High sample size (880)
- Clear presentation of information
Recommendations for improvement:
- Authors should review their writing using spell-checking software; there were many spelling errors. Some examples:
- Line 45: a full stop was inserted after “Firstly” instead of a comma
- Line 77: “cam” instead of “can”
- Review Figure 1: the text is too light, “InInteractivity” (?) appears, and some text is unclear and half-cut.
- In line 438, the authors mention having started with 1132 and going down to 880 after “data cleaning with SPSS”. Please detail the “cleaning process”.
- Supplementary Table in Appendix 1 is too crammed
- It might be worth extending the discussion with some information on the impact of the results.
Overall, the paper is scientifically sound and well-presented. I would recommend the acceptance of the paper with minor revisions.
Author Response
Response to Reviewer 3 Comments
1. Summary
Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted files.
2. Questions for General Evaluation
- Does the introduction provide sufficient background and include all relevant references? Yes.
- Are all the cited references relevant to the research? Yes.
- Is the research design appropriate? Can be improved. We have revisited the method section with a design approach, pages 11-12 (as highlighted in the manuscript).
- Are the methods adequately described? Yes.
- Are the results clearly presented? Yes.
- Are the conclusions supported by the results? Yes.
- Are all figures and tables clear and well presented? Must be improved. We have redesigned Figure 1, page 11 (as highlighted in the manuscript).
3. Point-by-point response to Comments and Suggestions for Authors
Comments 1: Authors should review their writing using spell-checking software; there were many spelling errors. Some examples: (a) Line 45: a full stop was inserted after “Firstly” instead of a comma; (b) Line 77: “cam” instead of “can”.
Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have revised the manuscript accordingly (page 2).
Comments 2: Review Figure 1: the text is too light, “InInteractivity” (?) appears, and some text is unclear and half-cut.
Response 2: Agree. We have, accordingly, modified Figure 1 to improve its visual clarity and emphasize this point (see page 11 of the manuscript).
Comments 3: Table 1 outcomes are not very clear, might be worth adding a few paragraphs to explain these experiments in detail
Response 3: Agree. We have included a paragraph to explain the outcomes of Table 1 and emphasize this point. We have added the following on pages 4-5 of the manuscript:
This study methodically reviews empirical articles published in the last four years (2021-2024) in information systems, retailing, technology, consumer, and decision-making journals, scrutinizing the latest trends to identify knowledge gaps in the research phenomenon. The review process is summarized in Table 1, which presents a synthesis of findings related to UX following the SOR framework in an AR environment. In particular, the discourse covers AI and AR effects in developing user experience through cognitive and emotional engagement to alleviate uncertainty and the need to touch the product [5, 27, 43], spatial presence through platform interactions [17], effects of AR interactions on behavioural responses [2, 4, 28], and subjective norms through peer influence [25]. Hsu et al. explain product presence in the virtual space and its impact on impulse buying intentions [5], while David et al. explore the effects of aesthetics and position relevance on users’ recommendation intentions [43]. Several studies emphasize investigating users’ purchase intention through users’ preferences and subjective norms [25, 28]. However, the particular domain of continuance intention through cognitive and emotional engagement in AR environments is still underexplored, with Nikhashemi et al. [2] and Qin et al. [4] as notable extensions. While the existing frameworks are promising, the user experience mechanism through cognitive and emotional engagement, where AI and AR involve immersive and innovative sources of information, and its effects on users’ continuance intention remain relatively less explored. Addressing these knowledge gaps, this study extends the impacts of AI- and AR-oriented sources of information on continuance intention in the context of an AR environment. In particular, the emerging field of assessing user experience through cognitive and emotional engagement, namely spatial presence and subjective norm, within the virtual realm of continuance intention constitutes the novelty of this research.
Comments 4: Lines 367-368 give a definition of spatial presence that is redundant to a previous chapter (line 72)
Response 4: Agree. We have revisited the explanation of spatial presence in the introduction section to emphasize this point. We have revised the text on page 2 of the manuscript as below:
AR allows mobile platforms to accommodate a virtual product positioning feature [17], and this study includes the effects of embedded virtual products through spatial presence as a user’s cognitive state in an AR environment [10, 18].
Comments 5: Authors should put a more detailed description of AI in the IKEA application in the Methods section
Response 5: Agree. We have included a paragraph about the AI features of IKEA in the methods and materials section to emphasize this point. We have added the following on page 12 of the manuscript:
This study draws on the advanced features of IKEA to assess users’ cognitive and emotional engagement through sources of information. We designed the questionnaire to address AR- and AI-based spatial object orientation, real-time product recognition, and user-platform interaction. IKEA applies AI models using pre-trained data to incorporate try-on measurement and allows users to scale virtual products in their required spaces [11]. Further, the IKEA AR view is designed to process sensory data and supports precise virtual product positioning through spatial reasoning and real-time rendering [7]. Moreover, the AR mobile platform utilizes an AI recommendation engine to assess users’ preferences, deliver personalized recommendations, and create user profiling mechanisms by extending predictive analytics [9, 33]. The online questionnaire, particularly the items on sources of information, reflects the advanced features of IKEA to capture users’ perceived experience through interactivity, product fit, AI-driven recommendation, and online review.
Comments 6: From my understanding, the hypotheses and the S-O-R model seem to make some assumptions about which construct causes which emotion. I can see this in terms of the interactivity, reviews, and AI recommendations, as these are part of the platform. But what is the proof that, let’s say, spatial presence influences trust and not vice versa?
Response 6: We internally validated the hypotheses through the existing literature and then externally validated the hypothesized model through quantitative data analysis. All the hypotheses were supported in the hypothesis testing, which clearly indicates significant relationships among the I-E-I constructs. We examined the results in SmartPLS, and there are a few indirect effects, but they are not significant. We have explained the hypothesis development and model construction on pages 9-12 of the manuscript.
Comments 7: In line 438, the authors mention having started with 1132 subjects and going down to 880 after “data cleaning with SPSS”. Please detail the “cleaning process”. Justify all the reasons data has been excluded.
Response 7: Agree. We have included a paragraph about the data cleaning and preparation process in the methods and materials section to emphasize this point. We have added the following on page 13 of the manuscript:
An online survey was conducted using a convenience snowball technique, and data were collected from online users within Australia through Facebook, Twitter, LinkedIn, and Nextdoor, with an initial sample size of 1132. Data were first exported from the Qualtrics platform in .csv format and imported into IBM SPSS Statistics 3.0 for data preparation and cleaning. We checked and imputed the quantitative data using the IBM SPSS tool to clean it through a proper validation process [17]. Data imputation and cleaning were then executed by checking missing values, verifying data types, recoding, handling outliers, cross-tabulating to identify illogical responses, and labelling [5, 9]. Finally, a sample size of 880 (N=880) was confirmed, and the data file was analysed in SmartPLS 4.0. The response rate was 78%.
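For illustration only, the sketch below mirrors the described cleaning steps in pandas; the authors performed these steps in IBM SPSS, and the data frame, column names, and thresholds here are hypothetical.

```python
# Illustrative pandas sketch of the cleaning steps described above (the authors
# used IBM SPSS); the data, column names, and thresholds are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
item_cols = [f"Q{i}" for i in range(1, 19)]                    # 18 measurement items
raw = pd.DataFrame(rng.integers(1, 6, size=(1132, 18)), columns=item_cols)
raw["used_ar_app"] = rng.choice(["Yes", "No"], size=1132)
raw.loc[rng.choice(1132, 60, replace=False), "Q3"] = np.nan    # simulate missing values

clean = raw.copy()

# 1. Data-type verification: coerce item responses to numeric Likert codes.
clean[item_cols] = clean[item_cols].apply(pd.to_numeric, errors="coerce")

# 2. Missing-value handling: keep respondents with at least 80% of items answered.
clean = clean.dropna(subset=item_cols, thresh=int(0.8 * len(item_cols)))

# 3. Illogical / careless responses, e.g. straight-lining across all items.
clean = clean[clean[item_cols].nunique(axis=1) > 1]

# 4. Screening check: drop respondents who reported never using an AR app.
clean = clean[clean["used_ar_app"] == "Yes"]

print(f"Retained {len(clean)} of {len(raw)} responses")
```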
Comments 8: It might be worth extending the discussion with some information on the impact of the results. The authors have included a short paragraph about how this information can be leveraged to make marketing applications with better UX, but no specifics on how this information can actually be used, i.e. principles to rely on or recommendations for creating such applications
Response 8: Agree. We have included a paragraph with practically oriented recommendations in the discussion section to emphasize this point. We have added the following on page 19 of the manuscript:
Mobile platform designers and developers can deliver immersive UX through realistic AI- and AR-based rendering, enable richer interaction capabilities, extend AR features to social media, enhance the recommendation process through predictive analytics, and emphasize building user trust and attitude-driven mechanisms. The platform industry can enhance its design and strategic capabilities throughout the product development cycle, following the research outcomes, to create more inclusive, human-centric, and sustainable platforms.
Comments 9: It might be interesting to use objective measures instead of self-report in future studies.
Response 9: Agree. We have addressed this comment and revisited the future work in the discussion section to emphasize this point. We have added the following on page 20 of the manuscript:
This study can be extended to other industry contexts such as digital health monitoring, tourism, intelligent transportation, and gaming, where future research can assess user experience across diversified user-platform interactions. Also, UX designers and developers can build an interactive AR platform and conduct an experimental study focusing on assessing decision-making for design considerations.
Comments 10: Supplementary Table in Appendix 1 is too crammed
Response 10: We have redesigned it to improve readability. We have revised Table A1 on pages 21-22 of the manuscript.
Comments 11: Can a supplementary list of the questions be added to the paper?
Response 11: We have included the questionnaire in Table A2 of Appendix A (pages 24-25 of the manuscript).
4. Response to Comments on the Quality of English Language
Point 1:
Response 1: The English is fine and does not require any improvement.
5. Additional clarifications
N/A
Round 2
Reviewer 2 Report
Comments and Suggestions for Authors
The article is ready for publication.