Time-Series Recommendation Quality, Algorithm Aversion, and Data-Driven Decisions: A Temporal Human–AI Interaction Perspective
Abstract
1. Introduction
2. Related Literature
3. Research Model and Hypotheses
3.1. Research Constructs
3.1.1. Time-Series Recommendation Quality: Accuracy, Novelty, and Diversity
3.1.2. Perceived Usefulness
3.1.3. Privacy Belief
3.2. Research Hypotheses
3.2.1. Time-Series Recommendation Quality and Buyers’ Algorithm Aversion
3.2.2. The Mediation of Perceived Usefulness
3.2.3. The Moderating Effect of Privacy Beliefs
4. Methodology
4.1. Data Collection
4.2. Questionnaire
4.3. Methods
5. Empirical Results
5.1. Assessment of the Measurement Model
5.2. Correlation Analysis
5.3. Regression Analysis
5.4. Mediation Analysis
5.5. Moderation Analysis
6. Conclusions and Discussion
6.1. Findings
6.2. Theoretical Contribution
6.3. Practical Implication
6.4. Limitations
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References

| Indicator | Value | Frequency | Percentage (%) | Cumulative Percentage (%) |
|---|---|---|---|---|
| Gender | Male | 112 | 54.63 | 54.63 |
| | Female | 93 | 45.37 | 100.00 |
| Age | <18 | 46 | 22.44 | 22.44 |
| | 18–30 | 64 | 31.22 | 53.66 |
| | 31–43 | 50 | 24.39 | 78.05 |
| | 44–56 | 32 | 15.61 | 93.66 |
| | >57 | 13 | 6.34 | 100.00 |
| Education | Senior high school | 70 | 34.15 | 34.15 |
| | College degree | 30 | 14.63 | 48.78 |
| | Bachelor's degree | 93 | 45.37 | 94.15 |
| | Master's degree or above | 12 | 5.85 | 100.00 |
| Income (RMB) | <2000 | 48 | 23.41 | 23.41 |
| | 2000–4000 | 26 | 12.68 | 36.10 |
| | 4000–6000 | 43 | 20.98 | 57.07 |
| | 6000–8000 | 61 | 29.76 | 86.83 |
| | 8000–20,000 | 25 | 12.20 | 99.02 |
| | >20,000 | 2 | 0.98 | 100.00 |
| Most used platform | Taobao | 87 | 42.44 | 42.44 |
| | JD | 54 | 26.34 | 68.78 |
| | Pinduoduo | 46 | 22.44 | 91.22 |
| | Other | 18 | 8.78 | 100.00 |
| Frequency of online shopping | Never | 5 | 2.44 | 2.44 |
| | Sometimes | 147 | 71.71 | 74.15 |
| | Often | 53 | 25.85 | 100.00 |
| Spending on online shopping (RMB) | <500 | 90 | 43.90 | 43.90 |
| | 500–1000 | 60 | 29.27 | 73.17 |
| | 1000–1500 | 36 | 17.56 | 90.73 |
| | >1500 | 19 | 9.27 | 100.00 |
| Total | | 205 | 100.00 | 100.00 |
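The descriptive breakdown above can be reproduced with a few lines of standard data-analysis code. The sketch below is illustrative only: it uses a hypothetical response vector rather than the authors' survey data, and simply shows how frequency, percentage, and cumulative-percentage columns of this kind are tabulated.

```python
# Illustrative sketch (not the authors' code): tabulating frequency, percentage,
# and cumulative percentage for one demographic indicator with pandas.
import pandas as pd

# Hypothetical responses for the "Gender" indicator; the real survey data are not public.
responses = pd.Series(["Male"] * 112 + ["Female"] * 93, name="Gender")

freq = responses.value_counts()                   # absolute frequencies
pct = (freq / freq.sum() * 100).round(2)          # percentage of the 205 respondents
cum_pct = pct.cumsum().round(2)                   # cumulative percentage

summary = pd.DataFrame({"Frequency": freq, "Percentage": pct, "Cumulative": cum_pct})
print(summary)
```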
| Construct | Item |
|---|---|
| Recommendation Accuracy (RA) | |
| Recommendation Novelty (RN) | |
| Recommendation Diversity (RD) | |
| Perceived Usefulness (PU) | |
| Privacy Belief (PB) | |
| Algorithm Aversion (AA) | |
| Variable | Number of Items | Sample Size (N) | Cronbach's α |
|---|---|---|---|
| RA | 5 | 205 | 0.911 |
| RN | 4 | 205 | 0.892 |
| RD | 3 | 205 | 0.837 |
| PU | 4 | 205 | 0.902 |
| PB | 4 | 205 | 0.892 |
| AA | 4 | 205 | 0.894 |
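As a point of reference for the reliability figures above, the following is a minimal sketch of how Cronbach's α is computed from item-level responses using the standard formula α = k/(k−1) × (1 − Σσ²ᵢ/σ²ₜ). The item scores are simulated placeholders (the questionnaire data are not reproduced here), so the printed value only illustrates the calculation, not the reported coefficients.

```python
# Minimal sketch of Cronbach's alpha; the item scores below are simulated
# placeholders, not the study's survey responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(205, 1))               # shared construct score
ra_items = np.clip(np.round(4 + latent + rng.normal(scale=0.6, size=(205, 5))), 1, 7)
print(f"Cronbach's alpha (simulated 5-item scale): {cronbach_alpha(ra_items):.3f}")
```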
| Variable | Bartlett's Test of Sphericity (Approx. χ²) | Degrees of Freedom | p-Value | KMO Value |
|---|---|---|---|---|
| RA | 654.073 | 10 | 0.000 | 0.890 |
| RN | 456.892 | 6 | 0.000 | 0.844 |
| RD | 246.697 | 3 | 0.000 | 0.711 |
| PU | 504.740 | 6 | 0.000 | 0.842 |
| PB | 461.473 | 6 | 0.000 | 0.841 |
| AA | 466.892 | 6 | 0.000 | 0.845 |
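For readers who want to verify figures of this kind, the sketch below implements Bartlett's test of sphericity and the KMO measure directly from a correlation matrix using the standard textbook formulas. It runs on simulated item responses and is not the authors' analysis script.

```python
# Sketch of Bartlett's test of sphericity and the KMO measure of sampling adequacy.
import numpy as np
from scipy import stats

def bartlett_sphericity(x: np.ndarray):
    """x: respondents x items. Tests H0: the correlation matrix is an identity matrix."""
    n, p = x.shape
    r = np.corrcoef(x, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(r))
    df = p * (p - 1) / 2
    return chi2, df, stats.chi2.sf(chi2, df)

def kmo(x: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure, based on zero-order vs. partial correlations."""
    r = np.corrcoef(x, rowvar=False)
    inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                            # partial correlations
    np.fill_diagonal(r, 0)
    np.fill_diagonal(partial, 0)
    return (r ** 2).sum() / ((r ** 2).sum() + (partial ** 2).sum())

rng = np.random.default_rng(1)
factor = rng.normal(size=(205, 1))
items = factor + rng.normal(scale=0.7, size=(205, 4))   # four correlated items
chi2, df, p = bartlett_sphericity(items)
print(f"Bartlett chi2={chi2:.3f}, df={df:.0f}, p={p:.4f}; KMO={kmo(items):.3f}")
```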
| Variable | RA | RN | RD | PU | PB | AA |
|---|---|---|---|---|---|---|
| RA | 1 | | | | | |
| RN | 0.888 ** | 1 | | | | |
| RD | 0.851 ** | 0.868 ** | 1 | | | |
| PU | 0.895 ** | 0.893 ** | 0.866 ** | 1 | | |
| PB | 0.123 | 0.075 | 0.039 | 0.083 | 1 | |
| AA | −0.914 ** | −0.908 ** | −0.862 ** | −0.903 ** | −0.046 | 1 |
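A correlation matrix like the one above is typically produced as follows. The construct scores here are simulated stand-ins chosen only so that the signs roughly mirror the reported pattern (quality dimensions positively intercorrelated, algorithm aversion negatively related to them, privacy belief largely uncorrelated); the printed values will not match the table.

```python
# Sketch of a Pearson correlation matrix of construct scores (simulated data).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
quality = rng.normal(size=205)                        # shared "recommendation quality" signal
df = pd.DataFrame({
    "RA": quality + rng.normal(scale=0.4, size=205),
    "RN": quality + rng.normal(scale=0.4, size=205),
    "PU": quality + rng.normal(scale=0.4, size=205),
    "AA": -quality + rng.normal(scale=0.4, size=205),  # aversion moves opposite to quality
    "PB": rng.normal(size=205),                        # privacy belief left uncorrelated
})

print(df.corr(method="pearson").round(3))              # full correlation matrix
r, p = stats.pearsonr(df["RA"], df["AA"])              # significance test for one pair
print(f"r(RA, AA) = {r:.3f}, p = {p:.3g}")
```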
| Variable | B (Unstandardized) | S.E. | Beta (Standardized) | t | p-Value | VIF | Tolerance |
|---|---|---|---|---|---|---|---|
| Constant | 23.803 | 0.362 | - | 65.755 | 0.000 | - | - |
| RA | −0.371 | 0.046 | −0.453 *** | −8.129 | 0.000 | 5.402 | 0.185 |
| RN | −0.373 | 0.059 | −0.375 *** | −6.364 | 0.000 | 6.042 | 0.166 |
| RD | −0.202 | 0.069 | −0.151 *** | −2.936 | 0.004 | 4.612 | 0.217 |
| R² | 0.885 | | | | | | |
| Adjusted R² | 0.883 | | | | | | |
| F statistic | 513.481 *** | | | | | | |
| Durbin–Watson | 2.067 | | | | | | |
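The layout above (coefficients plus VIF, tolerance, and Durbin–Watson diagnostics) corresponds to an ordinary least-squares regression of AA on the three quality dimensions. The sketch below shows one way such a model and its diagnostics can be estimated; the data are simulated, so the printed coefficients will not match the reported ones.

```python
# Sketch of an OLS regression with collinearity and autocorrelation diagnostics.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(3)
quality = rng.normal(size=205)
df = pd.DataFrame({
    "RA": quality + rng.normal(scale=0.5, size=205),
    "RN": quality + rng.normal(scale=0.5, size=205),
    "RD": quality + rng.normal(scale=0.5, size=205),
})
df["AA"] = 24 - 0.4 * df["RA"] - 0.4 * df["RN"] - 0.2 * df["RD"] + rng.normal(scale=0.8, size=205)

X = sm.add_constant(df[["RA", "RN", "RD"]])
model = sm.OLS(df["AA"], X).fit()
print(model.summary())

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=["RA", "RN", "RD"], name="VIF",
)
print(vif.round(3))
print(f"Durbin-Watson: {durbin_watson(model.resid):.3f}")
```

By convention, VIF values below 10 (equivalently, tolerance above 0.1) are read as acceptable multicollinearity, which is consistent with the diagnostics reported above.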
| Path | Total Effect | Mediating Effect | 95% Bootstrap CI | Direct Effect |
|---|---|---|---|---|
| RA → PU → AA | −0.371 ** | −0.073 | [−0.152, −0.037] | −0.299 ** |
| RN → PU → AA | −0.373 ** | −0.076 | [−0.140, −0.027] | −0.297 ** |
| RD → PU → AA | −0.202 ** | −0.072 | [−0.095, −0.020] | −0.129 |
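Bootstrap confidence intervals of this kind are the usual way of testing an indirect effect. The following sketch illustrates a percentile-bootstrap test of one such path (RA → PU → AA) on simulated data; the helper function and effect sizes are illustrative assumptions, not the authors' procedure.

```python
# Sketch of a percentile-bootstrap test of an indirect effect: the indirect effect is
# a*b, where a is RA -> PU and b is PU -> AA controlling for RA. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 205
ra = rng.normal(size=n)
pu = 0.8 * ra + rng.normal(scale=0.6, size=n)            # mediator driven by RA
aa = -0.3 * ra - 0.5 * pu + rng.normal(scale=0.6, size=n)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # m -> y | x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)                      # resample respondents with replacement
    boot.append(indirect_effect(ra[idx], pu[idx], aa[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect = {indirect_effect(ra, pu, aa):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The percentile bootstrap is generally preferred to a normal-theory (Sobel) test here because the sampling distribution of the product a·b is skewed in finite samples.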
| Variable | Model 1 | Model 2 | Model 3 |
|---|---|---|---|
| Constant | 10.506 *** (0.147) | 10.501 *** (0.153) | 10.501 *** (0.185) |
| RA | −0.757 *** (0.023) | −0.373 *** (0.046) | −0.371 *** (0.046) |
| RN | −0.377 *** (0.061) | −0.905 *** (0.029) | −0.373 *** (0.059) |
| RD | −0.201 ** (0.069) | −0.201 ** (0.069) | −1.149 *** (0.048) |
| PB | 0.072 * (0.031) | 0.022 (0.032) | −0.015 (0.038) |
| RA × PB | −0.001 (0.006) | | |
| RN × PB | 0.001 (0.006) | | |
| RD × PB | 0.003 (0.010) | | |
| R² | 0.840 | 0.825 | 0.742 |
| Adjusted R² | 0.838 | 0.823 | 0.739 |
| F | 352.975 *** | 316.477 *** | 193.152 *** |
| ∆R² | 0.000 | 0.000 | 0.000 |
| ∆F | 0.023 | 0.022 | 0.063 |
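Finally, the hierarchical models above test whether privacy belief moderates the quality–aversion link via product terms (e.g., RA × PB). A minimal sketch of that kind of test, with mean-centered predictors and an incremental-R² check, is given below; the variable names mirror the constructs, but the data are simulated rather than the authors' dataset.

```python
# Sketch of a moderation test: regress AA on a quality dimension (RA), privacy belief
# (PB), and their mean-centered product term, then check the incremental R-squared.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 205
df = pd.DataFrame({"RA": rng.normal(size=n), "PB": rng.normal(size=n)})
df["AA"] = 10 - 0.75 * df["RA"] + 0.05 * df["PB"] + rng.normal(scale=0.7, size=n)

# Mean-center predictors before forming the product term to ease interpretation.
df["RA_c"] = df["RA"] - df["RA"].mean()
df["PB_c"] = df["PB"] - df["PB"].mean()

base = smf.ols("AA ~ RA_c + PB_c", data=df).fit()
interact = smf.ols("AA ~ RA_c + PB_c + RA_c:PB_c", data=df).fit()

delta_r2 = interact.rsquared - base.rsquared             # incremental variance explained
print(interact.params.round(3))
print(f"Interaction p-value: {interact.pvalues['RA_c:PB_c']:.3f}, Delta R^2: {delta_r2:.4f}")
```

A near-zero ∆R² and a non-significant interaction term, as in the table above, indicate that the moderator does not meaningfully change the slope of the focal predictor.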
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

