Article

Willingness to Use Algorithms Varies with Social Information on Weak vs. Strong Adoption: An Experimental Study on Algorithm Aversion

Faculty of Business, Ostfalia University of Applied Sciences, Siegfried-Ehlers-Str. 1, D-38440 Wolfsburg, Germany
FinTech 2024, 3(1), 55-65; https://doi.org/10.3390/fintech3010004
Submission received: 27 December 2023 / Revised: 12 January 2024 / Accepted: 18 January 2024 / Published: 21 January 2024

Abstract

The process of decision-making is increasingly supported by algorithms in a wide variety of contexts. However, the phenomenon of algorithm aversion conflicts with the development of the technological potential that algorithms bring with them. Economic agents tend to base their decisions on those of other economic agents. Therefore, this experimental approach examines the willingness to use an algorithm when making stock price forecasts when information about the prior adoption of an algorithm is provided. It is found that decision makers are more likely to use an algorithm if the majority of preceding economic agents have also used it. Willingness to use an algorithm varies with social information about prior weak or strong adoption. In addition, the affinity for technological interaction of the economic agents shows an effect on decision behavior.

1. Introduction

Decision-making processes are increasingly supported by algorithms that are available to economic agents. This article contributes to the recent scientific discussion by focusing on the phenomenon of algorithm aversion and herd behavior. Despite the superior performance of algorithms, humans tend to distrust them and prefer human judgment. This has implications for the implementation and acceptance of algorithmic systems. At the same time, herd behavior, especially in relation to decision-making processes, is a field of research that examines the influence of social dynamics on individual decisions. This paper considers how algorithm aversion and herd behavior can interact. It intends to broaden the perspective on understanding the reasons for algorithm aversion and the mechanisms of herd behavior in human decision making. In particular, it examines how economic agents use algorithms when information about prior adoption is available. It considers how algorithm aversion changes when economic agents receive information about how often an algorithm is used by other economic agents.
In a wide variety of domains, such as asset management [1,2], justice [3,4], medicine [5,6,7], sports [8], or predictive policing [9], stochastic models or algorithms are increasingly used to make predictions. The forecasting performance of these models is often superior to the forecasting performance of humans [10,11,12,13]. Algorithms can identify complex connections in large data sets where humans reach their cognitive limits. Nonetheless, it appears that rejection of predictions by automated methods is widespread (for a literature review, see [14]), and people often opt out of using superior algorithms and instead choose less-accurate predictions by humans [15,16,17].
The negative attitude towards algorithms is referred to as algorithm aversion. This describes the fact that economic agents refrain from using an algorithm as soon as they realize that it is superior but still not error-free [15,18]. People often respond to an algorithm's prediction errors with strong rejection, whereas prediction errors made by humans provoke a much weaker reaction [15]. Economic agents underestimate the accuracy of stochastic models and prefer predictions made by humans [16]. Given the power of algorithms in forecasting, algorithm aversion is particularly harmful: by not using them, economic agents end up relying on inferior forecasts. In other words, choosing human forecasts over algorithmic forecasts reduces the chance of success. Although algorithms outperform human forecasts in quality, the maximum benefit can only be achieved in the long run if economic agents give preference to algorithmic forecasts over human forecasts (for detailed literature reviews on algorithm aversion, see [19,20,21]).

2. Literature Review and Hypothesis Development

Algorithm aversion occurs primarily when economic agents interact with algorithms that do not make error-free predictions, and thus economic agents are occasionally given bad advice [15,18]. In the context of research on algorithm aversion, the decision-making behavior of economic agents is considered in different contexts. For example, the perceived objectivity of a task affects the willingness to use an algorithm. Economic agents are more willing to use an algorithm if it performs an apparently objective task rather than an apparently subjective task. However, perceived objectivity is malleable via description, and as the objectivity of a task increases, so does the willingness to use an algorithm [10]. The response time of an algorithm also has an impact on the willingness of economic agents to use it. Forecasts generated slowly by algorithms are perceived as less reliable and therefore used less frequently than forecasts that are generated quickly [22].
Economic agents who gain experience with an algorithm by working on incentivized, similar tasks under regular feedback learn to better assess the limits of their own abilities and use an algorithm more often [23]. Another approach shows that perceived learning from mistakes by algorithms and humans has an impact on algorithm aversion. After making mistakes, algorithms are perceived as less capable of learning compared to humans. However, if evidence is provided that an algorithm can learn from mistakes, this leads to higher trust and more frequent use of the algorithm [24].
There are other ways to mitigate algorithm aversion such as humanizing algorithms [10,25], providing different explanations of the automated forecast result [26], or providing a suitable representation of the automated forecast result [27]. If economic agents are granted a possibility to influence the forecast result in the form of a subsequent adjustment, the willingness to use it increases significantly. This holds true even when the possibilities for adjusting the forecast result are severely limited [28].
Economic agents tend to align their behavior with the behavior of other economic agents [29,30]. Social orientation to the behavior of others is referred to as herding behavior. This describes the phenomenon that economic agents follow the actions of other economic agents (a herd), regardless of whether the actions are rational or irrational [31]. Non-rational herd behavior occurs when economic agents blindly mimic the actions of other economic agents and largely forgo the incorporation of rational considerations into decision making [31,32]. The ability to observe the decisions of other economic agents (for example, the investment decision of a colleague) can also lead to herd behavior [32]. Thus, by imitating the actions of others, the behavior of many individual economic agents can become aligned [29,30,33].
Herd behavior is observable in stock markets, for example, during stock market crashes, which are studied in the context of investment and financial decisions [31,34,35,36]. The events surrounding GameStop’s stock from the winter of 2020 into the spring of 2021 demonstrated the powerful impact of herd behavior. Retail investors initiated a short squeeze of institutional investors who had bet on a decline in the stock price. As a result, GameStop’s stock price jumped from about USD 10 in October 2020 to as high as USD 480 in January 2021, leading to substantial losses on the part of institutional investors who had bet on a decline in the stock price [37,38,39]. In this context, Betzer and Harries [40] show a positive relationship between coordinated activities on social media and various trading measures.
The present study focuses on the decision-making behavior of economic agents who have an algorithm at their disposal during the decision process. It is possible that the influence of other economic agents’ decisions may have an effect on the extent of algorithm aversion. This paper therefore aims to investigate whether economic agents interacting with algorithms are influenced by the decisions of others and adapt their own decisions to the behavior of others. To this end, an incentivized economic experiment is conducted in which economic agents receive information about the (low or high) willingness to use an algorithm from previously deciding economic agents before deciding whether to use an algorithm themselves. It is of interest whether economic agents mimic each other’s behavior and are more or less willing to use an algorithm. Convergent social behavior [30] could lead to economic agents being more willing to use an algorithm if prior decision makers have chosen to use an algorithm by a majority (high utilization rate) and vice versa. In areas such as social commerce, it has also been shown that information about what others are doing can increase trust in new technologies and drive sales [41,42]. Alexander, Blinder and Zak [43] showed that the availability of social information about the use of an algorithm may have an impact on the willingness to use it. Therefore, it is hypothesized:
H.
Economic agents who receive information about prior high adoption of an algorithm are more likely to use an algorithm than economic agents who receive information about prior low adoption.
This could have interesting implications for practice: algorithm aversion could be reduced by providing economic agents with information about the high willingness to use the algorithm. If this contributes to a more frequent use of a powerful algorithm, an increase in economic efficiency can be enabled at the same time. Considering the forecasting success of algorithms, it is advisable to use them over forecast predictions of humans in most cases [15,16]. Nevertheless, many economic agents are reluctant to use an algorithm, thereby reducing the quality of their forecasts. They deliberately forgo the use of a superior algorithm at the expense of their forecasting success, preferring their own forecasts [20].

3. Research Methods

3.1. Participants

For participation in the economic experiment, 285 subjects were recruited online via Amazon Mechanical Turk (MTurk) and CloudResearch. Of these, 31 subjects were excluded from the analysis because they answered at least one comprehension question incorrectly (in a maximum of two attempts) or failed an attention check. This leaves a sample of 254 subjects, of which 50.4% are female and 49.6% are male. The mean age is 40.6 years (σage = 10.97). The experiment was programmed as a survey in Qualtrics and conducted on 28 November 2022. The average completion time was 7.02 min. Subjects received a fixed show-up fee of USD 0.30 and a performance-based bonus of up to USD 1.67.

3.2. Design

To conduct the economic experiment, a task on stock price forecasting was designed. Forecasting tasks in this domain have also been used in other studies examining decision behavior in cooperation with algorithms or stochastic models [10,16,44]. Subjects are told that the task is to make ten stock price predictions. Subjects are provided with the stock price of the A stock for periods 1 to 30 on the one hand and the stock price of the B stock for periods 1 to 20 on the other hand (Figure 1). The forecast object is the stock price of the B stock in periods 21 to 30. Subjects are further told that the companies of the A stock and B stock operate in the same industry and are therefore closely related. That is, the success of the A-stock company is closely related to the success of the B-stock company. As a result, a rising price of the A stock is likely to be accompanied by a rising price of the B stock, and vice versa. Thus, subjects can draw conclusions about the development of the price of the B stock from the development of the price of the A stock. In fact, the prices of the A stock and B stock have a correlation coefficient of 0.94 in periods 1 to 20.
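As an illustrative sketch, the reported correlation between the two price series can be computed in a few lines. The short price series below are hypothetical stand-ins, not the experimental data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equally long price series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical A-stock and B-stock prices for five periods (not the study data):
price_a = [50, 55, 60, 58, 65]
price_b = [80, 90, 97, 94, 103]
print(round(pearson_r(price_a, price_b), 2))  # 0.99
```

In the experiment itself, the in-sample correlation of 0.94 between the A-stock and B-stock prices is what makes the A stock informative for forecasting the B stock.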
Next, the subjects learn about the incentive model, which consists of two components. On the one hand, a fixed show-up fee of USD 0.30 is paid and, on the other hand, a performance-related bonus is paid, which is based on the accuracy of the forecasts made and is higher the more accurate the forecasts are. To determine the amount of the performance-related bonus, the percentage deviation from the actual stock price is calculated for each individual forecast and paid on a sliding scale (Table 1). A forecast is only rewarded if it deviates by a maximum of 15 percent from the actual stock price. In this way, a maximum payment of USD 1.97 can be achieved. The compensation is earned in Coins during the experiment and exchanged at a conversion rate of 300 Coins = USD 1 at the end of the experiment.
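The payout logic can be sketched as follows. The exact sliding scale is given in Table 1 and is not reproduced here, so the tier values below are assumptions chosen only to illustrate the mechanism (percentage deviation per forecast, no reward above 15 percent, payment in Coins at 300 Coins = USD 1):

```python
COINS_PER_USD = 300  # reported conversion rate: 300 Coins = USD 1

def forecast_payout(predicted, actual, tiers=((5, 50), (10, 30), (15, 10))):
    """Coins earned for one forecast under an ASSUMED tier scheme (not Table 1)."""
    deviation = abs(predicted - actual) / actual * 100  # percentage deviation
    for max_deviation, coins in tiers:
        if deviation <= max_deviation:
            return coins
    return 0  # deviation above 15 percent: no reward

print(forecast_payout(98, 100))  # 2% deviation -> top tier: 50 Coins
print(forecast_payout(80, 100))  # 20% deviation -> no reward: 0 Coins
print(round(10 * 50 / COINS_PER_USD, 2))  # ten top-tier forecasts -> USD 1.67
```

Under these assumed tiers, ten forecasts at the top tier would earn 500 Coins, i.e., USD 1.67, which coincides with the reported maximum bonus; the actual tier boundaries and Coin amounts may differ.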
Subsequently, the subjects are informed that, in addition to the possibility of making their own stock price forecasts, a forecasting computer (algorithm) is available. The subjects are informed that in the past, the stock price forecasts of the algorithm deviated by a maximum of 10 percent from the actual stock price in 6 out of 10 cases. In addition, subjects receive information about the low versus high acceptance of the algorithm from the pre-survey, depending on the treatment. They are informed that the algorithm has the same information about the price trends of the A stock and B stock as the subjects. Before submitting their forecasts, the subjects have a one-time choice of whether their own stock price forecasts or the algorithm’s stock price forecasts should be used to determine the performance-based bonus. This approach is in line with other studies on algorithm aversion [15,28]. The order for displaying the two options is randomized. However, regardless of the choice, subjects must make their own stock price predictions.
The study is designed as a between-subjects design. Subjects are randomly assigned to one of two treatments. In Treatment 1 (social information about low acceptance), subjects are informed about the low acceptance to use the algorithm by other economic agents in the pre-survey, in addition to the accuracy of the algorithm. In Treatment 2 (social information about high acceptance), the subjects are informed about the high acceptance of the algorithm by other economic agents in the pre-survey, in addition to the accuracy of the algorithm.
The operation of the forecasting calculator (algorithm) is based on a linear OLS regression of the stock prices of the A stock and B stock in periods 1 to 20 (in-sample range). The resulting regression equation (K_B,t = 1.43 · K_A,t + 10.47) is used to forecast the prices of the B stock in the out-of-sample range of periods 21 to 30, taking the price of the A stock as the independent variable. For example, to predict the B stock price in the first forecast period (period 21), the A stock price of USD 65 (Figure 1) is substituted into the regression equation (K_B,21 = 1.43 · 65 + 10.47). This yields a forecast of USD 103.4 for the price of the B stock in period 21, and so on.
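The forecasting step can be sketched directly from the reported regression equation; the period-21 check reproduces the USD 103.4 forecast given above:

```python
# Reported OLS fit on periods 1-20 (in-sample): K_B,t = 1.43 * K_A,t + 10.47
SLOPE, INTERCEPT = 1.43, 10.47

def forecast_b(price_a):
    """Out-of-sample forecast of the B-stock price from the A-stock price."""
    return SLOPE * price_a + INTERCEPT

# Period 21: an A-stock price of USD 65 yields the USD 103.4 forecast in the text.
print(round(forecast_b(65), 1))  # 103.4
```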
To determine the prior low or high acceptance rate of the algorithm reported in the main study, a pre-survey was conducted in the same setting. The subjects of the pre-survey (n = 29; mean age = 35.48; σage = 10.47; 41.4% female) were divided into two halves according to the score achieved on a questionnaire assessing affinity for technology interaction [45]. In the top half, 35.71% of subjects used the available algorithm; in the bottom half, 71.43% did.

3.3. Procedure

Subjects first learn about the forecasting task and the forecasting object by reading the instructions. The graphical development of the available stock prices of the A stock and B stock is shown (Figure 1). Subjects are given information about the compensation model. Subjects are informed that a forecasting calculator can be used. They receive information about the accuracy of the forecasts of the forecasting calculator and, depending on the treatment, about the previous low or high adoption. Subsequently, the subjects answer some comprehension questions to make sure that the task has been understood. A maximum of two attempts are available for answering. In the next step, the subjects decide whether their own forecasts or the forecasts of the forecasting computer are to be used to determine the performance-related bonus. Regardless of the decision, the subjects then complete the forecasting task and submit ten of their own stock price forecasts. Subsequently, subjects answer a questionnaire consisting of nine items to assess affinity for technology interaction [45] and a short demographic questionnaire. For the attention check, the selection of an option is specified in an additional question. If this is not selected, the control is not passed. Last, subjects are informed about the success of the forecasts used and the compensation achieved (Figure 2).

4. Results

4.1. Forecast Accuracy

The analysis of the accuracy of the forecasts chosen as the basis for compensation shows that the stock price forecasts of the forecasting calculator are more accurate than those of the subjects (Table 2). While the average deviation between predicted and actual stock price is USD 20.51 (or 18.51%) for subjects who chose their own forecasts as the basis for compensation, the forecasts of the forecasting calculator have a forecast error of only USD 10.90 (or 8.56%) (absolute forecast error: t(252) = 16.21; p < 0.001; relative forecast error: t(252) = 14.19; p < 0.001). The subjects' forecasts thus show a forecast error up to 88% higher than those of the forecasting calculator. The bonus paid for forecast accuracy shows the same pattern: using the forecasting calculator results in a bonus approximately 63% higher (t(252) = 17.47; p < 0.001). In terms of forecasting success and the resulting bonus, it is therefore advisable to use the forecasting calculator. On average, the subjects' own forecasts lead to lower forecast success and thus to a lower performance-related bonus.
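The two error measures used here (mean absolute and mean absolute percentage forecast error) can be illustrated with a short sketch; the numbers below are made up for illustration and are not the experimental data:

```python
def forecast_errors(predicted, actual):
    """Mean absolute and mean absolute percentage forecast error."""
    n = len(actual)
    abs_err = sum(abs(p - a) for p, a in zip(predicted, actual)) / n
    pct_err = sum(abs(p - a) / a for p, a in zip(predicted, actual)) / n * 100
    return abs_err, pct_err

# Illustrative forecasts only (not the experimental data):
actual = [100, 110, 120]
human = [90, 130, 100]
abs_e, pct_e = forecast_errors(human, actual)
print(f"{abs_e:.2f} USD, {pct_e:.2f}%")  # 16.67 USD, 14.95%
```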

4.2. Willingness to Use the Algorithm

Despite the higher accuracy of the algorithm’s stock price forecasts, a large proportion of the subjects refrained from using the algorithm as the basis for determining the performance-related bonus. Overall, 41.34% of the subjects preferred their own stock price forecasts to the forecasts of the forecasting calculator, thereby reducing their forecast success.
Hypothesis H states that subjects who are informed about a prior high acceptance of an algorithm use it more often than subjects who receive information about a prior low acceptance. The only difference between the two treatments is the information about the low or high adoption of the forecasting calculator, which was obtained from the pre-survey. In fact, the subjects' decision to use the forecasting calculator differs between the treatments. When informed about the low acceptance of an algorithm (T1), 51.97% of subjects used the forecasting calculator to determine the performance-based bonus (Table 3; Figure 3). In contrast, when informed about the high acceptance of an algorithm (T2), 65.35% of the subjects used the forecasting calculator (χ2 (n = 254) = 4.69; p = 0.030). Thus, H is supported. Social information about frequent use of the algorithm leads subjects to use the algorithm significantly more often: information about other economic agents' prior willingness to use an algorithm affects the decision to use it.
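The reported treatment comparison can be reproduced with a Pearson chi-square test on a 2 × 2 table. The counts below are reconstructed from the reported shares (51.97% and 65.35% of 127 subjects per treatment), and the p-value uses the closed form for one degree of freedom:

```python
import math

# Counts reconstructed from the reported shares: 51.97% of 127 (T1) and
# 65.35% of 127 (T2) chose the forecasting calculator.
used_t1, not_used_t1 = 66, 61  # social information about low acceptance
used_t2, not_used_t2 = 83, 44  # social information about high acceptance

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for a 2x2 table, df = 1."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # upper tail of chi-square with df = 1
    return chi2, p

chi2, p = chi2_2x2(used_t1, not_used_t1, used_t2, not_used_t2)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # chi2 = 4.69, p = 0.030
```

This matches the reported statistic, which suggests the test was run without a continuity correction.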
Further analyses show that the choice to use the algorithm is influenced by gender. In both Treatment 1 (χ2 (n = 127) = 3.69; p = 0.055) and Treatment 2 (χ2 (n = 127) = 8.14; p = 0.004), women use the algorithm more frequently than men. Comparing treatments by gender shows that the effect is driven mainly by women (χ2 (n = 128) = 3.20; p = 0.073) and less by men (χ2 (n = 126) = 0.70; p = 0.402). At low adoption (T1), 61.40% of women used the algorithm, whereas at high adoption (T2), 76.06% did. Among men, by contrast, 44.29% used the algorithm in T1 and 51.79% in T2 (Table 4). The age of the subjects shows no statistically significant effect on decisions (t(252) = 1.97; p = 0.278).
Regarding affinity for technology interaction (ATI), the subjects had a mean ATI score of 3.84 overall. An ATI score of 1.00 corresponds to a low affinity for technology interaction and an ATI score of 6.00 to a high affinity. The ATI scores themselves hardly differ between treatments: in both, about 30% of subjects show a low ATI score and about 70% a high ATI score (Table 5). Thus, most subjects exhibit a high affinity for technology interaction. The differences in the decision to use the algorithm by ATI score, however, are notable. In general, subjects with a low ATI score use the algorithm more often than subjects with a high ATI score. While 64.86% of subjects with a low ATI score use the algorithm when given information about low acceptance, as many as 84.21% of subjects with a low ATI score choose the algorithm when given information about high acceptance (χ2 (n = 75) = 3.71; p = 0.054). Subjects with a high ATI score use the algorithm in 46.67% of cases when they receive information about low acceptance and in 57.30% of cases when they receive information about high acceptance (χ2 (n = 179) = 2.03; p = 0.154). Thus, subjects with a low ATI (84.21%) are more likely than subjects with a high ATI (57.30%) to be persuaded to use the algorithm by social information about high acceptance (χ2 (n = 127) = 8.52; p = 0.004).

5. Discussion

Aversion to algorithms, which are often more successful on average, is costly in the forecasting process because algorithmic offerings go unused and decision makers do not benefit from the higher accuracy that algorithmic predictions often provide [24]. Algorithm aversion has been shown to decrease when decision makers are able to customize the algorithmic prediction [28]. Nevertheless, a trade-off arises here: while the possibility of adjustment increases the acceptance of algorithms, the adjustments made simultaneously decrease the quality of the final decisions [46].
The results of the present study show that a reduction in algorithm aversion can be achieved even without adjusting the algorithmic prediction. Thus, any worsening of the final decisions due to adjustments can be ruled out. Economic agents also tend to be guided by the decisions of other economic agents in the process of algorithmic decision making, which is consistent with findings on herd behavior [29]. Economic agents who are informed about an algorithm’s prior strong adoption by other economic agents in addition to its accuracy are significantly more likely to use an algorithm than economic agents who are informed about its prior weak adoption in addition to its accuracy. However, the results also show that this effect occurs mainly among women and less among men. There is evidence that overconfidence can exert an influence on algorithm aversion [23]. Spiwoks and Bizer [47] show that men’s and women’s judgments diverge sharply when making stock price predictions, and women tend to be more underconfident. This could also affect the decisions to use an algorithm in the present study.
Alexander, Blinder, and Zak [43] examine willingness to use an automated aid in solving a maze in four treatments (no information, information about accuracy, and information about low or high social acceptability). In the two treatments that provide information about the social acceptance of the automated aid, subjects are presented with a social acceptance of 54% and 70%, respectively. As a result, it appears that social information about acceptance (regardless of the extent of acceptance), that is, knowledge that others have used the assistive device, is most likely to persuade economic agents to use the assistive device themselves. However, it is important to note as a limitation that the study was conducted with a small number of subjects who were assigned to two of four treatments. In addition, technical aid is not the best identifiable option and subjects must use a part of their earnings to use the aid. Last, this is not a classic prediction task that can be either automated or performed by a human, but rather an assistive device that can facilitate solving the maze on its own. The present results, unlike the results of Alexander, Blinder, and Zak [43], show that social information about high acceptance is particularly likely to persuade economic agents to use an algorithm. In contrast, when social information is about low adoption, significantly fewer economic agents are willing to use an algorithm. Thus, the willingness to use an algorithm varies with information about weak or strong adoption.
The present results also indicate that primarily economic agents with a low ATI are influenced by the decisions of other economic agents. Interestingly, subjects with a low ATI score use the algorithm more often than subjects with a high ATI score in both treatments. While tech-savvy economic agents are only slightly influenced by prior adoption information, non- or low-tech-savvy economic agents are significantly more likely to use an algorithm when given information about strong adoption than when given information about weak adoption. This could be because less tech-savvy agents are not aware of what the technology is capable of and are therefore generally more likely to be influenced by social information about usage and to follow the crowd. It could also indicate that the weaker an economic agent's own sense of an algorithm's capabilities, the more likely they are to trust the presumed "swarm intelligence" of society, that is, the decisions of previous decision makers. Further data collection and research are required to enable a more in-depth analysis here.
Tech-savvy economic agents are by no means in favor of using technology of every kind. However, they might have a better awareness of when it makes sense to use technology and when to refrain from it. Establishing a connection between the actual motivations for using an algorithm (e.g., accuracy, time pressure, habit) and algorithm aversion, as well as identifying additional ways to reduce algorithm aversion, is left to future research. The present study uses a stock price prediction task. Although stock price prediction can generally be considered a difficult task, the design of this study allows even non-experts to make a prediction. Nevertheless, it is possible that decision makers are more likely to use algorithms in some areas than in others, and these circumstances may affect outcomes. Factors other than ATI or gender may also influence economic agents' decisions to use an algorithm.

6. Conclusions

Algorithm aversion causes economic agents to refrain from using superior algorithms as soon as they realize that these algorithms may be prone to error. Their own forecasts are preferred to the forecasts of algorithms, which can lead to lower forecasting success overall. In the present study's stock price forecasts, a forecasting computer using a simple linear regression is superior to the subjects' own forecasts. Nevertheless, a large proportion of subjects refrain from using the forecasting calculator and rely on subpar forecasts of their own.
The present study shows that the decision to use an algorithm takes the prior behavior of other economic agents into account. If economic agents are informed not only about the accuracy of an algorithm but also about its high adoption among other economic agents, they are significantly more likely to use the algorithm than if they are informed about a previous low adoption. Information about the weak or strong acceptance of an algorithm, i.e., about how other economic agents have decided, thus has an impact on algorithm aversion, and providing information about strong adoption reduces it. The majority of economic agents decide in favor of an algorithm if the majority of previously deciding economic agents have also done so. Owing to the social information about strong adoption, economic agents with a low affinity for technology interaction (ATI) are more likely to use the algorithm than economic agents with a high ATI.
In summary, the willingness to use an algorithm varies with social information about weak or strong adoption. Thus, providing information about the high willingness of other economic agents to use an algorithm can help increase the willingness to use an algorithm in economic practice. This may contribute to an improvement in overall forecast quality and an increase in economic efficiency. Nevertheless, more research is needed to identify causes and further ways to reduce algorithm aversion.

Funding

This research received no external funding.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Acknowledgments

I would like to thank Markus Spiwoks, Ibrahim Filiz, and Marco Lorenz for constructive comments and helpful discussions during the preparation of the study as well as Albert Heinecke for his support throughout the project. I also thank Galen Jiang for valuable advice in reviewing my translation.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Niszczota, P.; Kaszás, D. Robo-investment aversion. PLoS ONE 2020, 15, e0239277.
2. Méndez-Suárez, M.; García-Fernández, F.; Gallardo, F. Artificial Intelligence Modelling Framework for Financial Automated Advising in the Copper Market. J. Open Innov. Technol. Mark. Complex. 2019, 5, 81.
3. Ireland, L. Who errs? Algorithm aversion, the source of judicial error, and public support for self-help behaviors. J. Crime Justice 2019, 43, 174–192.
4. Simpson, B. Algorithms or advocacy: Does the legal profession have a future in a digital world? Inf. Commun. Technol. Law 2016, 25, 50–61.
5. Beck, A.; Sangoi, A.; Leung, S.; Marinelli, R.J.; Nielsen, T.; Vijver, M.J.; West, R.; Rijn, M.V.; Koller, D. Systematic Analysis of Breast Cancer Morphology Uncovers Stromal Features Associated with Survival. Sci. Transl. Med. 2011, 3, 108–113.
6. Ægisdóttir, S.; White, M.J.; Spengler, P.M.; Maugherman, A.S.; Anderson, L.A.; Cook, R.S.; Nichols, C.N.; Lampropoulos, G.; Walker, B.S.; Cohen, G.R.; et al. The Meta-Analysis of Clinical Judgment Project: Fifty-Six Years of Accumulated Research on Clinical versus Statistical Prediction. Couns. Psychol. 2006, 34, 341–382.
7. Grove, W.M.; Zald, D.H.; Lebow, B.S.; Snitz, B.E.; Nelson, C. Clinical versus mechanical prediction: A meta-analysis. Psychol. Assess. 2000, 12, 19–30.
8. Pérez-Toledano, M.; Rodriguez, F.J.; García-Rubio, J.; Ibáñez, S.J. Players’ selection for basketball teams, through Performance Index Rating, using multiobjective evolutionary algorithms. PLoS ONE 2019, 14, e0221258.
9. Mohler, G.O.; Short, M.B.; Malinowski, S.; Johnson, M.E.; Tita, G.E.; Bertozzi, A.; Brantingham, P.J. Randomized Controlled Field Trials of Predictive Policing. J. Am. Stat. Assoc. 2015, 110, 1399–1411.
10. Castelo, N.; Bos, M.W.; Lehmann, D.R. Task-dependent algorithm aversion. J. Mark. Res. 2019, 56, 809–825.
11. Youyou, W.; Kosinski, M.; Stillwell, D. Computer-based personality judgments are more accurate than those made by humans. Proc. Natl. Acad. Sci. USA 2015, 112, 1036–1040.
12. Dawes, R.M.; Faust, D.; Meehl, P.E. Clinical versus actuarial judgment. Science 1989, 243, 1668–1674.
13. Meehl, P.E. Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence; University of Minnesota: Minneapolis, MN, USA, 1954.
14. Alvarado-Valencia, J.A.; Barrero, L.H. Reliance, trust and heuristics in judgmental forecasting. Comput. Hum. Behav. 2014, 36, 102–113.
15. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 2015, 144, 114–126.
16. Önkal, D.; Goodwin, P.; Thomson, M.E.; Gönül, S.; Pollock, A.C. The relative influence of advice from human experts and statistical methods on forecast adjustments. J. Behav. Decis. Mak. 2009, 22, 390–409.
17. Highhouse, S. Stubborn Reliance on Intuition and Subjectivity in Employee Selection. Ind. Organ. Psychol. 2008, 1, 333–342.
18. Prahl, A.; Van Swol, L. Understanding algorithm aversion: When is advice from automation discounted? J. Forecast. 2017, 36, 691–702.
19. Mahmud, H.; Islam, A.N.; Ahmed, S.I.; Smolander, K. What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technol. Forecast. Soc. Chang. 2022, 175, 121390.
20. Burton, J.; Stein, M.; Jensen, T.B. A systematic review of algorithm aversion in augmented decision making. J. Behav. Decis. Mak. 2020, 33, 220–239.
21. Jussupow, E.; Benbasat, I.; Heinzl, A. Why are we averse towards Algorithms? A comprehensive literature Review on Algorithm aversion. In Proceedings of the ECIS, Online, 15–17 June 2020.
22. Efendić, E.; Van de Calseyde, P.P.; Evans, A.M. Slow response times undermine trust in algorithmic (but not human) predictions. Organ. Behav. Hum. Decis. Process. 2020, 157, 103–114.
23. Filiz, I.; Judek, J.R.; Lorenz, M.; Spiwoks, M. Reducing algorithm aversion through experience. J. Behav. Exp. Financ. 2021, 31, 100524.
24. Reich, T.; Kaju, A.; Maglio, S.J. How to overcome algorithm aversion: Learning from mistakes. J. Consum. Psychol. 2022, 1–18, ahead of print.
25. Hodge, F.D.; Mendoza, K.I.; Sinha, R.K. The effect of humanizing robo-advisors on investor judgments. Contemp. Account. Res. 2021, 38, 770–792.
26. Ben David, D.; Resheff, Y.S.; Tron, T. Explainable AI and Adoption of Algorithmic Advisors: An Experimental Study. arXiv 2021, arXiv:2101.02555.
27. Kim, J.; Giroux, M.; Lee, J.C. When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations. Psychol. Mark. 2021, 38, 1140–1155.
28. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Manag. Sci. 2018, 64, 1155–1170.
29. Spyrou, S.I. Herding in financial markets: A review of the literature. Rev. Behav. Financ. 2013, 5, 175–194.
30. Raafat, R.M.; Chater, N.; Frith, C. Herding in humans. Trends Cogn. Sci. 2009, 13, 420–428.
31. Baddeley, M.; Burke, C.J.; Schultz, W.; Tobler, P.N. Herding in Financial Behaviour: A Behavioural and Neuroeconomic Analysis of Individual Differences. 2012. Available online: https://www.repository.cam.ac.uk/handle/1810/257113 (accessed on 11 January 2023).
32. Devenow, A.; Welch, I. Rational herding in financial economics. Eur. Econ. Rev. 1996, 40, 603–615.
33. Hirshleifer, D.; Hong Teoh, S. Herd behaviour and cascading in capital markets: A review and synthesis. Eur. Financ. Manag. 2003, 9, 25–66.
34. Mavruk, T. Analysis of herding behavior in individual investor portfolios using machine learning algorithms. Res. Int. Bus. Financ. 2022, 62, 101740.
35. Deng, G. The Herd Behavior of Risk-Averse Investor Based on Information Cost. J. Financ. Risk Manag. 2013, 2, 87–91.
36. Bikhchandani, S.; Sharma, S.K. Herd Behavior in Financial Markets. IMF Staff Pap. 2000, 47, 279–310.
37. Lyócsa, Š.; Baumöhl, E.; Výrost, T. YOLO trading: Riding with the herd during the GameStop episode. Financ. Res. Lett. 2021, 46, 102359.
38. Vasileiou, E.; Bartzou, E.; Tzanakis, P. Explaining Gamestop Short Squeeze using Intraday Data and Google Searches. J. Predict. Mark. 2021, 3805630, forthcoming.
39. Chohan, U.W. YOLO Capitalism. 2022. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3775127 (accessed on 16 January 2023).
40. Betzer, A.; Harries, J.P. How online discussion board activity affects stock trading: The case of GameStop. Financ. Mark. Portf. Manag. 2022, 36, 443–472.
41. Hajli, N.; Lin, X.; Featherman, M.; Wang, Y. Social Word of Mouth: How Trust Develops in the Market. Int. J. Mark. Res. 2014, 56, 673–689.
42. Amblee, N.; Bui, T.X. Harnessing the Influence of Social Proof in Online Shopping: The Effect of Electronic Word of Mouth on Sales of Digital Microproducts. Int. J. Electron. Commer. 2011, 16, 91–114.
43. Alexander, V.; Blinder, C.; Zak, P.J. Why trust an algorithm? Performance, cognition, and neurophysiology. Comput. Hum. Behav. 2018, 89, 279–288.
44. Gubaydullina, Z.; Judek, J.R.; Lorenz, M.; Spiwoks, M. Comparing Different Kinds of Influence on an Algorithm in Its Forecasting Process and Their Impact on Algorithm Aversion. Businesses 2022, 2, 448–470.
45. Franke, T.; Attig, C.; Wessel, D. A Personal Resource for Technology Interaction: Development and Validation of the Affinity for Technology Interaction (ATI) Scale. Int. J. Hum. Comput. Interact. 2019, 35, 456–467.
46. Sele, D.; Chugunova, M. Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making. Max Planck Institute for Innovation & Competition Research Paper No. 22-20; SSRN 4285645; 2022.
47. Spiwoks, M.; Bizer, K. On the Measurement of Overconfidence: An Experimental Study. Int. J. Econ. Financ. Res. 2018, 4, 30–37.
Figure 1. Stock price development of the A stock and B stock.
Figure 2. Procedure of the study.
Figure 3. Decision behavior in the presence of social information about low vs. high acceptance.
Table 1. Grading of the performance-related bonus by accuracy of each forecast.

| Maximum Deviation in % | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | >15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Bonus in Coins | 50 | 47 | 43 | 40 | 37 | 33 | 30 | 27 | 23 | 20 | 17 | 13 | 10 | 7 | 3 | 0 |
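The bonus schedule in Table 1 maps each forecast's deviation bracket to a payoff. As an illustrative sketch (not the study's actual payout code), the schedule can be expressed as a lookup; the assumption that fractional deviations round up to the next full percent is mine, not stated in the table.

```python
import math

# Bonus in coins for maximum deviations of 1%, 2%, ..., 15%; deviations
# above 15% pay nothing (Table 1).
BONUS_SCHEDULE = [50, 47, 43, 40, 37, 33, 30, 27, 23, 20, 17, 13, 10, 7, 3]

def bonus(deviation_pct: float) -> int:
    """Return the performance-related bonus (in coins) for a forecast whose
    absolute deviation from the realized price is deviation_pct percent."""
    step = math.ceil(deviation_pct)  # assumed: 3.2% falls into the "4%" bracket
    if step <= 0:
        return BONUS_SCHEDULE[0]
    if step > 15:
        return 0
    return BONUS_SCHEDULE[step - 1]
```

For example, a perfectly accurate forecast earns the full 50 coins, while one that misses by more than 15% earns nothing.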
Table 2. Accuracy of forecasts: own forecasts vs. forecasting calculator.

| Basis of Performance-Related Bonus | Own Forecasts | Forecasting Calculator (Algorithm) | t-Test |
|---|---|---|---|
| Ø absolute forecast error [in USD] | 20.51 | 10.90 | t(252) = 16.21; p < 0.001; d = 2.06 |
| Ø relative forecast error [in %] | 18.51 | 8.56 | t(252) = 14.19; p < 0.001; d = 1.80 |
| Ø performance-related bonus [in USD] | 0.51 | 0.83 | t(252) = 17.47; p < 0.001; d = 2.22 |
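The comparisons in Table 2 are paired t-tests (df = 252) with Cohen's d as effect size. The following sketch shows how such statistics can be computed; the error arrays are made-up illustrative values, not the study's data, and the published d may be based on a pooled standard deviation rather than the difference scores used here.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical forecast errors (illustrative only): each pair comes from the
# same participant, once for the own forecast and once for the algorithm.
own_error = [22.0, 19.5, 21.3, 18.8, 20.9]
algo_error = [11.2, 9.8, 10.5, 11.0, 10.7]

diffs = [o - a for o, a in zip(own_error, algo_error)]
n = len(diffs)

# Paired t statistic with n - 1 degrees of freedom.
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))

# Cohen's d computed on the paired differences.
cohens_d = mean(diffs) / stdev(diffs)
```

With this definition, the two quantities are linked by t = d * sqrt(n), which is why large effect sizes go hand in hand with large t-statistics at a fixed sample size.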
Table 3. Decision behavior in the presence of social information about low vs. high acceptance.

| | Total n | Forecasting Calculator (Algorithm), n (%) | Own Forecasts, n (%) |
|---|---|---|---|
| Social low acceptance (T1) | 127 | 66 (51.97%) | 61 (48.03%) |
| Social high acceptance (T2) | 127 | 83 (65.35%) | 44 (34.65%) |
Table 4. Decision-making behavior by gender.

| | Gender | Forecasting Calculator (Algorithm), n (%) | Own Forecast, n (%) |
|---|---|---|---|
| Social low acceptance (T1) | male | 31 (44.29%) | 39 (55.71%) |
| | female | 35 (61.40%) | 22 (38.60%) |
| Social high acceptance (T2) | male | 29 (51.79%) | 27 (48.21%) |
| | female | 54 (76.06%) | 17 (23.94%) |
Table 5. Decision-making behavior by ATI score.

| | ATI Score * | Total, n (%) | Thereof Use Algorithm, % | Thereof Use Own Forecasts, % |
|---|---|---|---|---|
| Social low acceptance (T1) | ≤3.5 | 37 (29.13%) | 64.86% | 35.14% |
| | >3.5 | 90 (70.87%) | 46.67% | 53.33% |
| Social high acceptance (T2) | ≤3.5 | 38 (29.92%) | 84.21% | 15.79% |
| | >3.5 | 89 (70.08%) | 57.30% | 42.70% |
* The ATI score can take values from 1.00 (low ATI) to 6.00 (high ATI).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Judek, J.R. Willingness to Use Algorithms Varies with Social Information on Weak vs. Strong Adoption: An Experimental Study on Algorithm Aversion. FinTech 2024, 3, 55-65. https://doi.org/10.3390/fintech3010004
