Article

Comparing Different Kinds of Influence on an Algorithm in Its Forecasting Process and Their Impact on Algorithm Aversion

1 Faculty of Management, Social Work and Construction, HAWK University of Applied Sciences and Art, Haarmannplatz 3, D-37603 Holzminden, Germany
2 Faculty of Business, Ostfalia University of Applied Sciences, Siegfried-Ehlers-Str. 1, D-38440 Wolfsburg, Germany
3 Faculty of Economic Sciences, Georg August University Göttingen, Platz der Göttinger Sieben 3, D-37073 Göttingen, Germany
* Author to whom correspondence should be addressed.
Businesses 2022, 2(4), 448-470; https://doi.org/10.3390/businesses2040029
Submission received: 22 August 2022 / Revised: 10 October 2022 / Accepted: 12 October 2022 / Published: 15 October 2022

Abstract:
Although algorithms make more accurate forecasts than humans in many applications, decision-makers often refuse to resort to their use. In an economic experiment, we examine whether the extent of this phenomenon, known as algorithm aversion, can be reduced by granting decision-makers the possibility to exert an influence on the configuration of the algorithm (an influence on the algorithmic input). In addition, we replicate the study carried out by Dietvorst et al. (2018), which shows that algorithm aversion recedes significantly if the subjects can subsequently change the results of the algorithm, even if only by a small percentage (an influence on the algorithmic output). The present study confirms that algorithm aversion is reduced significantly when there is such a possibility to influence the algorithmic output. However, exerting an influence on the algorithmic input seems to have only a limited ability to reduce algorithm aversion. A limited opportunity to modify the algorithmic output thus reduces algorithm aversion more effectively than the ability to influence the algorithmic input.

1. Introduction

In many domains, the adoption of algorithmic decision-making (ADM) has helped complete tasks more accurately, safely, and profitably [1,2,3,4,5]. Despite these recent successes, algorithm aversion remains a major barrier to the adoption of ADM systems [6,7]. If effective means can be found to overcome algorithm aversion and enable the implementation of powerful algorithms, quality of life and prosperity can be enhanced [8,9,10]. Allowing decision-makers to influence an algorithm and its prediction process has been shown to affect algorithm aversion [9]. However, it is still largely unclear which ways of influencing an algorithm are appropriate and in which step of the process decision-makers should be involved to effectively reduce their aversion. This study aims to fill this research gap by investigating different ways of influencing an algorithm and their effects on algorithm aversion in the context of an economic experiment. We draw on the research design of Dietvorst, Simmons, and Massey (2018), but also extend their work by introducing a novel method for influencing an algorithm and testing its effectiveness.
Businesses throughout the world are driving the digital transformation. Progress in the field of ADM has wide-ranging effects on our everyday lives and is bringing about fundamental changes in all fields of human life [11,12,13]. ADM systems make a considerable contribution towards tasks being completed faster and, above all, more cheaply [14]. In addition, algorithms can outperform humans (from lay persons to experts) in a multitude of areas and make more accurate predictions, for example in forecasting the performance of employees [15], the likelihood of ex-prisoners re-offending [16], or in making medical diagnoses [3,17,18,19,20].
Nevertheless, in certain fields, there is a lack of acceptance for the actual use of algorithms because subjects have reservations about them. This phenomenon, which is known as algorithm aversion, refers to the lack of trust in ADM systems which arises in subjects as soon as they recognize that the algorithms sometimes make inaccurate predictions [7,21,22]. We therefore focus on the issue of how algorithm aversion can be reduced and how the level of acceptance of algorithms can be increased.
In recent years, scholars have explored many ideas for reducing algorithm aversion. Some have proven effective, but others have not. For example, decision-making on behalf of others [23] has been shown to have no significant effect on algorithm aversion. Moreover, naming an algorithm has actually been shown to decrease willingness to use it [24].
On the other hand, incorporating the predictions of experts using an algorithm as an additional input variable of the algorithm has been shown to increase willingness to use it [25]. Moreover, it has been observed that a more precise representation of the algorithmic output [26] and additional information about the process of an algorithm [27] decrease algorithm aversion. In particular, the latter implies that subjects like to exert some kind of influence on an algorithm. However, many of these tested means of reducing aversion are costly and difficult to implement in real-world scenarios, which is why it remains an important task to continue the research.
Most notably, Dietvorst, Simmons, and Massey (2018) demonstrate a way to significantly reduce algorithm aversion. In their experiment, the subjects can either choose an algorithm or make their own forecasts. Some of the subjects are—if they choose to use an algorithm—allowed to subsequently change the preliminary forecast of the algorithm by a few percentage points up or down (we describe this in our study as an opportunity to influence the ‘algorithmic output’). When they have this opportunity to make retrospective changes to the forecasts, significantly more subjects are prepared to consult the algorithm for their forecasts than otherwise [9]. However, the impact of a slight influence on the configuration of the algorithm (an influence on the algorithmic input) has not been the focus of research, a gap that the present study aims to fill.
As long as the subjects are able to change the results of the algorithm (i.e., they have an influence on the algorithmic output), algorithm aversion can be significantly reduced. Decisions in favor of an algorithm are made more frequently if the users retain an element of control over it, whereby the extent to which they are able to modify the algorithm is irrelevant. Furthermore, users who can make slight modifications report that they are no less content with the forecasting process than users who can make unlimited changes. To sum up, users will deploy algorithms more often when they have the final say in how they deal with them [9]. So, is it crucial for lowering algorithm aversion that users are given an opportunity to influence the algorithmic output, or can algorithm aversion be generally reduced by providing a way of influencing the forecasting process?
Human decision-makers want to influence algorithms instead of being at the mercy of their calculations [28,29]. In other words, decision-makers need partial control over an algorithm in order to make a decision in favor of its use. Having real or at least perceived control over the decisions to be made satisfies the psychological needs and personal interests of users [30]. This feeling of control can arise either via a real understanding of the efficiency of an algorithm or via adaptations to the algorithmic decision-making process which have little or no influence on the functioning or level of performance of an algorithm [6]. In other words, granting a user control over decisions leads to a higher level of acceptance: if a recommendation algorithm suggests hotel rooms based solely on the person's previous search and purchasing behavior, the offers made are less readily accepted. However, if less than ideal offers are also included, levels of acceptance of the algorithm improve [31]. Participation in the decision-making process, or a belief that one can influence the decision-making process, can contribute towards the user exhibiting greater trust in a decision [32].
Nolan and Highhouse (2014) argue that allowing subjects to modify mechanical prediction practices may enhance their perception of autonomy and thus their intentions to use them [33]. In order to expand our understanding of algorithm aversion, we grant the subjects the opportunity to interact with an algorithm, not only by modifying its predictions afterwards, but also, adding to the existing literature, by giving them an influence on the weighting of the algorithm’s input variables. Analogous to the influence on the algorithmic output (both in the present study and in [9]), we keep the extent of the subjects’ intervention in the algorithmic input small. In this way the algorithm can almost reach its maximum level of performance; however, this minor intervention could be of great significance in overcoming algorithm aversion (cf. [6]). The present study is the first one to examine whether the opportunity to adjust the weighting of an algorithm’s input factors has an effect on its acceptance. In this study, it is observed whether influencing the algorithmic input can contribute towards a reduction of algorithm aversion in the same way as influencing the algorithmic output does.
The economic experiment extends our understanding of algorithm aversion. As in previous studies, subjects do not behave at all like homo economicus. However, their algorithm aversion can be mitigated. The ability to adjust the algorithmic output significantly increases the willingness to use an algorithm; the ability to adjust the algorithmic input does not work to the same extent. We therefore advise managers dealing with algorithms to create means of influencing the algorithmic output to encourage their customers to use algorithms more often.

2. Materials and Methods

Previous research indicates that economic agents interacting with ADM systems exhibit algorithm aversion and are reluctant to use them (for a synoptic literature review, see [6,21]). This behavior of not relying on an algorithm persists even when an algorithm would be more competent in fulfilling a task than other available alternatives [6,7,9,25,34,35]. For instance, economic agents are less likely to rely on share price forecasts when they have been drawn up by an algorithm instead of a human expert, which shows the phenomenon of algorithm aversion in the field of share price forecasts [36]. Other economic experiments examine perceived task objectivity and the human-likeness of an algorithm in the context of stock index forecasting and show that task objectivity affects the willingness to use algorithms of differing human-likeness [8]. The interaction of humans and algorithms is not only a subject in the field of share price forecasts, but is also linked to robo-advisors in financial market research [23].
The fact that algorithms can make more accurate predictions than human forecasters has already been shown on numerous occasions [4,5,18]. Thus, it is key to find ways to mitigate algorithm aversion so that economic agents can arrive at more successful and accurate forecasts. Algorithm aversion can be reduced by providing the opportunity to modify the algorithmic output, even when the possibilities for modification are severely limited [9].
In their literature review, Burton, Stein, and Jensen (2020) pose the question of whether the reduction of algorithm aversion by the modification of the algorithmic output can also be achieved by a modification of the algorithmic input. Even the illusion of having the freedom to act and make decisions could be a possible solution to overcome algorithm aversion [6]. Users who interact with algorithms often receive their advice from a black box whose workings are a mystery to them. They thus develop theories about which kinds of information an algorithm uses as input and how this information is exactly processed [37]. According to Colarelli and Thompson (2008), users need to at least have the feeling that they can exercise a degree of control in order to increase the acceptance of algorithms. This feeling of control can either come from a genuine understanding of how an algorithm works or by making modifications to the algorithmic decision-making process. Whether a genuine influence is exerted on the way the algorithm actually functions is not important here. It is only necessary to allow the users to have real or perceived control over decision-making in order to satisfy their need for a feeling of control [30].
Kawaguchi (2021) examined how adding an input variable—in this case the predictions made by the subjects—to an algorithm’s forecasting process influences algorithm aversion [25]. We draw on this approach and examine how an opportunity to influence the algorithmic input affects the willingness to use an algorithm. We give our subjects the possibility to influence the weighting of an input factor the algorithm uses for its predictions. In this way, we are testing an alternative approach to the reduction of algorithm aversion without influencing the algorithmic output. Since modification of the algorithmic output can also have a negative overall effect on forecasting performance, it is examined whether influencing the weighting of an input factor reduces algorithm aversion without allowing human modification of the algorithmic output. We do not want to deceive the subjects and thus give them—in the form of this input factor—the opportunity to exert an actual influence on the configuration of the forecasting computer. In this way, the subjects are given freedom to act in a limited way, which actually leads to slight differences in how the algorithm works. Thus, we address the issue of whether a general possibility to influence the algorithmic process is sufficient to reduce algorithm aversion, or whether an opportunity to influence the results themselves is necessary. We thus examine whether an opportunity to influence the weighting of the input variables of the algorithm (algorithmic input) can contribute towards a similar decrease in algorithm aversion as the opportunity to influence the algorithmic output.
To validate our results in light of previous research and to strengthen our findings, we first replicate the possibility of severely limited influence on algorithmic output [9]. We determine whether this measure can also contribute to a reduction of algorithm aversion in the domain of share price forecasts when a choice is made between an algorithm and a subject’s own forecasts. Hypothesis 1 is therefore: The proportion of decisions in favor of the algorithm will be higher when there is a limited possibility to influence the algorithmic output than when no influence is possible. Hence, null hypothesis 1 is: The proportion of decisions in favor of the algorithm will not be higher when there is a limited possibility to influence the algorithmic output than when no influence is possible.
Other studies suggest that an influence on the input of an algorithm may also reduce the extent of algorithm aversion [6,25,33,38]. In order to examine whether the possibility to influence the weighting of an algorithm’s input variables can have an effect on the willingness to use the algorithm, and thus on algorithm aversion, without the negative effects on performance of influencing algorithmic output, we formulate hypothesis 2 as follows: The proportion of decisions in favor of the algorithm will be higher when there is a limited possibility to influence the algorithmic input than when no influence is possible. Null hypothesis 2 is therefore: The proportion of decisions in favor of the algorithm will not be higher when there is a limited possibility to influence the algorithmic input than when no influence is possible.
In order to answer our research question, an economic experiment is carried out from 17 to 27 March 2021 in the Ostfalia Laboratory of Experimental Economic Research (OLEW) with students of the Ostfalia University of Applied Sciences in Wolfsburg. In 51 sessions, a total of 157 subjects take part in the experiment. Of these, 118 subjects (75.16%) are male and 39 subjects (24.84%) are female. The subjects are distributed across the faculties as follows: 66 subjects (42.04%) study at the Faculty of Vehicle Technology, 56 subjects (35.67%) at the Faculty of Business, 9 subjects (5.73%) at the Faculty of Health Care, and a further 26 subjects (16.56%) at other faculties based at other locations of the Ostfalia University of Applied Sciences. Their average age is 23.6 years.
The experiment is programmed with z-Tree (cf. [39]). In the OLEW, there are twelve computer workplaces. However, only a maximum of four are used per session. This ensures that, in line with the measures to contain the COVID-19 pandemic, a considerable distance can be maintained between the subjects. The workplaces in the laboratory are also equipped with divider panels, which makes it possible to completely separate the subjects from each other. The experiments are constantly monitored by the experimenter so that communication between the subjects and the use of prohibited aids (such as smartphones) can be ruled out. Overall, a total of 51 sessions with a maximum of four subjects per session are carried out. A session lasts an average of 30 min. The detailed results of the experiment are available as Supplementary Material at the end of the article.
In our study, the subjects are asked to forecast the exact price of a share in ten consecutive periods (Appendix A). Here, the price of the share is always the result of four influencing factors (A, B, C, and D), which are supplemented by a random influence (Ɛ) (see [40,41,42,43]). First of all, the subjects are familiarized with the scenario and are informed that the influencing factors A, C, and D have a positive effect on the share price. This means that—other things being equal—when these influencing factors rise, the share price will also rise. The influencing factor B, on the other hand, has a negative effect on the share price. This means that—other things being equal—when the influencing factor B rises, the share price will fall (Table 1). In addition, the subjects are informed that the random influence (Ɛ) has an expected value of zero. However, the random influence can lead to larger or smaller deviations from the share price level which the four influencing factors would suggest.
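As a rough illustration of this data-generating process, the share price can be thought of as a linear combination of the four influencing factors plus a zero-mean disturbance. The base level, coefficients, and noise scale in the following sketch are assumptions for illustration only, not the parameterization actually used in the experiment.

```python
import random

def simulate_price(a, b, c, d, base=100.0, noise_sd=15.0, rng=random):
    """Illustrative share price: A, C, and D raise the price, B lowers it,
    plus a zero-mean random disturbance (epsilon). All coefficients here
    are assumptions, not the experiment's actual values."""
    fundamentals = base + 1.0 * a - 1.0 * b + 1.0 * c + 1.0 * d
    return fundamentals + rng.gauss(0.0, noise_sd)  # E[epsilon] = 0
```

Setting the noise scale to zero recovers the purely fundamental price, which mirrors how subjects can reason about the direction of each factor's influence while the random term produces the larger or smaller deviations described above.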
The subjects are informed of the four influencing factors before each of the ten rounds of forecasting. In addition, they always receive a graphic insight into the historical development of the share price, the influencing factors, and the random influence in the last ten periods. In this way, the subjects can recognize in a direct comparison how the levels of the four influencing factors have an effect on the share price during the individual rounds of forecasting. Through test questions we ensure that all subjects have understood this (Appendix B).
The payment structure provides for a fixed show-up fee of EUR 4 and a performance-related element. The level of the performance-related payment is dependent on the precision of the individual share price forecasts, whereby the greater the precision of the forecasts, the higher the payment (Table 2). The subjects can thus obtain a maximum payment of EUR 16 (EUR 4 show-up fee plus EUR 12 performance-related payment from ten rounds of forecasting).
In order to help them make the share price forecasts, a forecasting computer (algorithm) is made available to the subjects. The subjects are informed that in the past the share price forecasts of the forecasting computer have achieved a payment of at least EUR 0.60 per forecast in 7 out of 10 cases. The subjects are thus aware of the fact that the algorithm they are using does not function perfectly. In order to make its forecasts, the algorithm uses the information which it has been given on the fundamental influencing factors, the direction and strength of the influence, and the random influence (Ɛ) in a way that maximizes the accuracy and thus the expected payoff. In this way, however, it by no means achieves ‘perfect’ forecasts (for a detailed description of how the algorithm works, see Appendix D). Based on the same information and the historical share prices, the subjects can make their own assessments. They would, however, be wrong to assume that they can outperform the algorithm in this way. Following the suggestions of the algorithm would thus seem to be the more sensible option. Before making their first share price forecast, the subjects make a one-off decision on whether they wish to base their payment for the subsequent ten rounds of forecasting on their own forecasts or on those made by the forecasting computer. Our set-up is oriented towards that used in the study carried out by Dietvorst, Simmons, and Massey (2018). Algorithm aversion is thus modeled as the behavior of not choosing an ADM system that would increase subjects’ payoff.
The experiment is carried out in three treatments. The 157 participants are divided evenly over the three treatments: 52 subjects each carry out Treatments 1 and 2, and 53 subjects carry out Treatment 3. The distributions of faculty affiliation and gender are similar across the three treatments. The study uses a between-subjects design: each subject is assigned to only one treatment and encounters the respective decision-making situation. In Treatment 1 (no opportunity to influence the algorithm), the subjects make the decision (once only) whether they want to use their own share price forecasts as the basis for their payment or whether they want to use the share price forecasts made by the forecasting computer. Regardless of this decision, all subjects submit their own forecasts without having access to the forecast of the algorithm (Figure A2 in Appendix C). This obligation to submit one’s own forecasts even when choosing the algorithm follows the study by Dietvorst, Simmons, and Massey (2018). If the subjects choose the algorithm for determining their bonus, their payoff depends solely on the algorithm’s forecasts, not on the forecasts made by the subjects themselves.
With Treatment 2 (opportunity to influence the algorithmic output), we intend to replicate the results of Dietvorst, Simmons, and Massey (2018). To this end, the subjects make the decision (once only) whether they solely want to use their own share price forecasts as the basis for their payment or whether they solely wish to use the share price forecasts made by the forecasting computer (which, however, can be adjusted by up to +/− EUR 5) as the basis for their performance-related payment. The algorithmic forecast is only made available to the subjects if they decide in favor of the forecasting computer (Figure A3).
In Treatment 3, we introduce the opportunity to influence the configuration of the algorithm (algorithmic input). Before handing in their first share price forecast, the subjects again make the decision (once only) whether they want to solely use their own share price forecasts as the basis for their performance-related payment or whether they want to solely use the share price forecasts made by the forecasting computer. If they decide in favor of the share price forecasts of the forecasting computer, the subjects receive a one-off opportunity to influence the configuration of the algorithm (Figure A4). To this end, they are given a more detailed explanation. The algorithm uses data on four different factors which influence the formation of the share price (A, B, C, and D). The last of these four influencing factors is identified as the sentiment of capital market participants and can be taken into account to various extents by the forecasting computer. To do so, the subjects can choose from four different levels. Whereas variant D1 attaches relatively little importance to sentiment, the extent to which sentiment is taken into account in the other variants increases continuously and is relatively strong in variant D4 (Figure 1).
Subjects who decide to use the forecasting computer in Treatment 3, and thus receive the opportunity to influence the configuration of the algorithm, have a one-off chance to change the weighting of the input variable D of the algorithm (Figure A1 in Appendix A.3). This occurs solely by means of their choice of which degree of sentiment should be taken into account (variant D1, D2, D3, or D4).
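A minimal sketch of this one-off configuration choice: the variant selected by the subject rescales only the weight on the sentiment factor D, while the weights on A, B, and C stay fixed. The numerical weights and coefficients below are hypothetical, chosen solely to illustrate the mechanism.

```python
# Hypothetical sentiment weights: variant D1 weights sentiment least, D4 most.
SENTIMENT_WEIGHTS = {"D1": 0.25, "D2": 0.50, "D3": 0.75, "D4": 1.00}

def algorithmic_forecast(a, b, c, d, variant):
    """Forecast with fixed (illustrative) weights on A, B, and C and the
    subject-chosen weight on the sentiment factor D."""
    return 100.0 + 1.0 * a - 1.0 * b + 1.0 * c + SENTIMENT_WEIGHTS[variant] * d
```

Because the choice rescales only one of four inputs, the algorithm stays close to its maximum level of performance regardless of the variant selected, which mirrors the deliberately small scope of intervention described above.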

3. Results

The results show that the various possibilities to influence the forecasting process lead to different decisions on the part of the subjects. In Treatment 1 (no influence possible), 44.23% of the subjects opt for the use of the algorithm. The majority of the subjects here (55.77%) put their faith in their own forecasting abilities. In Treatment 2 (opportunity to influence the algorithmic output), on the other hand, 69.23% of the subjects decide to use the forecasting computer, and 30.77% of the subjects choose to use their own forecasts. In Treatment 3 (opportunity to influence the algorithmic input), 58.49% of the subjects decide to use the forecasting computer, and 41.51% of the subjects choose to use their own forecasts (Figure 2).
On average, across all three treatments, the subjects obtain a payment of EUR 9.57. However, there are large differences in the amounts of the payment depending on the strategy chosen. Subjects who choose their own forecasts achieve an average total payment of EUR 8.94. When the algorithm is chosen, the average payment in all three treatments is between EUR 9.99 and EUR 10.11 (Figure 3). The Wilcoxon rank-sum test shows that the average payment—regardless of the treatment—is significantly higher if the algorithm is used as the basis of the forecasts (T1: z = 4.27, p ≤ 0.001; T2: z = 3.25, p ≤ 0.001; T3: z = 5.27, p ≤ 0.001). No matter which treatment is involved, it is thus clearly in the financial interests of the subjects to put their faith in the algorithm. The algorithm consistently outperforms human judgment, yet, across all treatments, 42.68% of the subjects refrain from using it. In our study too, the phenomenon of algorithm aversion is thus evident in the field of share price forecasts [8,36].
We perform the Chi-square test on subjects’ decisions between the algorithm and their own forecasts among the individual treatments. Whereas in Treatment 1 a total of 44.23% of the decisions are in favor of the algorithm, 69.23% of the subjects who can make changes to the algorithmic output (Treatment 2) decide to use the forecasting computer (𝛘2 (N = 104) = 6.62, p ≤ 0.010). Null hypothesis 1 thus has to be rejected; the opportunity to modify the algorithmic output by up to +/− EUR 5 leads to the subjects selecting the forecasting computer significantly more frequently to determine their payment.
When subjects are given the opportunity to influence the algorithmic input (Treatment 3), the majority of the subjects (58.49%) choose to use the forecasting computer (𝛘2 (N = 105) = 2.14, p ≤ 0.144). Nevertheless, null hypothesis 2 is not rejected. The possibility to influence the algorithmic input (via the extent to which the influencing factor D is taken into account) does not lead to the subjects selecting the forecasting computer significantly more often as the basis for their performance-related payment.
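Both reported test statistics can be reproduced from the choice counts implied by the percentages (23 of 52 subjects in Treatment 1, 36 of 52 in Treatment 2, and 31 of 53 in Treatment 3 choose the algorithm) using a Pearson chi-square on the 2×2 contingency table without continuity correction:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]: rows = treatments,
    columns = algorithm chosen / own forecast chosen."""
    n = a + b + c + d
    chi2 = 0.0
    for obs, row, col in ((a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)):
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    return chi2

print(round(chi2_2x2(23, 29, 36, 16), 2))  # T1 vs. T2 -> 6.62
print(round(chi2_2x2(23, 29, 31, 22), 2))  # T1 vs. T3 -> 2.14
```

The first comparison exceeds the 5% critical value of 3.84 for one degree of freedom, while the second does not, which is consistent with rejecting null hypothesis 1 but not null hypothesis 2.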
Each treatment comprises 52 or 53 participants, for a total of 157. The 67 subjects who, regardless of which treatment they are in, use their own forecasts as the basis of their payment diverge by an average of EUR 18.28 from the actual share price and thus achieve an average bonus of EUR 0.49 per round of forecasting. The 90 subjects who decide to use the forecasts of the forecasting computer exhibit a lower average forecasting error independently of which treatment they are in. The average bonus and the average payment of the subjects who use the forecasting computer are also higher than those of subjects who rely on their own forecasting abilities. Because of the different ways in which the algorithm can be influenced, the average forecast error, average bonus per round, and average total payment also vary between treatments for those subjects who rely on the ADM (Table 3).
In Treatment 2, the subjects are given the opportunity to adapt the algorithmic output in each round of forecasting by up to +/− EUR 5. The subjects do not fully exploit the scope granted to them to exert an influence on the algorithm and make an average change to the algorithmic forecast of EUR 2.11. In Treatment 3, the subjects are given a one-off opportunity via the influencing factor D (sentiment) to exert an influence on the configuration of the algorithm (input). Eight subjects select variant D1, which takes sentiment into account to a minor extent. Eleven subjects choose to take sentiment into account to a moderate extent, seven to a considerable extent, and five to a great extent.
If the forecast errors are viewed in isolation, a similar picture is revealed. Regardless of whether subjects use their own forecasts or the forecasts of the forecasting computer to determine their payment, the average forecast error in Treatment 1 (no influence possible) is higher than in the other two treatments, which offer the subjects the opportunity to influence the algorithm. Whereas the forecasts in Treatment 1 deviate by an average of EUR 16.18 from the resulting share price, the average forecast error is EUR 15.14 in Treatment 2 and EUR 15.15 in Treatment 3. That those subjects who are given the opportunity to influence the algorithm are more successful is shown by their higher average bonus and higher average overall payment (Table 4).

4. Discussion

Algorithm aversion is characterized by the fact that it mostly occurs when algorithms recognizably do not function perfectly and prediction errors occur [7]. Even when it is recognizable that the algorithm provides significantly more reliable results than humans (lay persons as well as experts), many subjects are still reluctant to trust the algorithm [9]. Given the advancing technological transformation and the increasing availability of algorithms, it is essential to enhance our understanding of algorithm aversion and to study ways to mitigate it.
Previous research has shown that giving users the ability to influence the algorithmic output in the form of minimal adjustments to the forecasts contributes to a significant reduction of algorithm aversion [9]. This groundbreaking finding is confirmed in the context of share price forecasts in the present paper. As shown in our introduction, a rich literature focusing on further ways to mitigate algorithm aversion has emerged as a reaction to this interesting concept [23,24,26,27].
Most noteworthy in the context of our research, Kawaguchi (2021) examined the effect of adding users’ individual forecasts to an algorithm as an additional input variable [25], and Jung and Seiter (2021) examined the effect of having subjects self-select the variables an algorithm should consider [38]. Both studies report significant changes in the extent of algorithm aversion due to the manipulation of the input they investigate. The results of the present study are in line with previous findings regarding influence on an algorithm’s output [9] but point in a different direction regarding influence on an algorithm’s input [25,38].
The algorithm used in our study does not give perfect forecasts, and if there are no opportunities to influence the algorithm’s decision-making process, the majority of users choose not to use the forecasting computer. But the ability to influence algorithmic output (replicated from [9]) leads subjects to use the algorithm significantly more often compared to the control treatment, even when the amount of adjustment allowed in the process is relatively small (T1 vs. T2). By using the algorithm more frequently, the subjects also enhance their financial performance.
Our study contributes to the scientific discourse primarily by testing the possibility of influencing the weighting of the variables an algorithm uses in its forecasting process (the algorithmic input). Even though financial performance is slightly enhanced, there is no significantly higher willingness to use the algorithm compared to the control treatment when there is a possibility to influence the input (T1 vs. T3). The assumption of Nolan and Highhouse (2014) that the intention to use a forecasting aid can be improved by the possibility of influencing its configuration is not confirmed [33]. We also cannot confirm Burton, Stein, and Jensen’s (2020) conjecture that changing an ADM’s input mitigates algorithm aversion, at least for a limited influence on the weighting of the algorithmic input [6]. The differences between our results and those of Kawaguchi (2021) and Jung and Seiter (2021) are likely due to the fact that the extent of the subjects’ influence on the input of the algorithm is much smaller in the present study [25,38]. Another crucial difference is that the input factors of the algorithm in our study are predetermined, and only their weighting can be changed.
We examine whether major reductions in algorithm aversion arise because the subjects can exercise an influence on the process of algorithmic decision-making in general, or only because they can influence its forecasts. We expand the research on algorithm aversion by showing that a general opportunity to influence an algorithm is evidently not sufficient to significantly reduce algorithm aversion. Subjects want to retain control over the results and to have the final say in the decision-making process, even if this intervention is subject to considerable restrictions. Since adjusting the input does not achieve a significantly higher willingness to use algorithms, we recommend focusing on their output in order to identify further possibilities for mitigating algorithm aversion. The point in the process of algorithmic decision-making at which an intervention is allowed thus appears to be of considerable relevance.

4.1. Implications

Our study has interesting implications for real-life situations. The overall financial benefit can be maximized by allowing influence on the algorithmic output. Decision-makers tend to trust an algorithm more if they can keep the upper hand in the decision-making process, even when the possibilities to exert an influence are limited. The average quality of the forecasts is slightly reduced by the changes made by the decision-maker (Table 3), but this is over-compensated for by a significantly higher utilization rate of the still clearly superior algorithm, which, in a comparison between the treatments, leads to a higher average total payment (Table 4). The opportunity to influence the algorithmic input has a similar effect with regard to the overall pecuniary benefit. The forecasts made after the subjects have changed the algorithm actually exhibit a slightly lower forecast error and a somewhat higher bonus. To a similar degree to which the subjects do not fully take advantage of the opportunity to influence the algorithmic output, they also fail to put their faith in the algorithm. Their average payment is nevertheless significantly higher than that of subjects who cannot influence the algorithm. From this we conclude that, in real-world settings, users should not be involved in the formulas or configuration options of algorithms, but should rather be given the opportunity to influence the output, for example through override functions, veto rights, or emergency stop buttons.

4.2. Limitations

Our study also has some limitations which should be noted. We give the subjects a genuine opportunity to influence the algorithmic input. However, we also make it clear in the instructions that the influencing factor D, which can be taken into account to different degrees, only has a moderate influence on the formation of the share price. The influencing factors A, B, and C, on the other hand, have a considerable influence. This circumstance could contribute towards the subjects not developing enough trust in their opportunity to influence the input and thus tending to rely on their own forecasts. In addition, our results were obtained in the context of share price forecasts. The validity of our results for the many other areas of ADM systems has yet to be verified.

4.3. Future Research

Future research may investigate further possibilities to reduce algorithm aversion. This study has again shown that granting subjects the opportunity to influence the algorithmic output can effectively reduce algorithm aversion. However, there is a risk that the forecasting performance of the algorithm deteriorates as a result of these modifications. For this reason, it is important to examine alternative ways of reducing algorithm aversion. Our study has shown that modifying the algorithmic input to a small extent is only of limited use here. It would be interesting to see what happens when the possible adjustments to the algorithmic input, and thus the perceived control over the algorithm, are greater. In our study, opportunities to influence the algorithmic input cannot reduce algorithm aversion to the same extent as giving subjects the chance to influence the algorithmic output. One possible approach for future work could be to merely give users the illusion of having control over the algorithmic process. In this way, algorithm aversion could be decreased without a simultaneous reduction in forecasting quality.

5. Conclusions

In an economic experiment we examine whether providing a possibility to influence the algorithmic input contributes towards mitigating algorithm aversion. We ask subjects to make forecasts of share prices. In return, they receive a performance-related payment which increases in line with the precision of their share price forecasts. In three treatments, the subjects have a forecasting computer (algorithm) available to them that provides different options for influencing the process: In Treatment 1, we do not grant the subjects any opportunity to influence the forecasting process. In Treatment 2, the subjects can influence the algorithmic output, and in Treatment 3, they can influence the algorithmic input. In line with the literature on algorithm aversion, we show that even a considerably limited opportunity to influence the algorithmic output is able to reduce algorithm aversion significantly. However, being able to influence the algorithmic input does not lead to a significant reduction in algorithm aversion. Granting subjects a general possibility to influence the algorithmic decision-making process is therefore not a crucial factor in reducing algorithm aversion. What does lead to a significantly higher rate of using the forecasting computer is the opportunity to influence the algorithmic output. For this reason, further efforts to mitigate algorithm aversion for real-world events should focus on the possibilities of adjusting the algorithmic output.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/businesses2040029/s1, Table S1: Detailed results of the economic experiment.

Author Contributions

Conceptualization, Z.G., J.R.J., M.L. and M.S.; Software, J.R.J. and M.L.; Validation, Z.G., J.R.J., M.L. and M.S.; Formal analysis, Z.G., J.R.J., M.L. and M.S.; Data curation, J.R.J. and M.L.; Writing—original draft preparation, J.R.J., M.L. and M.S.; Writing—review and editing, Z.G., J.R.J., M.L. and M.S.; Visualization, J.R.J. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Research Officer and the Dean of the Faculty of Business, who are responsible for the ethical review of research at the Faculty of Business at Ostfalia University (Ostfalia Laboratory for Experimental Economics Ethics Statement, 5 September 2022).

Informed Consent Statement

All subjects gave their informed consent for inclusion before they participated in the study. The involvement of students as test subjects in economic experiments in the OLEW laboratory is regularly approved by the dean’s office of the business faculty and the research commission of the Ostfalia University of Applied Sciences. Since the test subjects only take part in an economic experiment without personal reference and without profound consequences for themselves at a computer workplace, no further ethical examination is required apart from the examination by the dean’s office of the business faculty and the research commission of the Ostfalia University of Applied Sciences. All participants were at least 18 years of age at the time of the survey and are therefore considered to be of legal age in Germany. The participants had confirmed their consent by registration in the online portal of Ostfalia University.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Instructions for the Game

Appendix A.1. Instructions for the Game in Treatment 1 (No Opportunity to Influence the Algorithm)

The Game

In this game, you are requested to make forecasts on the future trend of a share price. You will forecast the price movements of a share (share Z) in 10 periods.
The price of share Z is always the result of four influencing factors (A, B, C, and D) and a random influence (Ɛ). The influencing factors are announced before every round of forecasting. In addition, you receive an insight into the past development of the share price, the influencing factors, and the random influence in the last ten periods.
The influencing factors A, C, and D have a positive effect on the share price. This means that when these influencing factors rise, the share price will also tend to rise (Table A1).
The influencing factor B has a negative effect on the share price. This means that when the influencing factor B rises, the share price will tend to fall (Table A1).
Table A1. Influencing factors in the formation of the share price.
Influencing Factor | Influence | Strength of the Influence
A | Positive | Strong
B | Negative | Strong
C | Positive | Strong
D | Positive | Medium
The random influence Ɛ has an expected value of 0, but it can lead to smaller or larger deviations of the share price from the level that the influencing factors would suggest.
You can choose whether your own share price forecasts or the share price forecasts of a forecasting computer (algorithm) are used to determine your payment. Regardless of your choice, you will make your own share price forecasts.
You will receive a show-up fee of EUR 4 for participating. In addition, you receive a performance-related payment: the more accurate your share price forecasts are, the higher your payment. For each forecast made, you receive:
  • EUR 1.20 in the case of a deviation of a maximum of EUR 5 of the forecast from the actual share price;
  • EUR 0.90 in the case of a deviation of a maximum of EUR 10 of the forecast from the actual share price;
  • EUR 0.60 in the case of a deviation of a maximum of EUR 15 of the forecast from the actual share price;
  • EUR 0.30 in the case of a deviation of a maximum of EUR 20 of the forecast from the actual share price.
In the past, the share price forecasts of the algorithm have achieved a payment of at least EUR 0.60 per forecast in seven out of 10 cases.

Procedure

After reading the instructions and answering the test questions, you initially choose whether your own share price forecasts or the forecasts of the forecasting computer (algorithm) are used to determine your payment.
Following this, you will see the price history of share Z, the trend of the influencing factors, and the trend of the random influence Ɛ in the last ten periods. In addition, you will receive the influencing factors for the next period. You will be asked to forecast the trend of the share price in the next period.
After making your share price forecast, you will see the actual price of share Z. Following this, you will hand in your share price forecasts for the next period. A total of ten rounds are played.
You have a time limit of two minutes available for handing in each share price forecast.

Information

  • Please remain quiet during the experiment!
  • Please do not look at your neighbor’s screen!
  • Apart from a pen/pencil and a pocket calculator, no other aids are permitted (smartphones, smart watches etc.).
  • Only use the sheet of white paper issued to you for your notes.

Appendix A.2. Instructions for the Game in Treatment 2 (Opportunity to Influence the Algorithmic Output)

The Game

In this game, you are requested to make forecasts on the future trend of a share price. You will forecast the price movements of a share (share Z) in 10 periods.
The price of share Z is always the result of four influencing factors (A, B, C, and D) and a random influence (Ɛ). The influencing factors are announced before every round of forecasting. In addition, you receive an insight into the past development of the share price, the influencing factors, and the random influence in the last ten periods.
The influencing factors A, C, and D have a positive effect on the share price. This means that when these influencing factors rise, the share price will also tend to rise (Table A2).
The influencing factor B has a negative effect on the share price. This means that when the influencing factor B rises, the share price will tend to fall (Table A2).
Table A2. Influencing factors in the formation of the share price.
Influencing Factor | Influence | Strength of the Influence
A | Positive | Strong
B | Negative | Strong
C | Positive | Strong
D | Positive | Medium
The random influence Ɛ has an expected value of 0, but it can lead to smaller or larger deviations of the share price from the level that the influencing factors would suggest.
You can choose the basis that is used to determine your payment:
  • Either you can forecast the future share price yourself and forego the use of a forecasting computer (algorithm);
  • Or you can use the forecasts of the forecasting computer. If you decide to use the forecasting computer’s forecasts (algorithm), you are not bound to the exact forecast provided by the computer. You can change the computer’s proposal by up to +/− EUR 5.
You will receive a show-up fee of EUR 4 for participating. In addition, you receive a performance-related payment: the more accurate your share price forecasts are, the higher your payment. For each forecast made, you receive:
  • EUR 1.20 in the case of a deviation of a maximum of EUR 5 of the forecast from the actual share price;
  • EUR 0.90 in the case of a deviation of a maximum of EUR 10 of the forecast from the actual share price;
  • EUR 0.60 in the case of a deviation of a maximum of EUR 15 of the forecast from the actual share price;
  • EUR 0.30 in the case of a deviation of a maximum of EUR 20 of the forecast from the actual share price.
In the past, the share price forecasts of the algorithm have achieved a payment of at least EUR 0.60 per forecast in seven out of 10 cases.

Procedure

After reading the instructions and answering the test questions, you initially choose which basis is used to determine your payment. You can forecast the future share prices without the help of the forecasting computer (algorithm). Or you can use the forecasts of the forecasting computer and change them by up to +/− EUR 5.
Following this, you will see the price history of share Z, the trend of the influencing factors, and the trend of the random influence Ɛ in the last ten periods. In addition, you will receive the influencing factors for the next period. You will be asked to forecast the trend of the share price in the next period.
After making your share price forecast, you will see the actual price of share Z. Following this, you will hand in your share price forecasts for the next period. A total of ten rounds are played.
You have a time limit of two minutes available for handing in each share price forecast.

Information

  • Please remain quiet during the experiment!
  • Please do not look at your neighbor’s screen!
  • Apart from a pen/pencil and a pocket calculator, no other aids are permitted (smartphones, smart watches etc.).
  • Only use the sheet of white paper issued to you for your notes.

Appendix A.3. Instructions for the Game in Treatment 3 (Opportunity to Influence the Algorithmic Input)

The Game

In this game, you are requested to make forecasts on the future trend of a share price. You will forecast the price movements of a share (share Z) in 10 periods.
The price of share Z is always the result of four influencing factors (A, B, C, and D) and a random influence (Ɛ). The influencing factors are announced before every round of forecasting. In addition, you receive an insight into the past development of the share price, the influencing factors, and the random influence in the last ten periods.
The influencing factors A, C, and D have a positive effect on the share price. This means that when these influencing factors rise, the share price will also tend to rise (Table A3).
The influencing factor B has a negative effect on the share price. This means that when the influencing factor B rises, the share price will tend to fall (Table A3).
Table A3. Influencing factors in the formation of the share price.
Influencing Factor | Influence | Strength of the Influence
A | Positive | Strong
B | Negative | Strong
C | Positive | Strong
D | Positive | Medium
The random influence Ɛ has an expected value of 0, but it can lead to smaller or larger deviations of the share price from the level that the influencing factors would suggest.
You can choose whether your own share price forecasts or the share price forecasts of a forecasting computer (algorithm) are used to determine your payment. Regardless of your choice, you will make your own share price forecasts.
If you decide to use the forecasting computer’s forecasts (algorithm), you have the opportunity to influence the design of the algorithm.
As mentioned above, the influencing factor D also has an effect on the formation of the price alongside the influencing factors A, B, and C. The influencing factor D is the sentiment of capital market participants. The influencing factor D can be taken into account to differing extents (D1, D2, D3, or D4) (Figure A1). You decide which of these four variants should be taken into account by the forecasting computer (algorithm).
Figure A1. Variants of the influencing factor D (sentiment).
Businesses 02 00029 g0a1
You will receive a show-up fee of EUR 4 for participating. In addition, you receive a performance-related payment: the more accurate your share price forecasts are, the higher your payment. For each forecast made, you receive:
  • EUR 1.20 in the case of a deviation of a maximum of EUR 5 of the forecast from the actual share price;
  • EUR 0.90 in the case of a deviation of a maximum of EUR 10 of the forecast from the actual share price;
  • EUR 0.60 in the case of a deviation of a maximum of EUR 15 of the forecast from the actual share price;
  • EUR 0.30 in the case of a deviation of a maximum of EUR 20 of the forecast from the actual share price.
In the past, the share price forecasts of the algorithm have achieved a payment of at least EUR 0.60 per forecast in seven out of 10 cases.

Procedure

After reading the instructions and answering the test questions, you initially choose whether your own share price forecasts or the forecasts of the forecasting computer (algorithm) are used to determine your payment.
Following this, you will see the price history of share Z, the trend of the influencing factors, and the trend of the random influence Ɛ in the last ten periods. In addition, you will receive the influencing factors for the next period. You will be asked to forecast the trend of the share price in the next period.
After making your share price forecast, you will see the actual price of share Z. Following this, you will hand in your share price forecasts for the next period. A total of ten rounds are played.
You have a time limit of two minutes available for handing in each share price forecast.

Information

  • Please remain quiet during the experiment!
  • Please do not look at your neighbor’s screen!
  • Apart from a pen/pencil and a pocket calculator, no other aids are permitted (smartphones, smart watches etc.).
  • Only use the sheet of white paper issued to you for your notes.

Appendix B. Test Questions

Test question 1: For how many periods should a share price forecast be made?
(a)
5.
(b)
10. (correct)
(c)
15.
Test question 2: On which influences is the share price dependent?
(a)
Influencing factors A and B, as well as the random influence.
(b)
Influencing factors A, B, and C, as well as the random influence.
(c)
Influencing factors A, B, C, and D, as well as the random influence. (correct)
Test question 3: Which alternatives do you have when submitting your forecast?
(a)
I can only submit my own forecasts.
(b)
I can either submit my own forecasts or use a forecasting computer (algorithm). (correct)
(c)
I can either submit my own forecasts, use a forecasting computer, or consult a financial expert.
Test question 4: How much is the payment for a forecast which deviates no more than EUR 15 from the actual price?
(a)
EUR 1.20
(b)
EUR 0.90
(c)
EUR 0.60 (correct)

Appendix C. Screens

Figure A2. Screen when submitting one’s own forecasts (Treatments 1, 2, and 3).
Businesses 02 00029 g0a2
Figure A3. Screen when influencing the algorithmic output (Treatment 2).
Businesses 02 00029 g0a3
Figure A4. Screen when influencing the algorithmic input (Treatment 3).
Businesses 02 00029 g0a4

Appendix D. The Functioning of the Algorithm

The mechanism with which the share price is formed functions as follows:
Kt = 7A − 6B + 5C + 2D + Ɛ
The levels of the influencing factors A, B, C, and D are announced before every round of forecasting. The level of the random influence is not announced; what is known, however, is that the random influence has an expected value of 0. The algorithm used in this experiment is a system that exploits the given information ideally through statistical processes. In every round, the algorithm inserts the values of the four influencing factors A, B, C, and D into the formula for the formation of the price. Because the subjects can influence the algorithmic input in Treatment 3, the weighting of the influencing factor D can diverge somewhat there. For the random influence, the algorithm assumes the expected value of EUR 0. The result of this equation is the forecast of the algorithm Pt (see Table A4). In period 1, the algorithm calculates as follows:
P1 = 7 × 14 − 6 × 5 + 5 × 5 + 2 × 2 + 0 = 97
For the calculation of the actual price, the random influence also has an effect. In period 1, it has a value of EUR+14. The actual price is thus calculated as follows:
K1 = 7 × 14 − 6 × 5 + 5 × 5 + 2 × 2 + 14 = 111
The difference between the actual share price Kt and the forecast of the algorithm Pt is the forecast error. It determines the amount of the bonus for the current forecasting round, as illustrated in Table A4. For a forecast whose forecast error lies within the interval 10 < |Kt − Pt| ≤ 15, for example, there is a bonus of EUR 0.60.
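The price formation mechanism, the algorithm's forecast, and the bonus schedule described above can be sketched in Python (a minimal illustration; the function names are our own):

```python
def algorithm_forecast(a, b, c, d):
    # The algorithm inserts the factor values into the price formula
    # and assumes the expected value of 0 for the random influence.
    return 7 * a - 6 * b + 5 * c + 2 * d

def actual_price(a, b, c, d, eps):
    # The actual price additionally includes the realized random influence.
    return 7 * a - 6 * b + 5 * c + 2 * d + eps

def bonus(forecast, price):
    # Payment schedule from the instructions (EUR per forecast).
    error = abs(price - forecast)
    if error <= 5:
        return 1.20
    if error <= 10:
        return 0.90
    if error <= 15:
        return 0.60
    if error <= 20:
        return 0.30
    return 0.0

# Period 1 from Table A4: A=14, B=5, C=5, D=2, random influence +14.
p1 = algorithm_forecast(14, 5, 5, 2)  # -> 97
k1 = actual_price(14, 5, 5, 2, 14)    # -> 111
print(p1, k1, bonus(p1, k1))          # forecast error of 14 yields EUR 0.60
```

Running the example reproduces the period-1 row of Table A4: a forecast of EUR 97, an actual price of EUR 111, and a bonus of EUR 0.60.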
Table A4. Illustration of the modus operandi of the algorithm, how the share price is formed, and the calculation of the bonus.
Period | Influencing Factors (A, B, C, D) | Forecast of the Algorithm Pt | Random Influence | Actual Price Kt | Forecast Error | Bonus
1 | 14, 5, 5, 2 | EUR 97 | +EUR 14 | EUR 111 | EUR 14 | EUR 0.60
In practice, one can see that perfect share price forecasts are not possible, even with knowledge of the most important influencing factors. On the contrary: share price trends have a number of similarities with random processes. This circumstance is taken into account by introducing the random influence. The random influence has the effect that the algorithm cannot make perfect forecasts. The forecast error of the algorithm thus corresponds to the random influence.
In this economic experiment, the random influence consistently lies within the interval −EUR 30 ≤ Ɛ ≤ EUR 30. It is always a whole number without decimal places. The exact distribution is described in Table A5. The area −EUR 15 ≤ Ɛ ≤ EUR 15 (grey background) has a cumulative probability of 70%. For a forecast with a maximum forecasting error of EUR 15 there is a payment of EUR 0.60. In this way it can be ensured—as stated in the instructions—that the forecasts of the algorithm lead to a payment of at least EUR 0.60 in 70% of cases.
Table A5. Distribution of the random influence, which has an effect on the share price.
Level of the Random Influence | Probability
−EUR 30 ≤ Ɛ ≤ −EUR 21 and EUR 21 ≤ Ɛ ≤ EUR 30 | 5% each (10%)
−EUR 20 ≤ Ɛ ≤ −EUR 16 and EUR 16 ≤ Ɛ ≤ EUR 20 | 10% each (20%)
−EUR 15 ≤ Ɛ ≤ −EUR 11 and EUR 11 ≤ Ɛ ≤ EUR 15 | 20% each (40%)
−EUR 10 ≤ Ɛ ≤ −EUR 6 and EUR 6 ≤ Ɛ ≤ EUR 10 | 10% each (20%)
−EUR 5 ≤ Ɛ < EUR 0 and EUR 0 ≤ Ɛ ≤ EUR 5 | 5% each (10%)
Lines highlighted in grey add up to the 70% success probability of achieving at least EUR 0.60 per forecast of the algorithm. Cells in white correspond to the remaining 30%.
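The stated 70% success probability can be checked by sampling the distribution in Table A5. The sketch below assumes that every whole-euro value within a band is equally likely (the table specifies only band probabilities); the helper names are our own:

```python
import random

# Bands of the random influence from Table A5, with per-side probabilities.
# Each range covers the whole-euro values of one side of a symmetric band.
BANDS = [
    (range(-30, -20), 0.05), (range(21, 31), 0.05),   # |eps| in [21, 30]
    (range(-20, -15), 0.10), (range(16, 21), 0.10),   # |eps| in [16, 20]
    (range(-15, -10), 0.20), (range(11, 16), 0.20),   # |eps| in [11, 15]
    (range(-10, -5), 0.10), (range(6, 11), 0.10),     # |eps| in [6, 10]
    (range(-5, 0), 0.05), (range(0, 6), 0.05),        # |eps| in [0, 5]
]

def draw_epsilon(rng):
    # Pick a band by its probability, then a uniform value within it.
    r = rng.random()
    cum = 0.0
    for band, p in BANDS:
        cum += p
        if r < cum:
            return rng.choice(list(band))
    return 0  # unreachable in practice (floating-point safety net)

rng = random.Random(42)
draws = [draw_epsilon(rng) for _ in range(100_000)]
share = sum(abs(e) <= 15 for e in draws) / len(draws)
print(round(share, 2))  # ~0.70: the algorithm earns at least EUR 0.60
```

Since the algorithm's forecast error equals the random influence, the simulated share of draws with |Ɛ| ≤ 15 matches the 70% success rate stated in the instructions.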
As the level of the random influence is not known when handing in a forecast, the optimal strategy is to insert the values of the influencing factors A, B, C, and D into the formula for the price formation mechanism and to assume an expected value of 0 for the random influence. This is precisely what the algorithm does. With the information available, it is thus not possible to make better forecasts than the algorithm.
When they make their own forecasts, the subjects also have the additional disadvantage that they do not know the exact formula for the price formation mechanism. They can only create an approximate picture of the price formation mechanism on the basis of examples of rounds of the game for which no payments were made (price history). For this purpose, they are provided with the exact level of the share price, the influencing factors A, B, C, and D, as well as the random influence from ten previous rounds. From this information, it is also already clear that making naïve forecasts—i.e., using the current price Kt without adaptation as a forecast for the following period Pt+1—and continuously forecasting the average price of the last ten rounds are not promising approaches.
Given the advantage that the algorithm has in terms of information, there is thus no reason to presume that the subjects could succeed in making better forecasts. In effect, they achieve an average total payment of EUR 8.94 with their approach. They are thus clearly behind the payment of EUR 10.03 obtained with the algorithm (p-value Wilcoxon rank-sum test ≤ 0.001). Decisions against using the algorithm can thus be considered algorithm aversion.

References

  1. Alexander, V.; Blinder, C.; Zak, P.J. Why trust an algorithm? Performance, cognition, and neurophysiology. Comput. Hum. Behav. 2018, 89, 279–288. [Google Scholar] [CrossRef]
  2. Youyou, W.; Kosinski, M.; Stillwell, D. Computer-based personality judgments are more accurate than those made by humans. Proc. Natl. Acad. Sci. USA 2015, 112, 1036–1040. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Beck, A.; Sangoi, A.; Leung, S.; Marinelli, R.J.; Nielsen, T.; Vijver, M.J.; West, R.; Rijn, M.V.; Koller, D. Systematic Analysis of Breast Cancer Morphology Uncovers Stromal Features Associated with Survival. Sci. Transl. Med. 2011, 3, 108–113. [Google Scholar] [CrossRef] [Green Version]
  4. Dawes, R. The Robust Beauty of Improper Linear Models in Decision Making. Am. Psychol. 1979, 34, 571–582. [Google Scholar] [CrossRef]
  5. Meehl, P. Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence; University of Minnesota Press: Minneapolis, MN, USA, 1954. [Google Scholar]
  6. Burton, J.; Stein, M.; Jensen, T. A Systematic Review of Algorithm Aversion in Augmented Decision Making. J. Behav. Decis. Mak. 2020, 33, 220–239. [Google Scholar] [CrossRef]
  7. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err. J. Exp. Psychol. 2015, 144, 114–126. [Google Scholar] [CrossRef] [Green Version]
  8. Castelo, N.; Bos, M.W.; Lehmann, D.R. Task-dependent algorithm aversion. J. Mark. Res. 2020, 56, 809–825. [Google Scholar] [CrossRef]
  9. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Manag. Sci. 2018, 64, 1155–1170. [Google Scholar] [CrossRef] [Green Version]
  10. Logg, J.M. Theory of Machine: When Do People Rely on Algorithms? Harvard Business School Working Paper 17-086; Harvard Business School: Boston, MA, USA, 2017. [Google Scholar]
  11. Mahmud, H.; Islam, A.N.; Ahmed, S.I.; Smolander, K. What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technol. Forecast. Soc. Chang. 2022, 175, 121390. [Google Scholar] [CrossRef]
  12. Nagtegaal, R. The impact of using algorithms for managerial decisions on public employees’ procedural justice. Gov. Inf. Q. 2021, 38, 101536. [Google Scholar] [CrossRef]
  13. Fayyaz, Z.; Ebrahimian, M.; Nawara, D.; Ibrahim, A.; Kashef, R. Recommendation systems: Algorithms, challenges, metrics, and business opportunities. Appl. Sci. 2020, 10, 7748. [Google Scholar] [CrossRef]
  14. Upadhyay, A.K.; Khandelwal, K. Applying artificial intelligence: Implications for recruitment. Strateg. HR Rev. 2018, 17, 255–258. [Google Scholar] [CrossRef]
  15. Highhouse, S. Stubborn Reliance on Intuition and Subjectivity in Employee Selection. Organ. Psychol. 2008, 1, 333–342. [Google Scholar] [CrossRef]
  16. Wormith, J.S.; Goldstone, C.S. The Clinical and Statistical Prediction of Recidivism. Crim. Justice Behav. 1984, 11, 3–34. [Google Scholar] [CrossRef]
  17. Gladwell, M. Blink: The Power of Thinking without Thinking; Back Bay Books: New York, NY, USA, 2007. [Google Scholar]
  18. Grove, W.; Zald, D.; Lebow, B.; Snitz, B.; Nelson, C. Clinical versus mechanical prediction: A meta-analysis. Psychol. Assess. 2000, 12, 19–30. [Google Scholar] [CrossRef] [PubMed]
  19. Dawes, R.; Faust, D.; Meehl, P. Clinical Versus Actuarial Judgment. Science 1989, 243, 1668–1674. [Google Scholar] [CrossRef] [PubMed]
  20. Adams, I.; Chan, M.; Clifford, P.; Cooke, W.M.; Dallos, V.; Dombal, F.T.; Edwards, M.; Hancock, D.; Hewett, D.J.; McIntyre, N. Computer aided diagnosis of acute abdominal pain: A multicentre study. Br. Med. J. 1986, 293, 800–804. [Google Scholar] [CrossRef] [Green Version]
  21. Jussupow, E.; Benbasat, I.; Heinzl, A. Why are we averse towards Algorithms? A comprehensive literature Review on Algorithm aversion. In Proceedings of the 28th European Conference on Information Systems (ECIS), Marrakech, Morocco, 15–17 June 2020; pp. 1–16. [Google Scholar]
  22. Prahl, A.; Van Swol, L. Understanding algorithm aversion: When is advice from automation discounted? J. Forecast. 2017, 36, 691–702. [Google Scholar] [CrossRef]
  23. Filiz, I.; Judek, J.R.; Lorenz, M.; Spiwoks, M. Algorithm Aversion as an Obstacle in the Establishment of Robo Advisors. J. Risk Financ. Manag. 2022, 15, 353. [Google Scholar] [CrossRef]
  24. Hodge, F.D.; Mendoza, K.I.; Sinha, R.K. The effect of humanizing robo-advisors on investor judgments. Contemp. Account. Res. 2021, 38, 770–792. [Google Scholar] [CrossRef]
  25. Kawaguchi, K. When Will Workers Follow an Algorithm? A Field Experiment with a Retail Business. Manag. Sci. 2021, 67, 1670–1695. [Google Scholar] [CrossRef]
  26. Kim, J.; Giroux, M.; Lee, J.C. When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations. Psychol. Mark. 2021, 38, 1140–1155. [Google Scholar] [CrossRef]
  27. Ben David, D.; Resheff, Y.S.; Tron, T. Explainable AI and Adoption of Financial Algorithmic Advisors: An Experimental Study. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event, 19–21 May 2021; pp. 390–400. [Google Scholar]
  28. Honeycutt, D.; Nourani, M.; Ragan, E. Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Virtual Event, 25–29 October 2020; Volume 8, pp. 63–72. [Google Scholar]
  29. Stumpf, S.; Sullivan, E.; Fitzhenry, E.; Oberst, I.; Wong, W.K.; Burnett, M. Integrating rich user feedback into intelligent user interfaces. In Proceedings of the 13th International Conference on Intelligent User Interfaces, Gran Canaria, Spain, 13–16 January 2008; pp. 50–59. [Google Scholar]
  30. Colarelli, S.M.; Thompson, M.B. Stubborn Reliance on Human Nature in Employee Selection: Statistical Decision Aids Are Evolutionarily Novel. Ind. Organ. Psychol. 2008, 1, 347–351. [Google Scholar] [CrossRef]
  31. Taylor, E.L. Making sense of “algorithm aversion”. Res. World 2017, 2017, 57. [Google Scholar] [CrossRef]
  32. Landsbergen, D.; Coursey, D.H.; Loveless, S.; Shangraw, R. Decision Quality, Confidence, and Commitment with Expert Systems: An Experimental Study. J. Public Adm. Res. Theory 1997, 7, 131–158. [Google Scholar] [CrossRef] [Green Version]
  33. Nolan, K.P.; Highhouse, S. Need for autonomy and resistance to standardized employee selection practices. Hum. Perform. 2014, 27, 328–346. [Google Scholar] [CrossRef]
  34. Berger, B.; Adam, M.; Rühr, A.; Benlian, A. Watch me improve—Algorithm aversion and demonstrating the ability to learn. Bus. Inf. Syst. Eng. 2021, 63, 55–68. [Google Scholar] [CrossRef]
  35. Efendić, E.; Van de Calseyde, P.P.; Evans, A.M. Slow response times undermine trust in algorithmic (but not human) predictions. Organ. Behav. Hum. Decis. Processes 2020, 157, 103–114. [Google Scholar] [CrossRef] [Green Version]
  36. Önkal, D.; Goodwin, P.; Thomson, M.; Gönül, S.; Pollock, A. The Relative Influence of Advice from Human Experts and Statistical Methods on Forecast Adjustments. J. Behav. Decis. Mak. 2009, 22, 390–409. [Google Scholar] [CrossRef]
  37. Logg, J.; Minson, J.; Moore, D. Algorithm appreciation: People prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Processes 2019, 151, 90–103. [Google Scholar] [CrossRef]
38. Jung, M.; Seifert, M. Towards a better understanding on mitigating algorithm aversion in forecasting: An experimental study. J. Manag. Control 2021, 32, 495–516. [Google Scholar] [CrossRef]
  39. Fischbacher, U. z-Tree: Zurich toolbox for ready-made economic experiments. Exp. Econ. 2007, 10, 171–178. [Google Scholar] [CrossRef] [Green Version]
  40. Filiz, I.; Judek, J.R.; Lorenz, M.; Spiwoks, M. Reducing algorithm aversion through experience. J. Behav. Exp. Financ. 2021, 31, 100524. [Google Scholar] [CrossRef]
  41. Filiz, I.; Nahmer, T.; Spiwoks, M. Herd behavior and mood: An experimental study on the forecasting of share prices. J. Behav. Exp. Financ. 2019, 24, 1–10. [Google Scholar] [CrossRef]
  42. Meub, L.; Proeger, T.; Bizer, K.; Spiwoks, M. Strategic coordination in forecasting—An experimental study. Financ. Res. Lett. 2015, 13, 155–162. [Google Scholar] [CrossRef] [Green Version]
  43. Becker, O.; Leitner, J.; Leopold-Wildburger, U. Expectation formation and regime switches. Exp. Econ. 2009, 12, 350–364. [Google Scholar] [CrossRef]
Figure 1. Level of the influencing factor ‘Sentiment of capital market actors’.
Figure 2. Comparison of the decisions in favor of the algorithm or the subjects’ own forecasts per treatment.
Figure 3. Average payment in the three treatments depending on the strategy chosen when making the forecasts (own forecast or delegation to the algorithm).
Table 1. Influencing factors in the formation of the share price.
Influencing Factor    Influence    Strength of the Influence
A                     Positive     Strong
B                     Negative     Strong
C                     Positive     Strong
D                     Positive     Medium
Table 2. Performance-related payment for the forecasts.
Deviation of the Forecast from the Actual Share Price    Payment for the Forecast
EUR 0 ≤ |Kt − Pt| ≤ EUR 5      EUR 1.20
EUR 5 < |Kt − Pt| ≤ EUR 10     EUR 0.90
EUR 10 < |Kt − Pt| ≤ EUR 15    EUR 0.60
EUR 15 < |Kt − Pt| ≤ EUR 20    EUR 0.30
|Kt − Pt| > EUR 20             EUR 0.00
where Kt = share price at time t and Pt = forecast for time t.
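The payment scheme in Table 2 is a simple step function of the absolute forecast error |Kt − Pt|. As a minimal sketch (the function name and signature are illustrative, not taken from the paper's materials):

```python
def forecast_payment(k_t: float, p_t: float) -> float:
    """Performance-related payment (in EUR) for one forecast, per Table 2.

    k_t: share price that actually occurred at time t
    p_t: forecast submitted for time t
    """
    deviation = abs(k_t - p_t)  # absolute forecast error |Kt - Pt| in EUR
    if deviation <= 5:
        return 1.20
    if deviation <= 10:
        return 0.90
    if deviation <= 15:
        return 0.60
    if deviation <= 20:
        return 0.30
    return 0.00
```

Note that the tier boundaries are inclusive on the upper end, so a deviation of exactly EUR 10 still earns EUR 0.90.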
Table 3. Performance of the subjects in relation to their chosen strategy when making their forecasts (own forecasts or delegation to the algorithm).
Strategy                                                                              n    Ø Forecast Error [in EUR] *   Ø Bonus per Round [in EUR]   Ø Total Payment [in EUR]
Own forecasts                                                                         67   18.2776                       0.4939                       8.94
Forecasts by the algorithm without the opportunity to influence it (Treatment 1)      23   13.4000                       0.6000                       10.00
Forecasts by the algorithm with an opportunity to influence the output (Treatment 2)  36   13.5167                       0.5992                       9.99
Forecasts by the algorithm with an opportunity to influence the input (Treatment 3)   31   13.2968                       0.6106                       10.11
Total                                                                                 157  15.4879                       0.5566                       9.57
* Ø deviation between the forecast share price and the share price that actually occurred.
Table 4. Comparison of the performance of the subjects across all three treatments.
Treatment                                          n    Ø Forecast Error [in EUR] *   Ø Bonus per Round [in EUR]   Ø Total Payment [in EUR]
No influence possible (Treatment 1)                52   16.1788                       0.5423                       9.42
Influence on the algorithmic output (Treatment 2)  52   15.1442                       0.5677                       9.68
Influence on the algorithmic input (Treatment 3)   53   15.1472                       0.5598                       9.60
Total                                              157  15.4879                       0.5566                       9.57
* Ø deviation between the forecast share price and the share price that actually occurred.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Gubaydullina, Z.; Judek, J.R.; Lorenz, M.; Spiwoks, M. Comparing Different Kinds of Influence on an Algorithm in Its Forecasting Process and Their Impact on Algorithm Aversion. Businesses 2022, 2, 448-470. https://doi.org/10.3390/businesses2040029
