Article

Rational Herding in Reward-Based Crowdfunding: An MTurk Experiment

1 Department of Corporate Finance and ERI-CES, University of Valencia, Avenida de los Naranjos, s/n, 46022 Valencia, Spain
2 Department of Economic Analysis and ERI-CES, University of Valencia, Avenida de los Naranjos, s/n, 46022 Valencia, Spain
3 Department of Corporate Finance, University of Valencia, Avenida de los Naranjos, s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(23), 9827; https://doi.org/10.3390/su12239827
Submission received: 20 October 2020 / Revised: 20 November 2020 / Accepted: 22 November 2020 / Published: 24 November 2020

Abstract

Crowdfunding is gaining popularity as a way of financing socially sustainable initiatives. We performed a controlled economic experiment on MTurk, simulating a crowdfunding platform, and developed a theoretical model that rationalizes herding behavior. The experiment was designed to test and quantify the causal effects of revealing specific information to prospective backers: (i) the number of early contributors already financing the project and (ii) positive opinions of other backers versus those of experts. The results show that early contributions to the campaign and positive opinions of peers act as a reinforcing signal to potential backers and affect backers’ beliefs about the probability of success, increasing contributions to the campaign. Furthermore, we show that herding is rational and set expectations about when we should observe rational herding and when we should not. The theoretical model captures this rational herding, which may be the main information aggregation path in reward-based crowdfunding platforms, and can help managers increase the likelihood of success in crowdfunding campaigns.

1. Introduction

Given that sustainability-oriented initiatives face considerable obstacles in raising funds through traditional channels, crowdfunding has become a fast-growing way of financing environmentally and socially sustainable projects through the Internet [1,2,3,4]. However, most crowdfunding campaigns do not succeed in securing funds [5]. To increase the likelihood of success, it is important to understand how the crowd behaves in this context. Moreover, as Petruzzelli et al. (2019) point out, increasing the number of successful crowdfunding campaigns for sustainability-oriented initiatives can help sensitize public opinion to sustainability issues and further encourage individuals to adopt sustainable models of behavior [4].
In crowdfunding, unlike in traditional funding methods, many individuals (the crowd) provide funds directly to entrepreneurs rather than through a financial intermediary, to whom oversight of investment has traditionally been delegated. To reflect this key distinction, crowdfunding has been explicitly defined as a venture “without standard financial intermediaries [6].” Crowdfunding platforms represent a new type of intermediary, bringing together fund seekers and a huge crowd of small fund providers [7,8,9].
Given the uncertainty about a campaign’s probability of reaching the funding goal, and problems of asymmetric information associated with entrepreneurial financing, crowdfunding platforms and fund seekers face major challenges related to the information and signals to be sent to prospective backers.
At the same time, with so much uncertainty, herding is common in all types of crowdfunding [10,11]. Herding can be described as imitating the majority. Given how widespread herding is in crowdfunding, understanding the mechanisms that drive it is of immense importance.
Specifically, knowing whether herding in crowdfunding is rational would help measure its causal effect: rational observational learners interpret the herd by making unbiased inferences from the decisions they observe [12]. In sequential choice settings, it may be optimal, ex ante, to imitate observed behaviors [13,14]. Thus, rational herding requires observers to make unbiased inferences from the decisions they observe. If herding were assumed to be irrational, its effect would be underestimated, because powerful rational drivers that add to any irrational imitation would be ignored. We show that herding in this context is rational and set expectations about when we should observe rational herding and when we should not.
Therefore, learning how rationality may be integrated with herding behavior is important for designing management strategies to maneuver the herd so that environmentally and socially sustainable enterprises can succeed in crowdfunding.
Previous empirical research has explored herd behavior related to (i) early contributions and (ii) the influence of peers’ and experts’ recommendations in crowdfunding and has shown the importance of conditions related to high funding achievement [10,11,15,16,17,18]. However, while empirical studies generally confirm that previous backers and peers’ and experts’ recommendations relate to high funding achievement, they provide incomplete insight into the degree to which those pieces of information distort the allocation of resources and fail to establish direct causality. For example, successfully funded initiatives may have reached the funding goal because of past positive ratings, or, alternatively, they may have received positive ratings because they are of high quality and thus achieved the funding goal.
These empirical limitations may be overcome through randomized experiments. Previous experiments on social influence on the Internet and in crowdfunding [19,20,21,22,23,24,25] generally find that early contributions and positive opinions explain significant increments in subsequent contributions, which suggests that these results may generalize.
We go further and present an ad hoc controlled online economic experiment where subjects were rewarded depending on their decisions and those of others. Thus, information and payoff externalities are present, as occurs in crowdfunding markets. In this way, the experiment allows us to analyze the influence of rational herding on decisions in crowdfunding.
The controlled economic experiment, run through Amazon Mechanical Turk (MTurk), simulated a crowdfunding platform. MTurk is suitable for experiments with a large number of experimental subjects, 847 in this case. Additionally, the subject pool was diverse, including residents of the United States and India and an adjusted proportion of men and women (250 men and 250 women from the United States; 250 men and 97 women from India), instead of the usual undergraduate population.
The experiment was designed to test the causal effects of revealing specific information to prospective contributors (i.e., backers) of a reward-based crowdfunding platform. More precisely, the experiment tested the causal effects of two pieces of information on choices by prospective female and male contributors (backers): (i) the number of early contributors already financing the project; and (ii) the positive opinions of other backers and/or experts.
The results of the controlled economic experiment show that early contributions affect backers’ beliefs about the campaign’s probability of success, thereby increasing contributions to the campaign. The results also confirm that positive opinions of peers, as shown nowadays in online social networks, are more important than experts’ comments in increasing campaign contributions. Positive opinions of peers act as a proxy for subsequent contributions.
In this paper, we also develop a model that captures rational herding and shows that it acts as the main information aggregation path in reward-based crowdfunding platforms. Herding is shown in the experimental setting and supported by the theoretical model. Revealing information influences backers’ beliefs regarding projects’ probability of success and, consequently, alters backers’ choices.
Our contribution to the rational herding literature in crowdfunding is twofold. On the experimental side, this is the first controlled experiment in crowdfunding with a large and diversified number of participants (men and women, from the United States and from India) that tests the causal effect and quantifies the extent to which backers respond to the information announced by a crowdfunding platform. On the theoretical side, we give theoretical support to our experimental results by modeling backers’ beliefs as a random variable whose mean is updated over time, as new information is announced. Thus, we analyze the backers’ best response to the platform’s actions. The impact of information on choices is analyzed by comparing the posterior distributions conditional on choices. Backers’ best responses to information provide a good guide for crowdfunding platforms’ future decisions on which information to release in crowdfunding campaigns. Finally, we show that herding is rational and set expectations about when we should observe rational herding and when we should not.
The paper is organized as follows. Section 2 provides an overview of the literature. Section 3 presents the theoretical model and describes the experimental design and procedures. Section 4 presents the experimental results and their theoretical analysis. Section 5 offers conclusions and managerial implications.

2. Background

The concept of crowdfunding is derived from the broader concept of crowdsourcing [26]. In crowdsourcing, a task previously performed by an employee is outsourced to a crowd of people in the form of an online open call [27]. In crowdfunding, this online open call reaches the crowd through a crowdfunding platform. Project creators post their projects and define a reward scheme (a menu of reward items and their prices) to attract backers. Information between project creators and backers is asymmetric [28]. Creators know the real quality of their projects and have a better idea of the funding probability of success, whereas backers do not. Backers lack the necessary information to properly estimate the chances of success of the proposed campaign.
The four major crowdfunding models—donation-based crowdfunding, reward-based crowdfunding, crowdinvesting, and crowdlending—differ in terms of the reward that backers receive. In donation-based crowdfunding, backers pledge funds but receive no financial compensation; crowdinvesting refers to participation by multiple individuals in the uncertain future cash flows of a firm or project in the form of equity, mezzanine, or debt finance; crowdlending provides fund seekers with fixed-interest loans to be repaid to a large number of lenders [29].
Of these four models, reward-based crowdfunding, the model examined in this research, is primarily used by entrepreneurs to finance the manufacture of new products or services. Backers are compensated with either a tangible reward (e.g., a sample of the final product) or an intangible reward (e.g., having their name written on the product packaging). It has been pointed out that reward-based crowdfunding has the potential to democratize access to innovation and entrepreneurship [30], and therefore the potential to impact new initiatives aimed at sustainability. Additionally, when asymmetric information is important, it has been argued that high-quality projects prefer reward-based crowdfunding [31], although other authors, such as Belleflamme et al. (2014), claim that asymmetric information favors equity-based crowdfunding.
Kickstarter, which is one of the world’s largest platforms connecting fund seekers with contributors, provides a useful example to explain the dynamics of reward-based crowdfunding. Kickstarter focuses on creative projects and does not accept charitable causes. Members, after joining the online community, can ask for funding for their ideas, contribute to others, and post comments. On one side are the members aiming to undertake a project (creators). They must publish a description of the deliverables to be produced with the contributed funds, along with visual content, a statement of the purpose of the project, the funding goal, and the duration of the campaign. During the funding cycle, creators can post updates to encourage additional support for their projects.
Funding is provided on an all-or-nothing basis. Although backers are refunded if the campaign fails (i.e., if the project does not reach its funding goal), backers face a monetary (payoff externality) and a non-monetary opportunity cost when the fundraising goal is not achieved [32]. Supporters are primarily attracted by a purchasing motive that, in some cases, is combined with altruistic and involvement motives, that is, the purely internal satisfaction derived from contributing to a worthy cause (altruistic) and the utility of having public recognition of a contribution (involvement) [33]. Additionally, some scholars suggest that contributors meet their human need for social affiliation by engaging in communities of like-minded members [34] and satisfy their desire for patronage because they are aware of their role in contributing to the success of a project [35]. Research related to behavioral finance has started to study the drivers of investment in crowdfunding and has confirmed that altruism may play a role [36,37]. Thus, there is intrinsic and extrinsic motivation for rational herding in project funding [37].
Therefore, a major source of uncertainty relates to a campaign’s probability of success in terms of reaching the funding goal [38]. Additionally, the probability of success provides the main payoff or opportunity cost. Although reward-based crowdfunding platforms have handled massive amounts of funding, prospective backers are very often uncertain about an entrepreneur’s ability to attract enough contributions to fund the project. Projects on Kickstarter have raised approximately $4 billion from 16 million backers. However, 64.12% of the crowdfunding projects on Kickstarter have failed to reach their funding goals, as indicated on its website.
Although many factors can influence a campaign’s success, it has been reported that funds from early backers are often the only difference between a project reaching the funding goal or not [39]. Early contributions matter in two ways. First, this information signals the quality of the project to potential backers. This signal can in turn trigger social learning behavior [40], which is particularly important to stimulate sustainable models, and increase contributions from other potential backers [41]. Kuppuswamy and Bayus (2018), using a sample of 25,058 Kickstarter projects, showed that prospective backers usually make their pledging decisions based on how much of the project goal has already been funded by others [38]. Second, backers who have made an early contribution are likely to spread information about the project, which may attract additional contributions [42]. Both reasons indicate the importance of early backers’ contributions to a campaign’s success.
In this sense, our experiment is designed to measure the causal effect of previous backers’ choices and recommendations on subsequent backers’ choices. Therefore, our paper is related to works that build on the insights from observational learning and other social influence research. Empirically, previous research has shown robust evidence of herding behavior in lending-based platforms [10,11,15,16,17,18]. Specifically, Astebro et al. (2018) show that the size and likelihood of a pledge are affected by the size of, and the time elapsed since, the most recent pledge, and Chan et al. (2019), also incorporating signals such as videos and entrepreneurs’ passion, propose a U-shaped relationship between prior funding and subsequent contributions: the relationship is negative when prior funding amounts are small and positive when prior funding amounts are large. Our controlled experiment was not designed to analyze different sizes and timings of previous backers’ choices.
In addition, several studies have run randomized controlled experiments in which a treatment group receives an early donation or recommendations and the control group does not [19,20,21,22,23,24,25,43,44]. Although most of them have found that early contributions and positive opinions explain significant increments in subsequent contributions, not all of them have reached this result. Koning and Model (2013) and Zaggl and Block (2019) both found that making a small initial contribution (e.g., $5) to a project significantly decreased its probability of success. Our results, however, are in line with the general evidence of herding behavior in reward-based platforms.
Herding occurs when the observed behavior of others is used to inform one’s decision, mimicking this behavior. Herding can help improve the decision of the imitating individual [13]. The theoretical explanation for herding behavior is that the observed behavior reveals information that would not otherwise be available to the decision maker. Therefore, the uncertainty is reduced. Herding is considered rational when the observed behavior of others is used appropriately and improves decision making [13,14,45,46,47].
It can also be irrational, that is, when observed behavior reduces decision quality, for example when the information is overestimated or information cascades filter behaviors that lead to suboptimal decisions. For example, rational investors with similar stock preferences adopt the same response to similar information about company characteristics and fundamentals. When the herding of investors is rational in response to new information, herding moves prices toward the fundamental value of assets; price movement is not likely to reverse. By contrast, irrational herding occurs when investors with insufficient information and inadequate risk evaluation disregard their prior beliefs and blindly follow other investors’ actions. Non-information based herding might lead to market inefficiencies, drive asset prices away from fundamental values, and cause asset mispricing. Thus, our work also relates to the theoretical literature on rational herding [13,14,45,46,47], rational herding in financial markets [48,49,50] and rational herding in crowdfunding [51].
However, none of these models was fit to guide the detection of rational herding in our experiment. We develop a model that rationalizes herding behavior in crowdfunding. Our model deals with the backers’ strategic uncertainty and models backers’ beliefs as a random variable whose mean is updated over time, by Bayesian methods, as new information is announced. Thus, we analyze the backers’ best response to the platform’s actions, which gives theoretical support to our experimental results. Information-based actions need time to update and review choices. In general, it is not easy to precisely distinguish rational herding from irrational herding. In empirical papers one can analyze time series data and observe the evolution across time (for example, [48,49,50]), but with two periods we can only rely on the two observed distributions of choices and behavioral learning. In spite of these difficulties, we set expectations about when we should observe rational herding and when we should not.
Chakraborty and Swinney consider an entrepreneur designing a fixed funding reward-based crowdfunding campaign for an innovative product [52]. Product quality is known to the entrepreneur but unknown to some backers. They employ a game theoretic model of signaling between an entrepreneur and campaign backers. They study how the entrepreneur can signal quality to backers via the design of the crowdfunding campaign, including the price of the reward and the funding target. The signals in our model, either the number of contributors to the different projects or the opinions posted on the platform by peers and experts, are different, and can be seen as signals of social approval and trustworthiness.
Miglo and Miglo (2019), in turn, consider entrepreneurial moral hazard related to the entrepreneur’s equity stake in the project, while his individual effort is costly, and this cost is not shared [31]. The crucial aspect here is the update of the market’s beliefs. The authors consider either the normal Bayesian rule, or that a lot of information becomes available regarding the product’s quality as a result of market participants’ interactions with each other and with the firm, and then the extent of asymmetric information is reduced. In contrast, we isolate the analysis of market beliefs and analyze their influence on backers’ choices. We do not model the platform’s best response to the backers’ beliefs, which leaves moral hazard problems out of the scope of our study.

3. Research Method

3.1. The Theoretical Model

In this subsection we offer a theoretical model that rationalizes herding behavior by backers in a reward-based crowdfunding platform.
Consider a crowdfunding platform where two similar projects have been launched. There are two different scenarios (or treatments). The first offers basic information about the projects (e.g., their characteristics, the funding goal, and the campaign duration); the second offers more detailed information (e.g., the money already pledged by early backers, the number of backers that have already chosen a given project, or the opinions of other backers and experts). A finite set of backers decide which project to fund. Assume that the campaign lasts for two periods and that backers make decisions in each of them. The first scenario is denoted t = 1 and, accordingly, the second is t = 2. Thus, at t = 1, backers only know that there are two similar projects with the same funding goal. However, at time 1 < t < 2, the crowdfunding platform announces some new information to the backers, who again make decisions at t = 2. This two-period dynamic model will allow us to capture the changes in choices, if any, from t = 1 to t = 2, once new information has been added.
Each backer must decide which of two comparable projects to fund. A project is deemed successful if it achieves its funding goal, which is the same for both projects. Rational backers make decisions that maximize their utility given their knowledge and conjectures regarding other agents’ decisions. Backers have a well-defined Bernoulli utility function (or preferences) for the projects, $u_A$ or $u_B$, and they make decisions under uncertainty. A major source of uncertainty is the probability of success, namely whether the campaign will reach its funding goal and be financed. Knowing the probability of success of a project would imply knowing the other backers’ choices. Therefore, because initially there is neither information about, nor coordination among, backers, they must assign a prior probability to the likelihood of success of the projects to solve their decision problem under strategic uncertainty.
Given the underlying uncertainty, rational backers maximize their project’s expected utility. Thus, letting A and B denote the two projects, each backer interprets them as uncertain prospects (or lotteries) with an assigned probability of success. Let $p$ denote the probability of success of project A. Therefore, at time t = 1, each backer compares the expected utilities of the two projects and chooses the one with the highest expected utility: project A will be chosen by a backer if and only if $E[u(A \mid p)] \geq E[u(B \mid 1-p)]$; similarly, project B will be chosen by a backer if and only if $E[u(B \mid 1-p)] \geq E[u(A \mid p)]$. Recall that at t = 1, there is no information on the aggregate amount already pledged to the projects or on the number of backers supporting them. Therefore, potential backers must make subjective conjectures about $p$.
At 1 < t < 2, certain specific pieces of information are added. As in the first situation, backers update their beliefs about the probability of success, say $\hat{p} = p \mid \text{information}$, and choose the project with the highest expected utility given this information. Thus, a backer prefers A to B whenever:
$E[u(A \mid \hat{p})] \;\geq\; E[u(B \mid 1-\hat{p})]$
This information refers, for example, to the number of backers having already chosen A or B, which the platform announces at 1 < t < 2. Choices are made at t = 2. Now, depending on the new probability of success of A, a backer may change the choice made in the situation without extensive information (Scenario 1) to a new choice in the situation with more information (Scenario 2). The model is then a two-stage dynamic decision model, where backers depart from a situation of no information, update their beliefs, $p$, by Bayes’ rule, obtain new beliefs, $\hat{p}$, and make choice decisions at t = 2.

3.2. Dealing with Strategic Uncertainty

An important piece of the analysis is to model backers’ beliefs about the projects’ probability of success. Consider the prior probability distribution of the probability of success of project A. Given the strategic uncertainty about other backers’ decisions, a realistic assumption is that each backer has no information about this distribution. Therefore, an appropriate way to model it is to treat $p$ as a random variable with a given distribution, about which backers have no information.
We may then assume that it is common knowledge that $p$ follows a beta distribution: $p \sim \mathrm{Beta}(\alpha, \beta)$. The beta distribution is a family of continuous probability distributions defined on the interval [0, 1] and parametrized by two positive shape parameters, $\alpha$ and $\beta$, which appear as exponents of the random variable and control the shape of the distribution (a special case is $\alpha = \beta = 1$, which coincides with the uniform distribution on [0, 1]). This distribution represents a family of probabilities and is a versatile way to represent outcomes for percentages or proportions. Beta distributions can be understood as representing probability distributions of probabilities; in other words, they represent all possible values of a probability when these values are unknown. The expected value, or mean ($\mu$), of a beta-distributed random variable $p$ with parameters $\alpha$ and $\beta$ is a function only of the ratio $\alpha/\beta$ of these parameters:
$\mu(p) = \dfrac{\alpha}{\alpha + \beta}$
and its variance is $V(p) = \dfrac{\alpha \beta}{(\alpha + \beta)^2 (\alpha + \beta + 1)}$.
Hence, backers will choose project A whenever $E[u(A \mid p)] = u_A\, E[p] \geq u_B\, E[1-p] = E[u(B \mid 1-p)]$, and the other way around.
To further reflect the above-mentioned strategic uncertainty, suppose that $\alpha = \beta = \tfrac{1}{2}$. Thus, $\mu(p) = \tfrac{1}{2}$, and $E[u(A \mid p)] = u_A\, E[p] = u_A \times \tfrac{1}{2}$ and $E[u(B \mid 1-p)] = u_B\, E[1-p] = u_B \times \tfrac{1}{2}$.
When new information is released at time t, backers update their common prior distribution by Bayes’ rule. Suppose the new information consists of the number of backers, S and T, that have already chosen projects A and B, respectively, at time t. Then, the posterior distribution of the probability of success (i.e., the new distribution of $p$ conditional on this information) is
$\hat{p} \sim \mathrm{Beta}(\alpha + S, \beta + T)$, with mean
$\mu(\hat{p}) = \dfrac{\alpha + S}{\alpha + \beta + S + T}$
With $\alpha = \beta = \tfrac{1}{2}$, the above equation translates to
$\mu(\hat{p}) = \dfrac{1 + 2S}{2(1 + S + T)}$
Consequently, backers will choose project A whenever:
$E[u(A \mid \hat{p})] = u_A \times \dfrac{1 + 2S}{2(1 + S + T)} \;\geq\; u_B \times \dfrac{1 + 2T}{2(1 + S + T)}$
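To illustrate the updating rule above, the following minimal Python sketch (ours, not part of the original study; the utility values and backer counts are hypothetical inputs chosen for illustration) computes a backer's posterior belief under the Beta(1/2, 1/2) prior and applies the expected-utility choice rule.

from scipy.stats import beta

def choose_project(u_A, u_B, S, T, a=0.5, b=0.5):
    # Posterior belief about A's success after learning that S backers chose A and T chose B
    p_hat = beta(a + S, b + T).mean()   # equals (a + S) / (a + b + S + T)
    eu_A = u_A * p_hat                  # E[u(A | p_hat)]
    eu_B = u_B * (1 - p_hat)            # E[u(B | 1 - p_hat)]
    return "A" if eu_A >= eu_B else "B"

# With identical utilities and the early-backer counts used later in Situation 1 (35 vs. 4):
print(choose_project(1.0, 1.0, S=35, T=4))  # -> "A", since the posterior mean is about 0.89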

3.3. The Impact of New Information

Backers do not see the realized choices of projects over the two periods of time, but we, as theoreticians, do. A way to observe the impact of information on beliefs, and hence on choices, is to compare the posterior distributions once the choices have been made, before and after the information release. Then, suppose that we observe the project choices made under the prior probability distribution $p$, and let R denote the observed choices of project A and W those of project B. As seen above, the posterior of $p$ conditional on the observed choices is then $\tilde{p} \sim \mathrm{Beta}(\alpha + R, \beta + W) = \mathrm{Beta}(\tfrac{1}{2} + R, \tfrac{1}{2} + W)$, with mean $\mu(\tilde{p}) = \dfrac{1 + 2R}{2(1 + R + W)}$.
Now, after the first choices at time t = 1, some information is released at time t, and backers update their prior probability $p$ and obtain $\hat{p} \sim \mathrm{Beta}(\alpha + S, \beta + T)$, with mean $\mu(\hat{p}) = \dfrac{1 + 2S}{2(1 + S + T)}$, as computed above.
Then, choices are made again at t = 2. The theorist observes these choices and updates $\hat{p}$, now her prior distribution, conditional on the choices observed at t = 2. Suppose that K choices of project A and M choices of project B have been observed. The posterior distribution of $\hat{p}$ is $\tilde{\hat{p}} \sim \mathrm{Beta}(\alpha + S + K, \beta + T + M)$, with mean $\mu(\tilde{\hat{p}}) = \dfrac{1 + 2(S + K)}{2(1 + S + T + K + M)}$.
From a theoretical point of view, the impact of information on choice behavior comes from the analysis of the two posterior distributions $\tilde{p}$ and $\tilde{\hat{p}}$, namely, by comparing their means and observing whether there has been a shift to the right or to the left, that is, what the backers’ best responses to the information release have been. Recalling that $p$ denotes the probability distribution of the success of project A, a shift to the right implies a positive impact on the number of backers choosing project A. Furthermore, whenever the distribution variances are the same, such a shift means that one distribution dominates the other in the sense of first-order stochastic dominance.
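As an illustrative sketch (an assumption about how one could implement it, not the authors' code), the theorist-side comparison described above can be computed as follows, including a numerical check of first-order stochastic dominance via the two CDFs; the argument names follow the notation of this subsection.

import numpy as np
from scipy.stats import beta

def posterior_shift(R, W, S, T, K, M, a=0.5, b=0.5):
    post_1 = beta(a + R, b + W)          # posterior after the choices observed at t = 1
    post_2 = beta(a + S + K, b + T + M)  # posterior after the information release and the choices at t = 2
    grid = np.linspace(0.0, 1.0, 1001)
    # Shift to the right and (approximate) first-order stochastic dominance of post_2 over post_1
    fosd = bool(np.all(post_2.cdf(grid) <= post_1.cdf(grid) + 1e-9))
    return post_1.mean(), post_2.mean(), fosd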
We next summarize the timing of the two-period model and the corresponding updating.

3.4. Timing Summary

Agents: a crowdfunding platform and a finite set of homogeneous backers.
t = 1: The platform launches two projects with basic information, and backers make a choice between projects A and B. The choice depends on their utility value (or preferences) for the projects and on their prior beliefs about the projects’ probability of success. The prior distribution of the probability of success of project A is denoted by $p$.
Choices made by backers give rise to a posterior probability distribution of the success of project A, conditional on the choices at t = 1, $\tilde{p} = p \mid \text{observed choices}_1$, which is not observed by backers.
1 < t < 2: The platform adds new information on its web page. This information refers to the number of backers that have already chosen projects A and B at t.
With this information and following Bayes’ rule, backers update their beliefs about the probability of success, $p$, obtaining $\hat{p} = p \mid \text{information}$.
t = 2: Backers choose again between projects A and B, according to distribution $\hat{p}$.
Choices made by backers give rise to a new update of distribution $\hat{p}$ (now the new prior probability distribution), conditional on the choices at t = 2, yielding the posterior probability distribution of success $\tilde{\hat{p}} = \hat{p} \mid \text{observed choices}_2$, which is not observed by backers.
The following timeline illustrates the different Bayesian updates:
t = 1 (platform launches A and B; backers choose): theoretical updating $\tilde{p} = p \mid \text{observed choices}_1$.
1 < t < 2 (platform releases partial information): backers’ updating $\hat{p} = p \mid \text{information}$.
t = 2 (backers choose again): theoretical updating $\tilde{\hat{p}} = \hat{p} \mid \text{observed choices}_2$.
This model will help us rationalize backers’ choices and ascertain whether the herding is rational.

3.5. Experimental Design and Procedures

Amazon Mechanical Turk (MTurk) is generally well suited to economic and psychological experiments because it provides instant access to a large and culturally diverse subject pool [53]. For experiments on subject behavior in the online sharing economy, such as the present experiment, MTurk is especially appropriate because the subjects in the pool are familiar with online platforms and culture and thus resemble crowdfunding backers. These advantages outweigh the loss of control over attentiveness during decision making relative to laboratory-based economic experiments. Moreover, experimental comparisons of attentiveness between undergraduates and MTurk subjects [54,55,56] generally validate MTurk’s suitability for data collection by confirming that classical heuristics, biases, and levels of attentiveness to directions are comparable to those in traditional subject pools. Furthermore, it has been shown that, for samples with only highly reputed MTurk subjects (HIT approval rate > 95%), such as those in the present experiment, data quality is higher in terms of the attention check questions (ACQs) [57].
We sought to explore the decisions of crowdfunders (backers) when dealing with new information in online crowdfunding markets. We also sought to examine possible gender and cultural effects. Therefore, we replicated a reward-based crowdfunding webpage and ran an economic experiment with 847 MTurk users from the United States and India (500 men; 347 women). (Subjects from the United States and India can easily receive the economic rewards through the MTurk platform, whereas subjects from other countries, such as those in the EU, cannot be paid in cash.)
Specifically, experimental subjects had to make choices in two situations. In the first one, the decision concerned the choice between two travel books, and in the second, the choice was between two cookery books. Table 1 and Table 2 provide more details of the experiment; the Supplementary Materials offer the screenshots and instructions.
This study experimentally tested subjects’ decisions in two scenarios (hereinafter treatments), following the method presented in the previous section.
As shown in Table 1, the Amazon Mechanical Turk (MTurk) economic experiment began by presenting Situation 1, which explores how information about money already pledged by early backers affects backers’ beliefs. Situation 2, presented in Table 2, in contrast, explores how information about other backers’ and experts’ opinions affects beliefs. Each situation was presented in two treatments. Additional information was revealed in Treatment 2.
Subjects started the experiment with an initial endowment of $60 each and were asked to contribute $15 to one of two projects aiming to publish a book in each treatment of each situation (Book A or Book B in Situation 1 and Book C or Book D in Situation 2). All projects had the same funding goal and deadline. A book was successful if 70% or more participants chose to finance that book. Subjects received a show-up fee of $0.50 plus a bonus of $0.15 per successful project chosen. The subjects from India received different payoffs in line with purchasing power parity. They received a show-up fee of $0.25, and the bonus per successful project was $0.07.
Treatment 1 of Situation 1 asked participants to contribute $15 to one of two travel book projects (Book A or Book B) based on the book cover. Later, in Treatment 2, information was released stating that Book A had already been financed by 35 backers (which meant 10% of backers needed for success, with 315 potential backers left) and that Book B had been financed by only four backers (which meant 1.14% of backers needed for success, with 345 potential backers left). Participants were asked to make their second choice and contribute another $15 to one of the two travel book projects (Book A or Book B).
As shown in Table 2, Situation 2, in contrast, explores how information about other backers’ and experts’ opinions affects beliefs. Similarly, Treatment 1 of Situation 2 asked participants to contribute $15 to one of the two projects (Book C or Book D) based on the book cover. Later, in Treatment 2, investors were shown three opinions per book. Book C had two negative comments from previous backers and one positive comment from an expert. Conversely, Book D had positive recommendations from two previous backers and a negative recommendation from one expert. It was also revealed that both projects had raised $450 from 30 backers. As contributions from early investors were identical for both books, the only difference lay in the opinions: peer opinion was expected to act as a proxy for other participants’ choices, as in Huang and Chen’s [13] analysis of buyer behavior in online product choice.
At the end of the experiment, subjects answered five demographic questions on education level, number of children, household income, employment status, and age. These responses were used as control variables. The experiment was launched in January 2019 through Amazon MTurk and was sent to 1000 subjects. These subjects had an approval rate of more than 95% from previous requesters, meaning that 95% of previous employers had been satisfied with workers’ performance. Of these subjects, 500 were located in the United States and 500 were located in India. The aim was to recruit 250 women and 250 men in each country. However, we could recruit only 97 women in India within the time limit. Thus, 847 subjects participated in the experiment: 250 women and 250 men from the United States and 97 women and 250 men from India. Most had high school diplomas or higher education (63.60% from the United States and 96.82% from India).
The experiment launched in the United States had two successful projects, Book A in Situation 1 Treatment 2, and Book D in Situation 2 Treatment 2. However, the replication in India had no successful projects. Similarly, Treatment 1 had no successful projects, given that no additional information was shown in this treatment. Thus, subjects did not overwhelmingly choose any one project. The average payment, including the show-up fee and bonus, was $0.697 in the United States and $0.25 in India. Subjects received no feedback until the end of the experiment, when all choices had been made.

3.6. Hypotheses

Our first hypothesis is that early contributions affect backers’ beliefs about the campaign’s probability of success, thereby increasing contributions to the campaign, that is,
Hypothesis 1 (H1).
There is herding behavior.
The second hypothesis is more elaborate: we contrast the hypothesis that the above herding is rational against the alternative that it is irrational.
Hypothesis 2 (H2).
Herding is rational.
Hypothesis 3 (H3).
Herding is irrational.
Finally, our last hypothesis is that positive opinions of peers, as shown nowadays in online social networks, are more important than experts’ comments in increasing campaign contributions.
Hypothesis 4 (H4).
Positive opinions of peers about a project are more important than those of experts.

4. Results

4.1. Descriptive Overview

Figure 1 illustrates the MTurk subjects’ choices of which project (book) to fund in each situation. As expected, under the theoretical model described earlier, subjects significantly changed their crowdfunding choices once new information had been released (from Treatment 1 to Treatment 2) in both situations. Information on early investors was released in Treatment 2 of Situation 1: Book A had already been financed by 10% of the backers needed to reach the funding goal, and Book B had already been financed by 1.14% of the backers needed. As shown in Figure 1, 62.7% of subjects chose to finance Book A after receiving this information. Before receiving this information, only 35.4% of subjects had chosen to finance Book A. Thus, subjects changed their beliefs about the probability of success of Book A and Book B and chose the project with the highest expected utility given this information. The difference of 8.86 percentage points in early backers (10% for Book A vs. 1.14% for Book B) acted as a proxy for project success. Thus, subjects chose Book A to fund a project with a higher probability of success.
Interestingly, in Situation 2, previous backers’ positive opinions acted as a proxy for the project’s probability of success. Book C had two negative comments from previous backers and one positive comment from an expert. Conversely, Book D had positive recommendations from two previous backers and a negative recommendation from one expert. As Figure 1 shows, subjects significantly reduced their choices of Book C (from 59.4% to 39.3%) and predominantly chose to finance Book D, which had positive peer comments (despite a negative expert review). This information changed subjects’ funding choices, as predicted by the theoretical model described earlier, because of a change in backers’ beliefs about the projects’ probability of success.
Table 3 and Table 4 present more detailed experimental results, breaking down subjects’ choices by country and gender. Table 3 provides a detailed overview of the results in Situation 1. The increase in funds pledged to Book A was significant among both male and female backers from the United States and among men from India. Information on early investors released in Treatment 2 had a significant effect on these three groups. The group of women from India also increased funding for Book A. However, this increase was not significant, probably because of the small number of subjects in this group (only 97 women from India versus 250 subjects in each of the other three groups). Panel B of Table 3 shows the subjects’ choices in Treatment 2, while Panel A shows the changes in choices. The expression “A/B” denotes subjects’ switching from funding Book A in Treatment 1 (without information) to funding Book B in Treatment 2 (with information), and “B/A” denotes subjects’ switching from funding Book B in Treatment 1 (without information) to funding Book A in Treatment 2 (with information). All groups (men from the United States, women from the United States, men from India, and women from India) primarily changed from funding Book B in Treatment 1 to funding Book A in Treatment 2, once information on early backers had been released. Therefore, no significant gender differences were present in this subject pool.
Table 4 presents detailed results for Situation 2. As shown in Panel B (subject’s choices in Treatment 2), the increase in funds pledged to Book D was significant among both male and female backers from the United States. However, men and women from India did not significantly increase their funding of Book D once information on peers’ and experts’ opinions had been released in Treatment 2. Panel A shows the changes in choices. The expression “C/D” denotes subjects’ switching from funding Book C in Treatment 1 (without information) to funding Book D in Treatment 2 (with information), and “D/C” denotes subjects’ switching from funding Book D in Treatment 1 (without information) to funding Book C in Treatment 2 (with information).
Whenever changes in choices were made, they were overwhelmingly in the direction of increasing funding for Book D in all groups (men from the United States, women from the United States, men from India, and women from India), once information on peers’ and experts’ recommendations had been released. Specifically, 101 men changed from funding Book C to funding Book D (only 29 men changed in the opposite direction), while 111 women changed from funding Book C to funding Book D (only 13 women changed in the opposite direction). Note that Book C had received two negative opinions from previous backers and one positive expert opinion, whereas Book D had positive recommendations from two previous backers and a negative recommendation from one expert. Again, no significant gender differences were found in this subject pool.
The results of our first econometric tests show that early contributions to a project increase successive contributions to the project, and that positive peer reviews of a project are more important than the negative opinions of experts on the same project. Therefore:
Result 1.
H1 is accepted: There is herding behavior by backers.
Result 2.
H4 is accepted: Positive peer reviews of a project are more important than the negative opinions of experts on the same project.
These findings are in line with the main empirical and randomized-experiment literature [19,20,37,38,39,40].

4.2. Analysis of the Aggregate Results

In this section we analyze and compare the choices made by backers in our experiment at t = 1 and at t = 2 and check whether our experimental results exhibit rational herding. This comparison also allows us to check the robustness of the results. We start with Situation 1 and first present the statistical analysis of the experimental data using the McNemar test, because the data are dichotomous variables. The test tells us whether there is a statistically significant change in the probability distribution of the choices.
The McNemar test is a non-parametric statistical test. It is applied to two dichotomous variables to test changes in answers using the chi-squared distribution with one degree of freedom. The purpose is to compare the change in the proportion distribution between two measurements of a dichotomous variable and determine whether this difference is not random. A value of p   <   0.05 provides sufficient evidence to reject the null hypothesis in favor of the alternative hypothesis that the marginal proportions are significantly different from each other. An interesting observation when interpreting the McNemar test is that the elements on the main diagonal do not contribute to decisions about whether the pre- or post-experimental condition is more favorable.
Table 5 shows the 2 × 2 table, together with the marginal probabilities, for the McNemar test for Situation 1. The McNemar test for Situation 1 gives a significance of p   <   0.0001 , for 847 valid cases. Because p   <   0.05 , the test provides sufficient evidence that new information about early backers released in Treatment 2 changes the distribution of subjects’ choices.
As shown in Table 5, in Treatment 1, 300 backers (35.4%) chose to finance Book A, and 547 (64.58%) chose to finance Book B. However, once information about early backers had been released in Treatment 2, the distribution of choices changed substantially: 531 backers (62.7%) chose to finance Book A, and only 316 (37.3%) chose to finance Book B. This change was due to the transfer of 280 backers (33.06%) from Book B to Book A. Those who formerly chose to finance Book A kept this choice in Treatment 2 (only 49 backers changed from Book A to Book B).
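The McNemar test can be reproduced from these counts (a sketch, not the authors' code; the 2 × 2 table is reconstructed from the reported switchers, 49 from A to B and 280 from B to A, with the diagonal cells derived from the Treatment 1 totals of 300 and 547):

from statsmodels.stats.contingency_tables import mcnemar

# Rows: choice in Treatment 1 (A, B); columns: choice in Treatment 2 (A, B)
table = [[251, 49],    # kept A, switched A -> B
         [280, 267]]   # switched B -> A, kept B
result = mcnemar(table, exact=False, correction=True)
print(result.statistic, result.pvalue)  # chi-squared of about 160.8, p < 0.0001, consistent with Table 5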

4.3. Rationalizing Backers’ Behavior

What is the reason for this change in the distribution of choices? We argue that, after the information about early backers had been released, backers updated their beliefs about the projects’ probability of success and maximized their expected utility given these new beliefs.
To model backers’ beliefs and the updating of these beliefs, we followed the theoretical framework described earlier, assuming that backers’ beliefs about the probability of success, p , of Book A followed a beta distribution.
To capture the no-information choice, we assumed that $\alpha = \beta = \tfrac{1}{2}$, so $p \sim \mathrm{Beta}(\tfrac{1}{2}, \tfrac{1}{2})$, with mean $\mu(p) = \tfrac{1}{2}$ (see Equation (2)) and variance $V(p) = \tfrac{1}{8}$ (see Equation (3)). With these beliefs, backers maximized their expected utility and chose a project, with 64.58% choosing Book B.
In Treatment 2, the subjects discovered that 35 backers (10% of the necessary backers) had already funded Book A, while only four backers (1.14% of the necessary backers) had funded Book B. This information revealed a difference of 8.86 percentage points in early backers. Note that the backers did not know the distribution of the initial choices. Therefore, the Bayesian updating of beliefs, $p$, by backers gave them a prior distribution for the new situation, $\hat{p} = p \mid \text{information}$, distributed as follows (see Equation (5)):
$\hat{p} = p \mid \text{information} \sim \mathrm{Beta}(35 + \tfrac{1}{2},\, 4 + \tfrac{1}{2})$
with mean $\mu(\hat{p}) = (35 + 1/2)/40 = 0.887$ and variance $V(\hat{p}) = 0.0024$.
Accordingly, in Treatment 2, backers assigned a mean probability of 88.7% to the success of Book A. With these beliefs, they maximized their expected utility and again chose a project, resulting in 62.7% of them choosing Book A.
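A quick numeric check of this information-based update (an illustrative sketch, not the authors' code):

from scipy.stats import beta

p_hat = beta(35 + 0.5, 4 + 0.5)      # Beta(35 + 1/2, 4 + 1/2)
print(p_hat.mean(), p_hat.var())     # approximately 0.887 and 0.0024, as reported above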
Analytically, the impact of the information on backers’ choices is explained by comparing the two posterior distributions of choices without information (Treatment 1) and with information (Treatment 2). A shift to the right would lead to a positive impact on the number of backers choosing project A.
In Treatment 1, the posterior distribution of the probability of success of Book A (taking the data from Table 5), $\tilde{p}$, is:
$\tilde{p} = p \mid \text{choices}_1 \sim \mathrm{Beta}(300 + \tfrac{1}{2},\, 547 + \tfrac{1}{2})$
with mean $\mu(\tilde{p}) = (300 + 1/2)/(847 + 1) = 0.354$ and variance $V(\tilde{p}) = 0.00027$.
This distribution is centered on 35.4% of backers choosing Book A. After the release of information about early backers (Treatment 2) by the crowdfunding platform, the new choices have the following posterior distribution of the probability of success of Book A (recall that the prior distribution to update is now $\hat{p}$, and that the data are again from Table 5):
$\tilde{\hat{p}} = p \mid \text{information}, \text{choices}_2 \sim \mathrm{Beta}(531 + 35 + \tfrac{1}{2},\, 316 + 4 + \tfrac{1}{2})$
with mean $\mu(\tilde{\hat{p}}) = (566 + 1/2)/(847 + 40) = 0.639$ and variance $V(\tilde{\hat{p}}) = 0.00025$.
This posterior distribution is now centered on 63.9% of backers choosing Book A once the information about early backers has been released.
Comparing the means of the two updated posterior distributions, we observe that $\mu(\tilde{\hat{p}}) = 0.639 > 0.354 = \mu(\tilde{p})$, that is, distribution $\tilde{\hat{p}}$ is shifted to the right relative to distribution $\tilde{p}$. Given that these distributions are updates of the distribution of the probability of success of project A, a shift to the right leads to a positive impact on the number of backers choosing project A. Furthermore, given that the two distribution variances are essentially the same, we can say that distribution $\tilde{\hat{p}}$ first-order stochastically dominates distribution $\tilde{p}$.
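This comparison can be reproduced numerically (a sketch using the counts from Table 5, as reported above; not the authors' code):

import numpy as np
from scipy.stats import beta

post_t1 = beta(300 + 0.5, 547 + 0.5)           # choices without information (Treatment 1)
post_t2 = beta(531 + 35 + 0.5, 316 + 4 + 0.5)  # choices after the information release (Treatment 2)
print(post_t1.mean(), post_t2.mean())          # approximately 0.354 vs. 0.639: a shift to the right

grid = np.linspace(0.0, 1.0, 1001)
print(bool(np.all(post_t2.cdf(grid) <= post_t1.cdf(grid) + 1e-9)))  # True: approximate first-order dominance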
This theoretical positive impact on the number of backers choosing project A is corroborated by the one statistically obtained in Table 5. Therefore, our theoretical model explains the impact of new information on (experimental) backers’ choices.
Thus, information on the difference of 8.86 percentage points in early backers in favor of Book A caused the former distribution to shift to the right, leading to a positive impact on the number of backers choosing Book A. This change was due to the updated beliefs on the probability of Book A’s success, which increased backers’ expected utility of choosing Book A over Book B. Therefore, the so-called herding behavior of crowdfunding backers can be rationalized by assuming rational backers who follow the Bayesian updating of beliefs and expected utility theory.
Rational Herding:
In order to accept the hypothesis of rational herding (H2) against that of irrational herding (H3), we have to establish when we should expect rational herding in crowdfunding campaigns. We depart from the distribution of initial choices of project A at t = 1, which is centered on 35.4% of backers choosing Book A. Rational herding is information-based, whereas irrational herding implies that backers simply mimic others when choosing a project.
Information-based actions need time to update and review choices. In general, it is not easy to precisely distinguish rational herding from irrational herding. In empirical papers one can analyze time series data and observe the evolution across time (for example, [48,49,50]), but with two periods we can only rely on the two observed distributions of choices and behavioral learning.
Therefore, our hypothesis for rational herding is that, after the information release, we expect an increase in the mean of the distribution of choices of between 25 and 50 percentage points. Roughly speaking, the hypothesis implies that the number of backers choosing Book A after the information release has to lie in the interval (513, 723). As an example, the lower bound would be reached if 80% of the backers who initially chose Book A (300) chose A again after the information release and 50% of the backers who initially chose Book B (547) later chose A instead. Similarly, the upper bound would be reached if, say, 95% of the backers who initially chose Book A chose A again and 80% of the backers who initially chose B changed to A. Therefore, if we observe fewer than 513 backers choosing A, or more than 723, after the information is received, we will conclude that herding is irrational.
The rationale for this hypothesis is that an increase of less than 25 percentage points would imply that backers do not pay attention to information signals, whereas an increase of more than 50 percentage points would mean that backers blindly follow the crowd. These increments in the mean of the distribution of backers choosing A give us the following 95% interval for the mean of A choices, Ih(95%):
Ih(95%) = (60.4%, 85.4%)
Now, consider the posterior distribution of choices at t = 2. The calculation of the 95% interval for the mean of A choices, I(95%), gives us:
I(95%) = (60.6%, 67%) ⊂ (60.4%, 85.4%) = Ih(95%)
As seen above, this interval is contained within the hypothesized interval (60.4%, 85.4%). Therefore, we claim that herding is (Bayesian) rational. Note, moreover, that the mean of the distribution in the McNemar test, 62.7%, also belongs to Ih(95%).
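Under the assumption that I(95%) is the central 95% interval of the Treatment 2 posterior, the containment check can be reproduced as follows (a sketch, not the authors' code):

from scipy.stats import beta

post_t2 = beta(566.5, 320.5)            # posterior after the information release and the t = 2 choices
low, high = post_t2.interval(0.95)      # central 95% interval
print(low, high)                        # approximately 0.607 and 0.670, i.e. roughly (60.6%, 67%)
print(0.604 <= low and high <= 0.854)   # True: contained in Ih(95%) = (60.4%, 85.4%)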
Result 3.
H2 is accepted: Herding is rational.
Result 4.
H3 is rejected: herding is not irrational.
Afterwards, we repeated the analysis for Situation 2. Here, the information released concerns the positive or negative opinions of peers and those of professionals. As above, we show first the statistical analysis of the data from the experiment, in Table 6.
Table 6 presents the results of the McNemar test for the choices in Situation 2. The McNemar test had a significance of p   <   0.0001 for 847 valid cases. Because p   <   0.05 , the test provides sufficient evidence that information on peer recommendations changes the distribution of choices.
As shown in Table 6, 503 backers in Treatment 1 (59.38%) chose to finance Book C, and 344 (40.62%) chose to finance Book D. However, once information about peers’ (and experts’) recommendations had been released in Treatment 2, the distribution of choices changed: only 333 (39.32%) backers chose to finance Book C, and 514 (60.68%) chose Book D. This change was due to the transfer of 212 backers (25.03%) from Book C to Book D. Those who formerly chose to finance Book D kept this choice in Treatment 2: only 42 backers, 4.96%, changed from Book D to Book C.
As in Situation 1, suppose that the backers’ beliefs, $p$, followed the prior distribution $p \sim \mathrm{Beta}(\tfrac{1}{2}, \tfrac{1}{2})$, with mean $\mu(p) = \tfrac{1}{2}$ and variance $V(p) = \tfrac{1}{8}$. With these beliefs, backers maximized their projects’ expected utility and chose one of them.
The posterior distribution of choices in Treatment 1 was $\tilde{p} = p \mid \text{choices}_1 \sim \mathrm{Beta}(503 + \tfrac{1}{2},\, 344 + \tfrac{1}{2})$, with mean $\mu(\tilde{p}) = 0.59$, that is, a distribution centered on 59% of backers choosing Book C, with variance $V(\tilde{p}) = 0.00028$. As in Situation 1, this distribution was not observed by the backers.
Later, in Treatment 2, information about peer opinion was released. Specifically, backers discovered that Book C had received two negative opinions from buyers (backers) and one positive opinion from an expert, while Book D had received two positive opinions from buyers and a negative opinion from an expert. Modeling how backers updated $p$ on the basis of this qualitative information would have been difficult, although backers presumably did update their beliefs in some way. We did not follow this approach; instead, we observed backers’ choices in Treatment 1 and Treatment 2 and compared their corresponding posterior distributions. The posterior distribution for choices in Treatment 2, where only 39% of the backers chose Book C, was:
$\tilde{\hat{p}} = p \mid \text{choices}_2 \sim \mathrm{Beta}(333 + \tfrac{1}{2},\, 514 + \tfrac{1}{2})$
with mean $\mu(\tilde{\hat{p}}) = 0.39$ and variance $V(\tilde{\hat{p}}) = 0.00028$.
Now, the comparison of the means of the two posterior distributions shows that $\mu(\tilde{\hat{p}}) = 0.39 < 0.59 = \mu(\tilde{p})$, that is, a shift to the left of distribution $\tilde{p}$, leading to a negative impact on the number of backers choosing project C. Thus, here, distribution $\tilde{p}$ first-order stochastically dominates distribution $\tilde{\hat{p}}$. This result coincides with the statistical analysis of Table 6.
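The same comparison can be reproduced for Situation 2 (a sketch using the counts from Table 6, as reported above; not the authors' code):

from scipy.stats import beta

post_t1 = beta(503 + 0.5, 344 + 0.5)    # choices of Book C before the opinions were shown
post_t2 = beta(333 + 0.5, 514 + 0.5)    # choices of Book C after the opinions were shown
print(post_t1.mean(), post_t2.mean())   # approximately 0.59 vs. 0.39: a shift to the left for Book C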
Given the shift to the left of the backers’ conditional prior distribution of Book C’s probability of success, the results show that the two positive opinions from buyers (backers or peers) outweighed the negative expert opinion. Book C, whose posterior probability of success fell, had received one positive expert opinion and two negative opinions from buyers. In other words, peer opinion (i.e., the opinions of buyers) was more important in updating backers’ beliefs about the projects’ probability of success and, hence, the distribution of choices.
The comparison of the McNemar test and the theoretical model allows us to check the robustness of the results. Additionally, it suggests that rational herding occurred in the experimental setting, as the experimental results are supported by the theoretical model. Revealing information influences backers’ beliefs regarding projects’ probability of success and, consequently, alters backers’ choices, following a model of rational herding:
Result 5.
The theoretical model and McNemar test support H4: positive peer reviews of a project are more important than the negative opinions of experts on the same project.

5. Discussion and Conclusions

Crowdfunding has attracted much attention in recent years as a fast-growing way of financing environmental and social sustainable initiatives. Given that most crowdfunding campaigns do not succeed in securing funds and that herding is widespread among crowdfunding backers, understanding the mechanisms behind herding is essential for designing management strategies that succeed in funding social sustainable enterprises.
Specifically, understanding the degree of rational herding in crowdfunding helps avoid underestimating herding by attributing it solely to irrational behavior and ignoring its powerful rational drivers. Rational herding requires observers to make unbiased inferences from the decisions they observe.
In order to overcome the limitations that empirical research faces regarding control and causality, we conducted an ad hoc controlled online economic experiment on MTurk in which 847 subjects were rewarded depending on their own decisions and those of others, with information and payoff externalities, as occurs in crowdfunding markets. MTurk allowed us to recruit a diverse subject pool, including residents of the United States and of India, and a more balanced proportion of men and women than the usual undergraduate population.
The experiment was designed to test and quantify the causal effects of revealing specific information to prospective backers: (i) the number of early contributors already financing the project and (ii) positive opinions of other backers versus those of experts.
The results of the controlled economic experiment showed a causal relationship between changes in the revealed information and the subjects’ observable decisions. We also developed a model that captures the effect of rational herding, which was observed in the experimental setting and explained by the model: (i) early contributions to the campaign affected backers’ beliefs about the probability of funding success, thereby influencing choices and boosting campaign contributions; a difference of 8.86 percentage points in early funding (10% vs. 1.14% of the required funds) was enough to act as a proxy for project success; and (ii) positive opinions of peers were more important than expert opinions in increasing campaign contributions, acting as a proxy for subsequent contributions. No significant gender differences were found in these decisions.
Thus, the experimental results were supported by the theoretical model on rational herding: revealing information influenced backers’ beliefs regarding projects’ probability of success and, consequently, altered backers’ choices.
Our study contributes to the literature by showing causal relationships and by explaining, through a model with strategic uncertainty, that rational belief updating can be a powerful driver of herding behavior. Changes in investors’ choices observed in crowdfunding markets may be due to an adjustment in rational beliefs about the campaign’s probability of success.
This research thus offers managers of innovative and environmental and social sustainable initiatives tools to succeed in crowdfunding: managers may increase backers’ beliefs about the projects’ probability of success by attracting early contributors and by securing positive opinions of other backers. For example, fundraisers might consider making an initial investment in their own projects, given the reluctance of crowdfunding backers to commit in the early days of a campaign, due to the high information asymmetry and uncertainty about the chances of reaching the funding goal.
A limitation of our model is that backers can choose only one of the two projects, not both. This configuration simplifies the experiment and allows us to isolate the decision making that we seek to analyze. Another limitation concerns the fixed contribution amounts and short time horizon, which prevent us from relating our results to other empirical findings, such as the U-shaped pattern of backers’ responses documented in some of the empirical literature.
Besides the implications this research offers, future research may try to overcome the limitations described above and verify our findings in other geographic contexts and with other subject pools. A more ambitious project would extend the model to several periods to test whether the U-shaped pattern of backers’ responses, found in some empirical papers, is corroborated.

Supplementary Materials

The following are available online at https://www.mdpi.com/2071-1050/12/23/9827/s1, Screenshots and Instructions of the Online Experiment.

Author Contributions

Conceptualization, I.C., E.M.-V., P.S.-P., and A.U.; methodology, I.C., P.S.-P., and A.U.; validation, I.C., P.S.-P., and A.U.; formal analysis, E.M.-V.; investigation, I.C., E.M.-V., P.S.-P., and A.U.; writing—original draft preparation, I.C., P.S.-P., and A.U.; writing—review and editing, I.C., P.S.-P., and A.U.; funding acquisition, A.U.; Supplementary Materials, I.C., E.M.-V., P.S.-P., and A.U. All authors have read and agreed to the published version of the manuscript.

Funding

Amparo Urbano and Irene Comeig acknowledge financial support from the Spanish Ministry of Economics and Competition and the European Feder Funds under project ECO2016-75575-R, the Spanish Ministry of Science, Innovation and Universities under project PID2019-110790RB-I00, and the “Generalitat Valenciana” under the Excellence Program PROMETEO 2019/095. The authors thank Iván Arribas for his statistical advice.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Book A’s choices (Situation 1) and Book C’s choices (Situation 2) in %, per treatment: without and with information. The total sample consists of 847 subjects (500 men and 347 women).
Table 1. Experimental design of Situation 1.

Situation 1 (Travel Books): testing the effect of information about early backers.

                                      Book A                     Book B
Treatment I (without information)     —                          —
Treatment II (with information)       $525 raised, 35 backers    $60 raised, 4 backers
Table 2. Experimental design of Situation 2.

Situation 2 (Cookery Books): testing the effect of peer and expert opinion.

                                      Book C                          Book D
Treatment I (without information)     —                               —
Treatment II (with information)       $425 raised, 30 backers,        $425 raised, 30 backers,
                                      2 negative peers’ reviews,      2 positive peers’ reviews,
                                      1 positive expert’s review      1 negative expert’s review
Table 3. Situation I. Testing the influence of early investment on investment decisions. Frequencies and p-values by gender and country.

Panel A. Change in subject choice between Treatments 1 and 2 (with added information). H0: A/B = B/A

Country                         Men                  Women                Men + Women
                                A/B 1     B/A        A/B       B/A        A/B       B/A
USA          Number             6         89         7         94         13        183
             %                  6.32      93.68      6.93      93.07      6.63      93.37
             Proportion test    p < 0.0001           p < 0.0001           p < 0.0001
India        Number             21        72         15        25         36        97
             %                  22.58     77.42      37.50     62.50      27.07     72.93
             Proportion test    p < 0.0001           p = 0.125            p < 0.0001
USA + India  Number             27        161        22        119        49        280
             %                  14.36     85.64      15.60     84.40      14.89     85.11
             Proportion test    p < 0.0001           p < 0.0001           p < 0.0001

Panel B. Subject choice in Treatment 2 (with added information). H0: A = B

Country                         Men                  Women                Men + Women
                                A         B          A         B          A         B
USA          Number             168       82         157       93         325       175
             %                  67.20     32.80      62.80     37.20      65.00     35.00
             Proportion test    p < 0.0001           p < 0.0001           p < 0.0001
India        Number             151       99         55        42         206       141
             %                  60.40     39.60      56.70     43.30      59.37     40.63
             Proportion test    p = 0.001            p = 0.191            p < 0.0001
USA + India  Number             319       181        212       135        531       316
             %                  63.80     36.20      61.10     38.90      62.69     37.31
             Proportion test    p < 0.0001           p < 0.0001           p < 0.0001
1 A/B denotes subjects’ switching from funding Book A in Treatment 1 to funding Book B in Treatment 2 (with added information), and so on. Treatment 2: Prior to subject choices, Book A had achieved 10% of the required funds, whereas Book B had achieved 1.14% of the required funds.
Table 4. Situation II. Testing the influence of peer and expert opinions on the investment decision. Frequencies and p-values by gender and country.

Panel A. Change in subject choice between Treatments 1 and 2 (with added information). H0: C/D = D/C

Country                         Men                  Women                Men + Women
                                C/D 2     D/C        C/D       D/C        C/D       D/C
USA          Number             52        3          83        4          135       7
             %                  94.55     5.45       95.40     4.60       95.07     4.93
             Proportion test    p < 0.0001           p = 0.017            p < 0.0001
India        Number             49        26         28        9          77        35
             %                  65.33     34.67      75.68     24.32      68.75     31.25
             Proportion test    p = 0.011            p = 0.005            p < 0.0001
USA + India  Number             101       29         111       13         212       42
             %                  77.69     22.31      89.52     10.48      83.46     16.54
             Proportion test    p < 0.0001           p < 0.0001           p < 0.0001

Panel B. Subject choice in Treatment 2 (with added information). H0: C = D

Country                         Men                  Women                Men + Women
                                C         D          C         D          C         D
USA          Number             86        164        83        167        169       331
             %                  34.40     65.60      33.20     66.80      33.80     66.20
             Proportion test    p < 0.0001           p < 0.0001           p < 0.0001
India        Number             121       129        43        54         164       183
             %                  48.40     51.60      44.33     55.67      47.26     52.74
             Proportion test    p = 0.613            p = 0.267            p = 0.315
USA + India  Number             207       293        126       221        333       514
             %                  41.40     58.60      36.31     63.69      39.32     60.68
             Proportion test    p < 0.0001           p < 0.0001           p < 0.0001
2 C/D denotes subjects’ switching from funding Book C to funding Book D in Treatment 2 (with added information), and so on. Treatment 2: Prior to subject choices, Book C was recommended by an expert and criticized by peers, whereas Book D was recommended by peers and criticized by an expert.
Table 5. Results of the McNemar test.

Situation 1. Rows: choice in Treatment 1; columns: choice in Treatment 2 (with added information).

                  A                 B                 Total
A                 251 (29.6%)       49 (5.8%)         300 (35.4%)
B                 280 (33.06%)      267 (31.52%)      547 (64.58%)
Total             531 (62.7%)       316 (37.3%)       847 (100%)
Table 6. Results of the McNemar test.

Situation 2. Rows: choice in Treatment 1; columns: choice in Treatment 2 (with added information).

                  C                 D                 Total
C                 291 (34.35%)      212 (25.03%)      503 (59.38%)
D                 42 (4.96%)        302 (35.66%)      344 (40.62%)
Total             333 (39.32%)      514 (60.68%)      847 (100%)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
