
Games 2019, 10(3), 28; https://doi.org/10.3390/g10030028

Article
Is Your Privacy for Sale? An Experiment on the Willingness to Reveal Sensitive Information
1 Institute of Management and Economics, Clausthal University of Technology, Julius-Albert-Str. 2, 38678 Clausthal-Zellerfeld, Germany
2 Institute of Economics, University of Kassel, Nora-Platiel-Str. 4, 34127 Kassel, Germany
3 School of Business and Economics, University of Marburg, Am Plan 1, 35032 Marburg, Germany
* Authors to whom correspondence should be addressed.
Received: 6 May 2019 / Accepted: 2 July 2019 / Published: 5 July 2019

Abstract

We investigate whether individuals’ self-stated privacy behavior is correlated with their reservation price for the disclosure of personal and potentially sensitive information. Our incentivized experiment has a unique setting: Information about choices with real implications could be immediately disclosed to an audience of fellow first semester students. Although we find a positive correlation between respondents’ willingness to accept (WTA) disclosure of their private information and their stated privacy behavior for some models, this correlation disappears when we change the specification of the privacy index. Independent of the privacy index chosen, we find that the WTA is significantly influenced by individual responses to personal questions, as well as by different decisions to donate actual money, indicating that the willingness to protect private information depends on the delicacy of the information at stake.
Keywords:
privacy; personal data; social disapproval; WTA experiment; donation experiment

1. Introduction

Privacy is a fundamental human right; but is yours for sale? This is a question that consumers, especially of seemingly ‘free’ online services, face more and more frequently. Oftentimes, people state that their privacy is highly important to them, yet in their actual behavior, e.g., when using social networks or mobile tracking devices, they disclose private information without further ado. This intention-action gap between stated preferences and actual behavior, commonly referred to as the ‘privacy paradox’, has intrigued researchers from different scientific fields since the early 2000s (see Reference [1] for an overview). Dealing with the issue of privacy is generally difficult: Privacy is subjective, idiosyncratic and intertemporal; its value varies between different people and different information, and it may change over time [2]. This heterogeneity might be one reason why the empirical evidence on the existence of the privacy paradox is mixed so far: Some of the literature supports its existence (e.g., References [3,4,5,6]), while other studies provide evidence challenging it (e.g., References [7,8,9]).
Economic experiments attempting to measure the monetary valuation of privacy do not aim to examine the privacy paradox directly. In the current literature, there is no standardized method of measuring the monetary value of privacy, and, hence, the degree of heterogeneity in findings is quite substantial. As pointed out in References [10,11], the mixed evidence on privacy valuation can be explained by three fundamental issues within the study setting: Incentives, saliency, and transparency. First, not all experimental studies were conducted in incentivized settings [12]. Second, some studies emphasized the topic of privacy throughout the experiment, while others made privacy issues less salient. Finally, whereas the uses of the data collected in some studies were clearly mentioned and transparent [10,13], in other studies, this information was kept ambiguous.
Another explanation for the diverse results in the privacy literature is the nature of the private information potentially to be revealed. Some studies focus on the reservation price (respondents’ willingness to accept, WTA) for the disclosure of individual sensitive information, e.g., contact data [13], private address information, or Facebook data [10], whereas other studies focus on the willingness to reveal one’s bad deeds [12].1 In the experiment presented in this paper, we elicit both aforementioned types of information and additionally investigate whether people discriminate between their valuations for keeping “good” and “bad” decisions private. Moreover, while some studies correlate the desirability of a trait with the WTA for the information disclosure (see Reference [14]), to our knowledge there are as yet no monetarily incentivized studies investigating information disclosure regarding morally loaded actual behavior (e.g., donating to or taking from an organization). When asking for reservation prices for the disclosure of data from, e.g., social networks or online shopping decisions, it is unclear how the actual content of the data potentially to be revealed differs between individuals. For example, it seems reasonable to assume that people on average would ask a higher reservation price for the disclosure of their data if there are many photos from a beach party on their Facebook timeline than if there are just a couple of photos from a hiking trip. Studies in which Facebook data or online shopping decisions can be revealed undoubtedly have a high degree of relevance to reality. However, it is difficult to control for content differences when analyzing such data. Therefore, this paper contributes to and expands the existing literature by using an experimental design that monetarily quantifies the value that is individually assigned to the information that can potentially be revealed.
In the present study, we examine whether the privacy paradox holds when individuals decide about revealing potentially sensitive information about actual behavior. In addition to that, our study considers the incentive, saliency and transparency issues raised by Reference [10]. We conducted an experiment with first semester students during their university introduction week. First, we collected information regarding the subjects’ online privacy behavior, as well as their general attitude towards privacy concerns. Afterwards, we collected some private information from the participants, before they were given the opportunity to donate or take from three different organizations (henceforth DOT decisions). By including different types of organizations, we made sure that the DOT decisions had varying degrees of moral loading, potentially inducing image concerns in case they would be publicly revealed in front of the audience of fellow first semester students. In the final step of the experiment, we employed the Becker-DeGroot-Marschak mechanism (henceforth BDM; [15]) to elicit both the subjects’ minimum WTA to have their private information revealed (henceforth WTAdetails), as well as their minimum WTA to have their DOT decisions revealed (henceforth WTADOT).
We find that self-stated privacy behavior is positively and significantly correlated with both WTAdetails and WTADOT when we use an additive index for privacy. We no longer observe a significant correlation when we calculate privacy indices based on factor analysis. Hence, we are not able to find conclusive evidence for the privacy paradox. We additionally detect that there is no significant difference between the WTAdetails and WTADOT. As expected, we find significant differences between DOT decisions for different organizations.
The remainder of this paper is organized as follows. Section 2 reviews relevant literature; Section 3 describes the experimental design, hypotheses and procedures; Section 4 presents the results and a robustness check; and Section 5 concludes.

2. Related Literature

Today, privacy is a more important topic than ever. Overall advances in information technology and the widespread distribution of the internet allow companies to collect vast amounts of user data, which in turn are combined and analyzed to derive valuable insights about markets, as well as individual consumers’ preferences. In this global digital environment, where personal data is increasingly used as a kind of currency to pay for otherwise ‘free’ online services, more and more questions are being raised regarding the monetary valuation of personal data and privacy. As [2] point out, both the protection and the sharing of personal information induce costs and benefits for individuals, as well as for society as a whole.
Our paper adds to the literature on the willingness to pay for privacy. Most closely related to our setting is the experiment by Reference [17]. They used the BDM to evaluate the willingness to share individuals’ private information with a varying share of other students. By collecting information about the participants’ address and body dimensions, they found that an increasing number of potential information receivers decreases the willingness to share one’s personal data. The social distance between the subjects only mattered for women with regard to their body dimensions. Whether the information to be disclosed was verified only had an effect on men. Another closely related study is Reference [10], who used the BDM to evaluate the willingness to share individuals’ personal data with a telecommunications company. They varied the potential information which could be transmitted to the telecommunications company and found that individuals have a higher willingness to sell contact data, compared to data from their Facebook account. Another finding is that a share of 10 to 20% of subjects generally refuse to sell their data, whereas a comparable share is willing to sell their data for a very small amount.
Various studies have tried to investigate people’s monetary valuation of keeping personal information private. In one of the earliest empirical contributions to this issue, Reference [14] conducted a series of experimental auctions. They used information about their participants’ weight and age and asked them about the minimum amount they would be willing to accept in order to make that information public. Their findings reveal that the value of privacy depends on the type of information and that the weight information is valued more highly if it is perceived as embarrassing. While Reference [14] used auctions, other studies employed different techniques to elicit the valuation of privacy. These include choice menus [8,18,19], the already mentioned BDM [10,17], open questions [20] or the indication of a fixed amount of money [10,13]. While in some empirical studies [10,17] personal data of participants could be disclosed, in other studies [21,22] the data to be disclosed were endogenously assigned to the participants in the experiment. Furthermore, various studies in the field of mechanism design [23,24,25] address the issue of adequate privacy compensation on the internet.
Other experimental studies have tried to tackle the ‘privacy paradox’ directly. Reference [3] compared self-reported privacy preferences with actual behavior during online shopping and found that people tend to reveal a discrepancy between their stated privacy preferences and actual behavior, thereby providing evidence in favor of the ‘privacy paradox’. This is in line with the findings of [19], who conducted a field experiment in which individuals could either purchase DVDs from an online store where they had to reveal their monthly income and their date of birth, or from an online store where they had to reveal their favorite color and their year of birth. The shops differed in that DVDs in the first online store were one euro cheaper. They found that significantly more individuals purchased the cheaper DVDs, although they stated high privacy concerns after the experiment, thereby revealing behavior in line with the ‘privacy paradox’. Contrasting evidence is provided by Reference [18], who tested the willingness to pay additional money for smartphone apps if the app did not reveal private information to third parties. They find that if the default app exhibits a high level of privacy, individuals tend to buy a high-level privacy setting with a higher likelihood. In addition, they find that individuals who stated a high concern for privacy before the experiment are more likely to purchase the app with a high privacy level. Reference [26] reviews the literature investigating the privacy paradox and attributes the diversity in research results to the heterogeneity in research methods and contexts, as well as to different conceptualizations of the ‘privacy paradox’, while generally arguing for revealed (rather than stated) preference methods.
Potential explanations for exhibiting behavior in line with the ‘privacy paradox’ are consumer convenience (e.g., a smartphone application that works better by sharing the location information), preferences for saving time (e.g., users who do not spend sufficient time looking for safer options) or to have better access to social networks (e.g., Facebook) [27].

3. Method

3.1. Experimental Design

The experiment consisted of six stages. In the instructions (see Supplementary Materials), we informed participants about the rules of the experiment and that three participants would be randomly selected to receive payments of at least €100 each at the end of the experiment. We also asked the participants to answer all questions truthfully and according to their personal opinion. In the second stage, participants answered a short questionnaire about their attitudes regarding privacy and their personal behavior related to privacy and data security on the internet. The questionnaire contained seven items which had to be answered on a five-point Likert scale. As far as we know, no adequate index for online privacy behavior exists yet. We, therefore, created a provisional index with only a few items on real-life privacy behavior that should theoretically cover the concept. In order to avoid measurement artifacts through acquiescence, we rescaled three of the items so that stronger agreement yields a lower index value.
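The additive index can be sketched as follows. The item names and the choice of which three items are reverse-coded are hypothetical, since the paper does not list them here; only the construction (average of five 1–5 Likert items, with reversed items mapped so that stronger agreement lowers the score) follows the description above.

```python
# Hypothetical assumption: items a2, a4, a5 are the three reverse-coded ones.
REVERSED = {"a2", "a4", "a5"}

def privacy_index(answers: dict) -> float:
    """Average of the five behavior items after reverse-coding.

    answers: mapping from item name to a 1-5 Likert response.
    """
    rescaled = []
    for item, value in answers.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{item} outside the 1-5 Likert range")
        # Reverse-coding maps 1<->5, 2<->4; 3 stays 3.
        rescaled.append(6 - value if item in REVERSED else value)
    return sum(rescaled) / len(rescaled)

print(privacy_index({"a1": 5, "a2": 1, "a3": 4, "a4": 2, "a5": 1}))  # 4.6
```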
During the third stage, we asked the participants three questions on ‘personal details’ which were not related to each other. We tried to elicit information which could potentially be uncomfortable to reveal in the presence of others. At the same time, we did not want to make the content of the answers so unpleasant that subjects would possibly not sell at any price. The questions were: (1) ‘Would you like to lose weight?’ (henceforth, weight question); (2) ‘Do you smoke?’ (henceforth, smoke question); and (3) ‘What was the average final grade of your last school leaving certificate (e.g., Abitur)?’ (henceforth, grade question). In this stage, as well as in all other relevant stages of the experiment, we made it clear that participants with invalid or missing answers would lose their chance of getting paid. In stage 2 (privacy questionnaire), as well as in stage 3 (personal details), we tried to select questions that would seem somewhat familiar to a majority of our participants. At the same time, we only asked a limited number of questions in order to keep the experiment simple and concise and to ensure the attention of our participants.
The fourth stage contained the DOT decisions. The purpose of this stage was to collect data on monetary allocation decisions with a moral component. First, we informed the participants that they were endowed with an amount of €150 and that three organizations were endowed with amounts of €50 each. Afterwards, the subjects could choose to donate to or to take from each organization any amount between €0 and €50, although only one decision would be realized in the end. Participants could also explicitly indicate that they neither wanted to donate nor take at all. We used the strategy method [28] and asked the participants to make allocation decisions when matched to three different organizations: The KSV Hessen Kassel (Kasseler Sport-Verein; a local soccer club), the ILGA (International Lesbian, Gay, Bisexual, Trans and Intersex Association), and the UNHCR (United Nations Refugee Agency). Undoubtedly, endowing all individuals with an amount of €150 might affect their DOT decisions. However, since all participants were endowed with the same amount, this would only result in a level effect, whereas differences in the DOT decisions can be attributed to participants’ individual characteristics.
As the results of Reference [29] show, donations in a dictator game are predominantly zero as soon as the subjects have a “take” option in addition to a “donate” option. Hence, we used different organizations with different goals in order to increase the chance that there would be a variation in the DOT decisions between the participants. Reference [10] point out that it is unclear how and if students will handle the information made public about other students. Since in our experiment, the participants were first semester students, we assumed that there would be particularly strong image concerns involved, because of the importance of making a good first impression.
In the fifth stage, the participants were introduced to the possibility of selling information about their previous statements and decisions (stages three and four). We, therefore, used the BDM, which is a standard incentive compatible method for eliciting participants’ WTA. We offered to buy from subjects the right to read out two pieces of information publicly, eliciting the minimum amount that we would have to pay them to reveal (a) their answers in the “personal details” section (WTAdetails) and (b) their DOT decisions (WTADOT) in front of the audience. We determined our maximum offer for each piece of information by randomly choosing one of three envelopes containing possible prices. If our maximum offer for a piece of information exceeded the valuation stated by the participant, then she (or he) would be paid the amount that was written on the sheet inside the envelope, and her information would be read out publicly, while she had to stand up to guarantee that all participants could see her. To ensure that the participants understood the BDM mechanism, we demonstrated it beforehand by illustrating an exemplary transaction. In the instructions, we clearly stated that entering WTAs above €100 would result in the non-disclosure of the information in question. Thus, similar to References [17] and [10], the random draw of the BDM in our experiment had an upper bound. Reference [30] find that specifying an upper bound when using the BDM can have an anchoring effect on the decisions of the subjects. However, as [10] point out, this anchoring can work in both directions. Thus, we decided to specify an upper bound for WTAs, since otherwise, it might be difficult for subjects to state reasonable sales prices for the requested pieces of information. As [23] point out, the indication of a reservation price for the disclosure of private information already constitutes a disclosure of private information. Applied to our experiment, this implies that participants could state a higher WTA so as not to appear to place little value on their privacy.
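A minimal sketch of this disclosure stage, assuming a uniform draw over the envelope prices (the actual envelope prices are not specified here and are hypothetical). The key incentive property is visible in the code: the stated WTA only determines whether the sale happens, never the price received, so truthful reporting is optimal.

```python
import random

def bdm_round(stated_wta, envelope_prices, rng=None):
    """One BDM disclosure decision.

    Draws a maximum offer from the envelope prices; if it meets or exceeds
    the stated WTA (and the WTA respects the EUR 100 bound), the information
    is disclosed and the participant is paid the drawn offer, not the WTA.
    """
    rng = rng or random.Random()
    offer = rng.choice(envelope_prices)
    disclosed = offer >= stated_wta and stated_wta <= 100  # WTAs above 100 never sell
    payment = offer if disclosed else 0.0
    return disclosed, payment

# Degenerate one-envelope examples make both branches deterministic:
print(bdm_round(40, [60]))   # (True, 60): offer beats WTA, paid the offer
print(bdm_round(40, [20]))   # (False, 0.0): offer below WTA, no disclosure
print(bdm_round(101, [100])) # (False, 0.0): WTA above the bound never sells
```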
The final stage consisted of a brief questionnaire for getting feedback from the experiment and collecting basic demographic information.

3.2. Hypotheses

We used our experimental design to test three main hypotheses. Our first hypothesis follows the findings of [26], that there is mixed evidence about the existence of a privacy paradox. Thus, we test whether the paradox holds in our setting, which would be reflected by a higher index of self-stated privacy behavior not affecting the WTAdetails and WTADOT decisions. Thus, our first main hypothesis is:
Hypothesis 1 (H1).
The index of self-stated privacy behavior is not positively correlated with the WTAdetails and WTADOT.
For the next hypothesis, we focus on the answers in the ‘personal details’ section. Regarding the weight-question, we expect a positive answer to lead to an increase in WTAdetails. The possibility that the stated desire to lose weight will be read out publicly is likely to create an unpleasant feeling among many participants. Regarding the smoke-question, again, we expect a positive response to lead to an increase in WTAdetails, since there is a social norm against smoking. For the grade-question, we also expect that the indication of a poor final school grade will lead to an increase in WTAdetails. Therefore, our next hypothesis is:
Hypothesis 2 (H2).
Stating the desire to lose weight, being a smoker and indicating a poor high school grade are positively correlated with WTAdetails.
Our final hypothesis is related to the participants’ DOT decisions. Participants were already endowed with a sizeable monetary amount at the start of the DOT decision section. This would have made a take-decision at least appear greedy or even brash, leading to an incentive to hide such decisions. Thus, we expect that taking will lead to an increase in WTADOT. Conversely, giving an additional amount to one of the organizations would have most likely been regarded as generous or even as exemplary, so people showing such behavior would have had less incentive to hide such decisions. Thus, we assume that donating to the organizations would generally induce a decrease in WTADOT. The corresponding hypothesis is:
Hypothesis 3 (H3).
Taking (giving) from (to) the individual organizations leads to an increase (decrease) in WTADOT.

3.3. Experimental Procedure

The experiment took place at the University of Kassel during the introductory week for new students (October 2018). We ran two sessions of approximately 30 min with pen and paper, and in each session, only three randomly chosen participants received a real payoff. In total, 105 students (44.76% female, 54.29% male, 0.95% not specified) participated in the experiment. In the first session, we had 61 freshmen from the Economics and Engineering Bachelor Program and in the second session 44 from Politics, Sociology or History with a minor in economics. The average age of the participants was 20.7 years, and 71.84% were younger than 22. Due to incomplete questionnaires, we had to drop six observations; in total, we thus have 99 observations for our analysis. All participants got a small gift (a bike saddle cover or a pen) for participating in the study. The average payoff per person was €8.94. Based on the results of the experiment, we donated €0.00 to the KSV, €165.00 to the ILGA and €200.00 to the UNHCR. As stated in the instructions, we published the donated amounts on the university homepage of Björn Frank.2 In order to prevent participants from inferring the decisions of the winners, we added a small additional amount of money to the individual donations.
To preserve anonymity, we gave each participant a card with an identification number and instructions about when and where to collect their respective payment. We distributed sheets of paper with instructions for the first three stages. Once everybody had filled out the respective questions of these stages, we collected the completed forms. After that, we explained and demonstrated the BDM. We then distributed instructions for the remaining stages. After collecting the completed questionnaires, one of the students randomly drew the two maximum offers for the right to reveal personal details and the DOT decision. Afterwards, the three participants who received the actual payment and the organizations that would match each of the selected participants were randomly drawn. We only revealed the information of the participants whose reservation prices were lower than the selected prices. In session one, we revealed the personal details of one winner. In session two, we revealed the personal details, as well as the DOT decisions of one winner.

4. Results

4.1. Descriptive Statistics

Average responses for questions indicating individuals’ privacy preferences are reported in Table 1. We distinguish between self-stated privacy behavior (questions a1–a5) and stated privacy attitudes (questions a6–a7). Since we find low variation for privacy attitudes, we constructed an index for privacy concerns consisting of an average of all five privacy behavior questions (a1–a5).
The mean amount which was donated/taken was €−3.64 to KSV, €16.62 to ILGA and €34.80 to UNHCR. All means are significantly different from each other (all p-values < 0.01) and we, therefore, see strong signs of discrimination between the three organizations by our participants. As can be seen in Figure 1, the distribution of amounts donated/taken varied strongly between the three organizations. Among our participants a share of 38% decided to take, and 32% decided to neither take nor donate to the KSV; 15% decided to take, and 22% decided to neither take nor donate to the ILGA; and 5% decided to take, and 10% decided to neither take nor donate to the UNHCR.
Over all the organizations, a share of 45.45% (88.89%) of our participants decided to take (donate) from (to) at least one organization, and 10.10% (66.67%) of our participants decided to take (donate) money from (to) at least two organizations. The mean amounts for WTADOT were never significantly different within these groups (all p-values > 0.1). In Section 4.2, we conduct a more detailed analysis of amounts donated/taken to/from the individual organizations.
Figure 2 shows distributions for WTAdetails and WTADOT. The average WTAdetails was €48.17, and the average WTADOT was €50.45. Eleven individuals (10.89%) chose a WTAdetails that clearly indicates their preference to exclude the possibility that their personal details would be revealed, whereas 13 (12.87%) indicated the same for their WTADOT. It was also clear to participants that any stated value above €100 would definitely not lead to a disclosure of their personal details or their DOT decisions in the experiment. Since we did not want to lose these observations and we did not want outliers to bias our results substantially, we substituted all prices above €100 with €101 (in the following, we also conducted tobit regressions). A Wilcoxon matched-pairs signed-ranks test revealed that there are no significant differences between the distributions of WTAdetails and WTADOT (p = 0.300).3
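The top-coding step described above can be sketched directly: every stated WTA above the €100 disclosure bound is replaced with €101, keeping the observation while capping its influence on the estimates.

```python
def top_code(wtas, bound=100, replacement=101):
    """Replace stated WTAs above the disclosure bound with a fixed cap."""
    return [replacement if w > bound else w for w in wtas]

# Values at or below the bound pass through; everything above becomes 101.
print(top_code([5, 48, 100, 250, 999]))  # [5, 48, 100, 101, 101]
```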

4.2. Main Regression Results

Table 2 shows the results of OLS regressions using robust standard errors where the dependent variable is either WTAdetails (Models 1–3) or WTADOT (Models 4–6). The privacy variable in Table 2 is constructed from the additive index of all privacy behavior questions (a1–a5) described in Section 4.1. The results of these OLS regressions do not support H1, since we find that an increase of the index for privacy behavior by one unit on average results in an increase of WTAdetails between ~€11.33 and ~€12.07, and an increase of WTADOT between ~€9.73 and ~€9.78. This effect is largely robust across different model specifications, although the significance and effect size drop slightly, and the effect is no longer significant in Model (6). Additionally, we find only partial support for H2. Stating the desire to lose weight increases the corresponding WTAdetails roughly between ~€13.60 and ~€16.20. However, we find no support for our hypothesis that any information besides the desire to lose weight influences WTAdetails: Neither stating that one smokes nor indicating a poor high school grade significantly influences WTAdetails. As already pointed out in the descriptive statistics section, we find significantly differing donation behavior among the different organizations. The overall donation is highest for the UNHCR, followed by the ILGA and then the KSV, and the donation decisions are significantly different from each other (Wilcoxon matched-pairs test, p < 0.01). Furthermore, for Model (5) in Table 2, we observe that decreasing the donated amount to the ILGA by one euro increases the minimum WTADOT by €0.26. So, we find evidence for H3 only concerning transfers to/from the ILGA, but not for the other two organizations.

4.3. Robustness

Since our dependent variable for our main regression had lower (0) and upper (100) bounds, we also checked whether our results hold in a censored tobit regression, where we find that effect sizes and significances increase slightly (Table A1). Instead of using an additive index for privacy concerns, we also conducted factor analysis and tested our hypotheses again using indices which contained different factor loadings of all questions regarding privacy (a1–a7, Cronbach’s alpha = 0.54; Table A2). We additionally double-checked by only using the index that provided us with the highest Cronbach’s alpha (a1, a3, a6, a7; alpha = 0.64; Table A3) to see whether our results hold across different specifications for privacy concerns. Although both indices constructed with factor analysis do not meet the critical value to be considered acceptable measures of a common underlying concept, we observe that all four variable specifications correlate reasonably well with each other. Thus, we are quite confident that we actually measure the same underlying concept (Table A4). Our results change substantially if we use different variable specifications for privacy concerns: We no longer observe significant correlations between our indices and the WTAs, and, therefore, our results no longer speak against the existence of a privacy paradox in our setting (Table A5 and Table A6). Additionally, we find that the largest part of our result from the first regression using the additive index was driven by question a5, which asked participants whether they cover their webcam (Table A7). This result is not surprising, as the question of webcam coverage is more likely than the other questions to be answered with a very high or very low value on the Likert scale. The corresponding factor loading in the index using factor analysis is relatively low, such that the question regarding the covering of the webcam has less weight than other questions (Table A2).
Therefore, we conclude that our results from the additive index should be interpreted carefully, and a more precise index is needed to measure individual preferences for privacy on the internet. In Table A1, Table A5 and Table A6 (Appendix A), we still find partial support for H2 and H3. For these model specifications, stating the desire to lose weight increases the minimum WTAdetails roughly between ~€15.05 and ~€19.21 and decreasing the donated amount to the ILGA by one euro increases the minimum WTADOT between €0.32 and €0.39.
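Cronbach's alpha, used above to assess the indices, follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A self-contained sketch with made-up responses (not the study's data):

```python
def cronbach_alpha(items):
    """Internal consistency of an item battery.

    items: list of k per-item response lists, all of the same length n
    (one list per questionnaire item, one entry per respondent).
    """
    k = len(items)

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent = sum across items.
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(sample_var(it) for it in items) / sample_var(totals))

# Two perfectly correlated items yield the maximum, alpha = 1.0.
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```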

5. Conclusions

We conducted an experiment to investigate whether the privacy paradox is prevalent in an analogue classroom setting. We evaluate the WTA bids to disclose private information and DOT decisions using the BDM and relate the bids to privacy concerns, elicited via survey items. We find that most subjects are willing to forego considerable potential earnings in order to protect both private details and DOT decisions. Unlike previous studies [10,13], data disclosure in our experiment has no real life equivalent. However, we think that the form of data disclosure we have chosen has resulted in participants having more similar expectations about how their data will be handled than if the data had been transferred to a physically more distant third party. In addition, the information disclosed in our study was only announced orally on one occasion and was not disclosed in written form as in References [10,17]. Therefore, the results of these studies and our results are only comparable to a limited extent. The prices for the willingness to reveal the personal details and the DOT decisions are significantly correlated with the stated privacy behavior when we use an additive index for privacy. However, this result is mainly driven by the individuals’ stated decision to cover up the webcam of their laptop and should, therefore, not be overinterpreted. We find that delicate information, like stating the desire to lose weight and the DOT regarding the ILGA, significantly affects the willingness to reveal the information. A possible explanation for this behavior is that individuals try to avoid shameful exposure. Especially since our subjects were first semester students, they might have a particular interest in hiding possibly embarrassing behavior in front of their new peers. However, we do not find conclusive and robust evidence for behavior which is not in line with the privacy paradox.
Although we observe that individuals who state a higher preference for privacy in the questionnaire also demand a higher price for revealing their private information, this result largely depends on the specification of our measure for privacy concerns. Future research would benefit from refined, comprehensive indices measuring individual preferences for privacy on the internet.

Supplementary Materials

The following are available online at https://www.mdpi.com/2073-4336/10/3/28/s1, S1: Experimental instructions.

Author Contributions

Conceptualization, methodology, investigation, data curation, writing—original draft preparation, J.C., B.F., L.K., S.K., N.L., D.M., M.M.-L., N.T.D., M.N. and C.R.; formal analysis and visualization, J.C., L.K., D.M., M.N. and C.R.; writing—review and editing, J.C., B.F., L.K., D.M., M.N. and C.R.; project administration, J.C. and B.F.; supervision and funding acquisition, B.F.

Funding

This research was funded by the University of Kassel (Germany).

Acknowledgments

We thank Matthias Greiff, the participants of the Clausthaler Ökonomisches Oberseminar (Clausthal University of Technology) and the participants of the Colloquium Recht & Ökonomie (University of Kassel).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Tobit regressions of WTA on self-stated privacy behavior (additive a1–a5) and information at stake.

| | (1) WTAdetails | (2) WTAdetails | (3) WTAdetails | (4) WTADOT | (5) WTADOT | (6) WTADOT |
|---|---|---|---|---|---|---|
| privacy | 12.38 ** (5.80) | 11.48 ** (5.74) | 11.76 ** (5.89) | 12.15 * (6.78) | 12.39 * (6.69) | 10.19 (6.82) |
| weight | | 15.05 * (8.41) | 17.83 ** (8.74) | | | |
| smoke | | 7.98 (10.28) | 7.28 (10.37) | | | |
| grade | | −2.81 (7.13) | −2.27 (7.19) | | | |
| KSV | | | | | 0.13 (0.14) | 0.14 (0.14) |
| ILGA | | | | | −0.34 ** (0.17) | −0.32 * (0.18) |
| UNHCR | | | | | 0.06 (0.20) | 0.07 (0.20) |
| age | | | −0.57 (1.77) | | | 3.55 * (2.11) |
| female | | | −4.45 (8.97) | | | 7.81 (10.26) |
| session | 10.99 (8.41) | 12.44 (8.73) | 11.36 (9.43) | 19.91 ** (9.72) | 17.20 * (9.75) | 17.93 * (10.52) |
| Constant | 10.08 (16.95) | 11.32 (26.36) | 22.83 (44.55) | 8.26 (19.81) | 13.68 (20.17) | −57.99 (46.31) |
| sigma | 38.95 *** (3.28) | 38.06 *** (3.20) | 38.09 *** (3.25) | 44.66 *** (3.92) | 43.64 *** (3.83) | 43.27 *** (3.85) |
| Observations | 94 | 94 | 92 | 94 | 94 | 92 |
| pseudo R² | 0.01 | 0.01 | 0.01 | 0.01 | 0.02 | 0.02 |

Robust standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01. Out of 99 individuals, we lose five in the first four model specifications since some participants chose “I don’t know/don’t want to answer” for some of the privacy questions. Additionally, we lose two more observations in Models (3) and (6) because two participants did not report their age.
Table A2. Factor loadings using all privacy questions.

| Variable | Factor loading | Uniqueness |
|---|---|---|
| a1 | 0.49 | 0.76 |
| a2 | 0.26 | 0.93 |
| a3 | 0.34 | 0.88 |
| a4 | 0.53 | 0.72 |
| a5 | 0.12 | 0.98 |
| a6 | 0.67 | 0.55 |
| a7 | 0.58 | 0.66 |
| eigenvalue | 1.51 | |
| alpha | 0.54 | |
Table A3. Factor loadings using the factor with the highest alpha.

| Variable | Factor loading | Uniqueness |
|---|---|---|
| a1 | 0.49 | 0.76 |
| a4 | 0.51 | 0.74 |
| a6 | 0.65 | 0.58 |
| a7 | 0.58 | 0.66 |
| eigenvalue | 1.25 | |
| alpha | 0.65 | |
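The alpha statistics reported in Tables A2 and A3 are Cronbach's alpha, a standard internal-consistency measure for multi-item scales. A minimal numpy sketch of the statistic (not the authors' code) is:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

Perfectly correlated items yield an alpha of 1, while mutually independent items yield an alpha near 0, which is why dropping weakly loading items (as in Table A3) can raise alpha.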
Table A4. Correlation between different privacy concern specifications.

| Indices | Factor analysis (a1–a7) | Factor analysis (a1, a3, a6, a7) | Additive (a1–a7) | Additive (a1–a5) |
|---|---|---|---|---|
| factor analysis (a1–a7) | 1.00 | | | |
| factor analysis (a1, a3, a6, a7) | 0.98 *** | 1.00 | | |
| additive (a1–a7) | 0.89 *** | 0.81 *** | 1.00 | |
| additive (a1–a5) | 0.73 *** | 0.62 *** | 0.95 *** | 1.00 |

* p < 0.10, ** p < 0.05, *** p < 0.01.
Table A5. Tobit regressions of WTA on self-stated privacy behavior (factor analysis a1–a7) and information at stake.

| | (1) WTAdetails | (2) WTAdetails | (3) WTAdetails | (4) WTADOT | (5) WTADOT | (6) WTADOT |
|---|---|---|---|---|---|---|
| privacy | 0.96 (23.03) | 4.70 (22.27) | 5.02 (22.56) | 6.51 (24.97) | 8.79 (24.96) | 1.92 (24.68) |
| weight | | 16.94 * (8.71) | 19.21 ** (9.30) | | | |
| smoke | | 7.74 (10.66) | 7.29 (10.84) | | | |
| grade | | −4.44 (7.04) | −3.80 (7.11) | | | |
| KSV | | | | | 0.11 (0.14) | 0.11 (0.14) |
| ILGA | | | | | −0.35 ** (0.17) | −0.32 * (0.18) |
| UNHCR | | | | | 0.09 (0.23) | 0.09 (0.23) |
| age | | | −0.33 (1.74) | | | 3.85 * (1.96) |
| female | | | −1.65 (8.96) | | | 10.69 (9.61) |
| session | 12.79 (8.47) | 13.51 (8.64) | 13.55 (9.16) | 21.42 ** (9.48) | 18.38 * (9.62) | 19.98 * (10.39) |
| Constant | 43.01 *** (13.20) | 43.61 * (24.00) | 48.75 (40.67) | 37.81 ** (14.83) | 42.13 *** (15.93) | −40.17 (40.34) |
| sigma | 40.17 *** (2.98) | 39.09 *** (3.04) | 39.14 *** (3.14) | 45.48 *** (3.25) | 44.47 *** (3.28) | 43.83 *** (3.30) |
| Observations | 94 | 94 | 92 | 94 | 94 | 92 |
| pseudo R² | 0.00 | 0.01 | 0.01 | 0.01 | 0.01 | 0.02 |

Robust standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01. Out of 99 individuals, we lose five in the first four model specifications since some participants chose “I don’t know/don’t want to answer” for some of the privacy questions. Additionally, we lose two more observations in Models (3) and (6) because two participants did not report their age.
Table A6. Tobit regressions of WTA on self-stated privacy behavior (factor analysis a1, a3, a6, a7) and information at stake.

| | (1) WTAdetails | (2) WTAdetails | (3) WTAdetails | (4) WTADOT | (5) WTADOT | (6) WTADOT |
|---|---|---|---|---|---|---|
| privacy | −5.82 (22.91) | −1.52 (22.35) | −1.34 (22.67) | 5.12 (24.18) | 5.65 (24.03) | −0.90 (23.77) |
| weight | | 17.03 ** (8.37) | 19.10 ** (8.97) | | | |
| smoke | | 5.40 (10.62) | 5.04 (10.80) | | | |
| grade | | −4.16 (6.71) | −3.58 (6.86) | | | |
| KSV | | | | | 0.14 (0.14) | 0.14 (0.14) |
| ILGA | | | | | −0.39 ** (0.17) | −0.36 ** (0.18) |
| UNHCR | | | | | 0.09 (0.22) | 0.10 (0.22) |
| age | | | −0.41 (1.71) | | | 3.61 * (1.90) |
| female | | | −1.13 (8.82) | | | 12.42 (9.23) |
| session | 17.04 ** (8.19) | 17.65 ** (8.45) | 17.99 ** (9.02) | 23.35 ** (9.11) | 20.22 ** (9.23) | 22.45 ** (9.98) |
| Constant | 44.13 *** (14.17) | 43.65 * (23.75) | 50.18 (40.27) | 37.92 ** (15.46) | 43.43 *** (16.18) | −35.20 (39.76) |
| sigma | 40.22 *** (2.88) | 39.17 *** (2.93) | 39.22 *** (3.02) | 45.04 *** (3.11) | 43.82 *** (3.13) | 43.10 *** (3.16) |
| Observations | 99 | 99 | 97 | 99 | 99 | 97 |
| pseudo R² | 0.00 | 0.01 | 0.01 | 0.01 | 0.01 | 0.02 |

Robust standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01. We lose two observations in Models (3) and (6) because two participants did not report their age.
Table A7. OLS regression of WTA on privacy behavior.

| | (1) WTAdetails | (2) WTADOT |
|---|---|---|
| a1 | −0.91 (3.58) | −0.70 (3.54) |
| a2 | 0.22 (3.22) | −0.47 (3.39) |
| a3 | 2.71 (2.93) | 0.03 (3.02) |
| a4 | −1.39 (3.70) | 1.32 (3.69) |
| a5 | 8.38 *** (1.94) | 7.18 *** (2.13) |
| Constant | 25.95 * (15.47) | 29.28 * (17.28) |
| Observations | 94 | 94 |
| F | 4.28 | 2.35 |
| R² | 0.19 | 0.12 |

* p < 0.10, ** p < 0.05, *** p < 0.01.

References

  1. Acquisti, A.; Brandimarte, L.; Loewenstein, G. Privacy and human behavior in the age of information. Science 2015, 347, 509–514. [Google Scholar] [CrossRef] [PubMed]
  2. Acquisti, A.; Taylor, C.; Wagman, L. The economics of privacy. J. Econ. Lit. 2016, 54, 442–492. [Google Scholar] [CrossRef]
  3. Spiekermann, S.; Grossklags, J.; Berendt, B. E-privacy in 2nd generation E-commerce. In Proceedings of the 3rd ACM Conference on Electronic Commerce, Tampa, FL, USA, 14–17 October 2001. [Google Scholar]
  4. Acquisti, A.; Gross, R. Imagined communities: Awareness, information sharing, and privacy on the Facebook. In International Workshop on Privacy Enhancing Technologies; Springer: Berlin, Heidelberg, 2006; pp. 36–58. [Google Scholar]
  5. Acquisti, A.; Grossklags, J. Privacy and rationality in individual decision making. IEEE Secur. Priv. 2005, 3, 26–33. [Google Scholar] [CrossRef]
  6. Taddicken, M. The ‘Privacy Paradox’ in the Social Web: The Impact of Privacy Concerns, Individual Characteristics, and the Perceived Social Relevance on Different Forms of Self-Disclosure. J Comput. Mediat. Comm. 2014, 19, 248–273. [Google Scholar] [CrossRef]
  7. Son, J.-Y.; Kim, S.S. Internet Users’ Information Privacy-Protective Responses: A Taxonomy and a Nomological Model. MIS Quart. 2008, 32, 503–529. [Google Scholar] [CrossRef]
  8. Egelman, S.; Felt, A.P.; Wagner, D. Choice architecture and smartphone privacy: There’s a price for that. In The Economics of Information Security and Privacy; Springer: Berlin, Heidelberg, 2013; pp. 211–236. [Google Scholar]
  9. Blank, G.; Bolsover, G.; Dubois, E. A new privacy paradox: Young people and privacy on social network sites. In Proceedings of the Annual Meeting of the American Sociological Association, San Francisco, CA, USA, 17 August 2014. [Google Scholar]
  10. Benndorf, V.; Normann, H.-T. The willingness to sell personal data. Scand. J. Econ. 2018, 120, 1260–1278. [Google Scholar] [CrossRef]
  11. Tsai, J.Y.; Egelman, S.; Cranor, L.; Acquisti, A. The Effect of Online Privacy Information on Purchasing Behavior: An Experimental Study. Inf. Syst. Res. 2011, 22, 254–268. [Google Scholar] [CrossRef]
  12. John, L.K.; Acquisti, A.; Loewenstein, G. Strangers on a Plane: Context-Dependent Willingness to Divulge Sensitive Information. J. Consum. Res. 2011, 37, 858–873. [Google Scholar] [CrossRef]
  13. Plesch, J.; Wolff, I. Personal-Data Disclosure in a Field Experiment: Evidence on Explicit Prices, Political Attitudes, and Privacy Preferences. Games 2018, 9, 24. [Google Scholar] [CrossRef]
  14. Huberman, B.A.; Adar, E.; Fine, L.R. Valuating Privacy. IEEE Secur. Priv. 2005, 3, 22–25. [Google Scholar] [CrossRef]
  15. Becker, G.M.; DeGroot, M.H.; Marschak, J. Measuring utility by a single-response sequential method. Behav. Sci. 1964, 9, 226–232. [Google Scholar] [CrossRef]
  16. Chavanne, D. Generalized trust, need for cognitive closure, and the perceived acceptability of personal data collection. Games 2018, 9, 18. [Google Scholar] [CrossRef]
  17. Schudy, S.; Utikal, V. ‘You must not know about me’: On the willingness to share personal data. J. Econ. Behav. Organ. 2017, 141, 1–13. [Google Scholar] [CrossRef]
  18. Dogruel, L.; Joeckel, S.; Vitak, J. The valuation of privacy premium features for smartphone apps: The influence of defaults and expert recommendations. Comput. Human Behav. 2017, 77, 230–239. [Google Scholar] [CrossRef]
  19. Beresford, A.R.; Kübler, D.; Preibusch, S. Unwillingness to pay for privacy: A field experiment. Econ. Lett. 2012, 117, 25–27. [Google Scholar] [CrossRef]
  20. Fuller, C.S. Is the market for digital privacy a failure? Public Choice 2019, 180, 353–381. [Google Scholar] [CrossRef]
  21. Benndorf, V. Voluntary disclosure of private information and unraveling in the market for lemons: An experiment. Games 2018, 9, 23. [Google Scholar] [CrossRef]
  22. Schudy, S.; Utikal, V. Does imperfect data privacy stop people from collecting personal data? Games 2018, 9, 14. [Google Scholar] [CrossRef]
  23. Ghosh, A.; Roth, A. Selling privacy at auction. Games Econ. Behav. 2015, 91, 334–346. [Google Scholar] [CrossRef]
  24. Jin, H.; Su, L.; Xiao, H.; Nahrstedt, K. Incentive mechanism for privacy-aware data aggregation in mobile crowd sensing systems. IEEE/ACM Trans. Netw. 2018, 26, 2019–2032. [Google Scholar] [CrossRef]
  25. Niu, C.; Zheng, Z.; Wu, F.; Tang, S.; Gao, X.; Chen, G. Unlocking the Value of Privacy: Trading Aggregate Statistics over Private Correlated Data. ACM 2018, 2031–2040. [Google Scholar] [CrossRef]
  26. Kokolakis, S. Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon. Comput. Secur. 2017, 64, 122–134. [Google Scholar] [CrossRef]
  27. Hann, I.-H.; Hui, K.-L.; Lee, S.-Y.T.; Png, I.P.L. Overcoming Online Information Privacy Concerns: An Information-Processing Theory Approach. J. Manag. Inf. Syst. 2007, 24, 13–42. [Google Scholar] [CrossRef]
  28. Selten, R. Die Strategiemethode zur Erforschung des eingeschränkt rationalen Verhaltens im Rahmen eines Oligopolexperiments. In Beiträge zur Experimentellen Wirtschaftsforschung; Sauermann, H., Ed.; JCB Mohr (Paul Siebeck): Tübingen, Germany, 1967; pp. 136–168. [Google Scholar]
  29. List, J. On the Interpretation of Giving in Dictator Games. J. Polit. Econ. 2007, 115, 482–493. [Google Scholar] [CrossRef]
  30. Bohm, P.; Lindén, J.; Sonnegård, J. Eliciting reservation prices: Becker-DeGroot-Marschak mechanisms vs. markets. Econ. J. 1997, 107, 1079–1089. [Google Scholar] [CrossRef]
1. In Reference [12], subjects fill out surveys with questions about their sexual behavior, among other things, and can provide their email address if they want to receive an evaluation. The study finds that a different survey header design influences the contents of the answers and the willingness to provide one’s own email address. These results are consistent with a study by Reference [16], whose findings also suggest that the willingness to disclose personal data is context-specific.
2.
3. If we only look at the WTAs below €101, the average values are even closer to each other (WTAdetails = €41.84; WTADOT = €42.43; p = 0.380).
Figure 1. Distribution of DOT decisions for each organization.
Figure 2. Distribution of willingness to accept (WTA) bids across personal details and DOT decisions.
Table 1. Average scores for privacy preference indicators.

| Question | Mean (sd) |
|---|---|
| a1. Some websites use special tools to identify returning users and to provide them with personalized information (such as advertisements). If a website offered you such a service, would you agree? 1,4 | 3.53 (1.17) |
| a2. How often do you use the private mode (incognito mode) of your web browser? 2 | 2.42 (1.15) |
| a3. Do you use your real name on social networks (like Facebook)? 2,4 | 2.13 (1.23) |
| a4. Do you allow apps on your smartphone, tablet or laptop to determine your location and record it if necessary? 2,4 | 3.03 (1.09) |
| a5. Do you cover up your laptop’s webcam? 2 | 2.83 (1.76) |
| a6. How important is privacy to you in general? 3 | 3.69 (0.72) |
| a7. How important is the security of your personal data to you? 3 | 4.01 (0.96) |
1 We used a five-point Likert scale where 1 means “in no case”, 2 “rather not”, 3 “I don’t know”, 4 “rather yes”, 5 “in any case”. 2 We used a five-point Likert scale where 1 means “never”, 2 “rarely”, 3 “sometimes”, 4 “often”, 5 “always”. 3 We used a five-point Likert scale where 1 means “unimportant”, 2 “rather unimportant”, 3 “important”, 4 “rather important”, 5 “very important”. 4 Since our goal was to indicate preferences for privacy, we recoded the outcomes of these questions in the opposite direction to have all questions on a common scale.
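Following footnote 4 of Table 1, the items marked 4 (a1, a3 and a4) are recoded in the opposite direction before aggregation. A minimal sketch of the additive index over a1–a5 follows; whether the authors sum or average the coded items is not stated here, so averaging is an assumption:

```python
# Items recoded in the opposite direction per footnote 4 of Table 1.
REVERSED = {"a1", "a3", "a4"}

def additive_privacy_index(resp, items=("a1", "a2", "a3", "a4", "a5")):
    """Mean of the (re)coded 1-5 Likert responses; higher = more privacy-minded."""
    coded = [6 - resp[i] if i in REVERSED else resp[i] for i in items]
    return sum(coded) / len(coded)
```

For example, a respondent who always accepts tracking (a1 = 5), never uses incognito mode (a2 = 1), uses a real name and allows location access (a3 = a4 = 5) and never covers the webcam (a5 = 1) receives the minimum index value of 1.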
Table 2. OLS regressions of WTA on self-stated privacy behavior (additive index of a1–a5) and information at stake.

| | (1) WTAdetails | (2) WTAdetails | (3) WTAdetails | (4) WTADOT | (5) WTADOT | (6) WTADOT |
|---|---|---|---|---|---|---|
| privacy | 12.07 ** (4.98) | 11.33 ** (4.89) | 11.56 ** (4.95) | 9.73 * (5.84) | 9.78 * (5.68) | 7.99 (5.87) |
| weight | | 13.63 * (7.46) | 16.17 ** (7.89) | | | |
| smoke | | 7.24 (8.81) | 6.57 (8.93) | | | |
| grade | | −2.13 (6.06) | −1.53 (6.07) | | | |
| KSV | | | | | 0.06 (0.11) | 0.07 (0.11) |
| ILGA | | | | | −0.26 ** (0.13) | −0.23 (0.14) |
| UNHCR | | | | | 0.03 (0.17) | 0.03 (0.17) |
| age | | | −0.47 (1.53) | | | 2.59 * (1.41) |
| female | | | −4.03 (7.74) | | | 6.11 (7.92) |
| session1 | 8.08 (7.21) | 9.61 (7.61) | 8.66 (8.19) | 13.71 * (7.64) | 11.13 (7.94) | 11.98 (8.54) |
| Constant | 9.91 (15.46) | 9.71 (24.03) | 19.08 (34.86) | 14.41 (17.56) | 19.49 (18.53) | −32.87 (30.67) |
| Observations | 94 | 94 | 92 | 94 | 94 | 92 |
| F | 3.55 | 2.41 | 1.88 | 2.84 | 2.22 | 2.70 |
| R² | 0.08 | 0.12 | 0.13 | 0.07 | 0.11 | 0.13 |

Robust standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01. Out of 99 individuals, we lose five in the first four model specifications since some participants chose “I don’t know/don’t want to answer” for some of the privacy questions. Additionally, we lose two more observations in Models (3) and (6) because two participants did not report their age. KSV, Kasseler Sport-Verein Hessen Kassel (a local soccer club); ILGA, International Lesbian, Gay, Bisexual, Trans and Intersex Association; UNHCR, United Nations Refugee Agency.
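The regression tables above report heteroskedasticity-robust standard errors in parentheses. As an illustrative sketch, a common way to compute them for OLS is the White "sandwich" estimator with HC1 small-sample scaling (whether the paper uses HC1 or another variant is an assumption on our part):

```python
import numpy as np

def ols_robust(X, y):
    """OLS point estimates with heteroskedasticity-robust (HC1) standard errors."""
    n, k = X.shape
    xtx_inv = np.linalg.inv(X.T @ X)
    beta = xtx_inv @ X.T @ y
    resid = y - X @ beta
    # sandwich: bread * meat * bread, with n/(n-k) small-sample correction
    meat = X.T @ (X * (resid ** 2)[:, None])
    cov = n / (n - k) * xtx_inv @ meat @ xtx_inv
    return beta, np.sqrt(np.diag(cov))
```

Under homoskedastic errors these robust standard errors are close to the classical ones; they diverge when residual variance depends on the regressors, which is common with bid data bunched at round numbers.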

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).