Article

Ransomware Splash Screens, Loss Aversion and Trust: Insights from Behavioral Economics

1 School of Accounting, Finance and Economics, De Montfort University, Leicester LE1 9BH, UK
2 Independent Researcher, UK
3 Economics and Management School, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
J. Cybersecur. Priv. 2025, 5(3), 69; https://doi.org/10.3390/jcp5030069
Submission received: 21 July 2025 / Revised: 25 August 2025 / Accepted: 29 August 2025 / Published: 5 September 2025
(This article belongs to the Section Security Engineering & Applications)

Abstract

Ransomware is a fast-evolving form of cybercrime in which a ransom is demanded to restore access to a victim’s encrypted files. The criminals’ business model relies on victims being willing to pay the ransom demand. In this paper we use insights from behavioural economics to examine how the framing of a ransom demand may influence willingness to pay the ransom. We then report the results of an experiment in which subjects (n = 93) were shown eight different ransom demand splash screens, based on well-known examples of ransomware. The subjects were asked to rate and rank the ransom demands on six criteria that included willingness to pay and willingness to trust the criminals. This allows a within-subject comparison of different ransom demand frames. We find that trust is the main determinant of willingness to pay. We also find that positive framing is likely to increase willingness to pay compared to negative framing.

1. Introduction

Ransomware (or more formally crypto-ransomware) is the branch of malware that, after infecting a computer, encrypts and deletes original data files before demanding a ransom to recover access to the files [1,2,3,4,5]. While early variants of ransomware were typically amenable to reverse engineering [6,7], we have now seen many variants that are cryptographically robust, meaning that the files cannot be recovered without the private keys held by the criminals [8]. Indeed, robust ransomware is now available as a service (RaaS) for use by malicious actors who have little technical knowledge [9]. This means that ransomware provides a relatively simple business model for criminals to earn significant illicit gains [10].
Cryptolocker was one of the first, if not the first, to implement ransomware in a technically sound way [11]. Cryptolocker was distributed using non-targeted phishing attacks and most victims were individuals. The ransom demand was typically in the region of USD 200–400. The precise number of Cryptolocker victims and proportion of victims who paid ransoms are unknown. There is, however, evidence that enough people paid the ransom to generate a large amount of money for the criminals. Conservative estimates put the amount of ransom at USD 300,000 [12] and USD 1,000,000 [13]. Cryptolocker was shut down in 2014, but this has proved to be the beginning rather than the end of the ransomware story. Other large-scale attacks have followed, and new families such as CryptoWall, TorLocker, Fusob, Cerber, TeslaCrypt, Ryuk, etc., have emerged [1]. In recent years, ransomware has become increasingly targeted at organisations and wealthy individuals [14,15,16]. The potential, however, for non-targeted phishing attacks that are primarily aimed at individuals remains a very live threat.
Given the threat of ransomware, and its negative impact on society, there is a pressing policy interest in analysing the ransomware business model of criminals in order to identify its weak points and pre-empt future attacks. This entails, among other things, understanding the factors that can influence a victim’s willingness to pay the ransom. This willingness to pay is influenced not only by objective factors, such as whether the victim had backups and/or the files were important to them, but also psychological and emotional factors, such as whether the victim experiences anger and/or is willing to engage with criminals. Extensive evidence from psychology and behavioural economics shows that financial decisions can be influenced by the way choices are presented or framed [17,18]. In this paper we explore whether victims’ willingness to pay a ransom is influenced by the way the ransom demand is framed by the criminals. The first point of contact between the criminal and victim comes when the ransomware reveals itself and makes the ransom demand. Typically, this comes in the form of a pop-up window, or similar, stating that files have been encrypted and a ransom must be paid. This pop-up window may only be the start of the process. For instance, modern ransomware strains typically come with a functioning customer service to ‘help’ the victim [19,20]. However, the framing of the initial ransom demand is a critical first stage in the relationship between the criminal and victim. Comparing across ransomware strains, it is readily apparent that there is a huge variation in the look and feel of ransom demands [21].
In exploring whether victims’ willingness to pay a ransom is influenced by the framing of the ransom demand, we apply insights from behavioural economics. A key theme in behavioural economics is that gain–loss framing matters, in the sense that an individual’s choice between options may depend on whether the options are presented as gains or losses. For example, Tversky and Kahneman [17] showed that people’s judgement between health interventions was dramatically influenced by whether the choice was framed in terms of ‘people will be saved’ or ‘people will die’. Similarly, choices between gambles are influenced by whether they are framed in terms of gaining money or losing money [22]. Gain–loss framing has been shown to influence online security behaviour, with loss-framed messages leading to more secure behaviour [23]. As we explain in Section 2, a ransom demand can use a gain frame, focusing on regaining access to files, or use a loss frame, focusing on the loss of files. Moreover, it is notable that some ransom demands seen in the wild focus on regaining access to files while others focus on loss of files. Therefore, differences in gain–loss framing are observed across ransomware strains. We present a simple theoretical model with which to analyse the potential impact of ransom framing on willingness to pay and demonstrate that there are non-trivial trade-offs when designing the ransomware frame. For example, there is a trade-off between a frame that is helpful and supportive (which, ceteris paribus, increases willingness to pay) and one that emphasizes the threat of losing the files (which, ceteris paribus, also increases willingness to pay).
We subsequently report the results of an experiment in which participants ( n = 93 ) were exposed to eight ransomware splash screens closely modelled on ransomware splash screens seen in the wild. Participants were asked to rate and rank each splash screen across six different measures, including willingness to pay, trust the criminals will provide a decryption key, and how positive they would feel about paying the ransom. We find that willingness to pay systematically depends on the way ransom demands are framed. The key mediating factors are trust and positivity. Therefore, overall, we find that willingness to pay is highest for positively framed ransom demands, e.g., CryptoWall, that contain helpful information and instil trust in the victim that they will recover their files. More negative and threatening demands, e.g., CryptoLocker, can only be effective if the victim has trust in the criminals to return access to the files.
Our findings can be of interest to researchers, policymakers and other stakeholders in better understanding the victim’s response to a ransomware attack. This can have two key benefits: (i) It allows pre-emption of future developments in ransomware. In particular, ransomware criminals appear adept at developing their strategy in order to maximize their financial gain. It can therefore be expected that the framing of ransom demands will evolve towards frames that increase willingness to pay. (ii) It allows more targeted advice to victims on how to respond to a ransomware attack, particularly if the aim is to delay and ultimately stop a ransom payment. For instance, we find there can be benefits in undermining a victim’s trust in the criminals.

Related Literature

There is a large body of work looking at the economics and game theory of ransomware (e.g., [24,25,26,27,28]). The majority of studies focus on rational, expected-utility maximizing behaviour (e.g., [27,29]). A ransomware attack is, however, a psychologically impactful event of victimization where behavioural factors are likely to be important [30,31]. There is a lack of theoretical work looking at the victim’s decision from a behavioural perspective. McIntyre and Frank [32] provide one exception, arguing that victim decision-making when faced with a ransomware attack is influenced by loss aversion. Similarly, Cartwright et al. [33] argue that loss aversion may result in a greater willingness to pay a ransom to recover files. Our study tests the impact of gain–loss framing directly by exposing participants to different frames and evaluating their responses.
The closest work to ours in the prior literature is that of Yilmaz et al. [34] (see also [35]), who also present an experimental study on the framing of ransomware splash screens. They exposed participants to three different ransomware splash screens, one text based, one graphical user interface (GUI) and one GUI with a count-down timer, and elicited a range of information on willingness to pay and actions the victim would take. They found no statistically significant impact of the splash screen on payment rate, reporting rate and adoption of good cybersecurity behaviour. We find a highly significant impact of the splash screen on willingness to pay and trust the criminals. In combination, these results suggest that ‘words matter more than graphics’. In particular, we systematically vary the wording of the ransom demand and find a significant effect consistent with our behavioural economic model of gain–loss framing. By contrast, Yilmaz et al. [34] varied the graphical interface, while keeping the wording fixed, and found no effect.
There are several studies which have looked at gain versus loss framing in terms of cybersecurity behaviour. For instance, Rodriguez-Priego et al. [23] report an experiment designed to mimic online shopping. Participants were given either a gain-framed message that focused on the positive benefits of secure behaviour or a loss-framed message that focused on the negative consequence of insecure behaviour. They found that a loss-framed message led to more secure behaviour. Similarly, Sharma et al. [36] report an experiment in which participants had to decide whether to download new software. Different participants were exposed to differently framed messages, including a loss and gain frame. The authors found no influence of framing on behaviour. As a final example, Plachkinova and Menard [37] report an experiment in which participants were shown either a gain- or loss-framed video and then asked to evaluate their concerns with Internet-of-things devices in their home. They found that participants with low initial concerns were more influenced by a loss-framed message while those with high initial concerns were influenced equally by a loss- and gain-framed message. These studies illustrate that framing can influence cybersecurity behaviour, but the effect is highly dependent on context. In our setting, we are interested in how victims can be influenced by criminals while the cyberthreat communication literature has focused on how individuals can be influenced by benevolent actors.
We proceed as follows. In Section 2, we introduce a theoretical model to analyse gain–loss framing in ransom demands. In Section 3, we derive testable hypotheses on how gain–loss framing can influence willingness to pay. In Section 4, we discuss our experiment design, and in Section 5, we provide our main results. Section 6 concludes; additional details on the experiment are contained in the Appendix.

2. Gain–Loss Framing and Ransom Demands

In this section we introduce a simple model to capture the channels through which a ransomware demand splash screen could potentially influence a victim’s willingness to pay a ransom. Various studies have analysed game theoretic models of ransomware (e.g., [24,25,26,27,28,29,38,39,40]). The focus of the prior literature has been on strategic factors that may influence willingness to pay, such as the presence of backups or insurance and the potential bargaining power of the two parties (e.g., [10,41,42,43]). Our model takes a complementary approach in focusing on what we believe are likely to be the most important behavioural factors a criminal can use to influence the framing of the ransom demand. In doing so, we take as given the value an individual puts on their files and their relative bargaining power.
A key behavioural factor we explore is the distinction between a gain and loss frame. Abundant evidence from economics and psychology suggests that individuals are loss averse, meaning that an individual experiences a loss more severely than they would enjoy an equally sized gain [44,45,46,47,48]. This means that individuals have incentives to avoid losses and make choices that minimize the chances of loss. Crucially, losses and gains must be judged relative to a reference level and this reference level can be influenced by the framing of choice [49,50]. More specifically, a decision context can be framed in a way that a particular outcome is perceived as a loss, or reframed so that the same outcome is perceived as a gain [17]. The reframing of gains or losses has been widely shown to influence choice across a range of settings (e.g., [51,52,53]) including online security [23].
Here, we consider the implications for ransomware. In Figure 1, we illustrate how a ransomware demand could be framed to emphasize the gain or loss of files. The reference point in the gain frame is that the victim has internalized the loss of files. Thus, to recover access to the files would be perceived as a gain. The wording in Figure 1 from a splash screen highlights how the victim can recover their files by paying a ransom. By contrast, in a loss frame, we take as the reference point that the victim has not internalized the loss of access to files. In this case, to not recover access to the files would be perceived as a loss. Again, the wording in Figure 1 from a splash screen emphasizes loss. We shall discuss in Section 3 specific examples of ransom demands in the wild that have framing similar to both these gain and loss examples.
To formalise the gain–loss framing distinction, we introduce some notation. We take as given a value function v that maps outcomes, relative to the reference point, into an associated payoff. It is assumed that v(0) = 0 and that v(x) is strictly increasing in x. Hence, positive outcomes are associated with positive payoff and negative outcomes with negative payoff. Consider a gain frame in which the victim has internalized the loss of files. We distinguish three possible outcomes along with the associated payoffs, summarised in Figure 1.
  • The victim does not pay the ransom and so stays without access to their files. Denote this NP for not pay. Without loss of generality, we set the payoff from NP at 0, denoted π_G(NP) = 0.
  • The victim pays the ransom and recovers access to their files. Denote this PR for pay and recover. In evaluating the payoff, we need to consider the value of the files, denoted W, as well as the ransom paid, denoted R. We also take into account that the victim may experience disutility on moral and ethical grounds from having paid a ransom to criminals. Denote this cost by E. The net payoff is then given by π_G(PR) = v(W - R - E).
  • The victim pays the ransom and does not recover their files. Denote this PN for pay and not recover. In this case, the victim again pays the ransom R and experiences disutility on moral grounds. In addition, we also take into account additional disutility from ‘anger’ at the criminals not honouring their promise to return the files. Denote this cost by A. The net payoff is then given by π_G(PN) = v(-R - E - A).
The expected payoff from paying the ransom depends on the perceived probability of regaining access to the files. Denote this by p ∈ [0, 1]. The expected payoff from paying the ransom is given by:
π_G(Pay) = p·v(W - R - E) + (1 - p)·v(-R - E - A).  (1)
The victim will optimally pay the ransom if π_G(Pay) > 0 or, equivalently, p·v(W - R - E) > -(1 - p)·v(-R - E - A). In other words, the victim will optimally pay if the expected gain from recovering the files exceeds the expected loss from not recovering the files. The decision to pay a ransom is likely to have a large stochastic element that represents the vagaries of the situation, which are not modelled here. Even so, we can unambiguously say that the victim’s willingness to pay (for a fixed ransom demand R) will be increasing in p and W and decreasing in E and A. For more on the cost–benefit trade-offs of ransom payment, see Connolly and Borrion [30].
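These comparative statics are easy to verify numerically. The sketch below is an illustration only, not code from the study; the piecewise-linear value function and all parameter values are assumed for the example.

```python
# Illustrative sketch: the gain-frame expected payoff
# pi_G(Pay) = p*v(W - R - E) + (1 - p)*v(-R - E - A),
# using an assumed piecewise-linear, loss-averse value function.
def v(x, lam=2.0):
    """Value function: linear, with losses weighted by the factor lam."""
    return x if x >= 0 else lam * x

def pi_gain(p, W, R, E, A, lam=2.0):
    """Expected payoff from paying the ransom in the gain frame."""
    return p * v(W - R - E, lam) + (1 - p) * v(-R - E - A, lam)

# Willingness to pay rises with p and W and falls with E and A:
base = pi_gain(p=0.5, W=1.0, R=0.3, E=0.1, A=0.1)
assert pi_gain(0.6, 1.0, 0.3, 0.1, 0.1) > base  # higher p
assert pi_gain(0.5, 1.2, 0.3, 0.1, 0.1) > base  # higher W
assert pi_gain(0.5, 1.0, 0.3, 0.2, 0.1) < base  # higher E
assert pi_gain(0.5, 1.0, 0.3, 0.1, 0.2) < base  # higher A
```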
Consider next a loss frame in which the victim has not internalized the loss of files. We can distinguish the same three outcomes as in the gain frame:
  • The victim does not pay and so stays without access to their files. The payoff takes into account the loss of files, valued at W, hence π_L(NP) = v(-W).
  • The victim pays the ransom and recovers access to their files. In this outcome the victim has paid the ransom and incurred the ethical cost. The net payoff is thus π_L(PR) = v(-R - E).
  • The victim pays the ransom and does not recover their files. In this outcome the victim has lost their files, paid the ransom and incurred the ethical cost and anger cost. The net payoff is thus π_L(PN) = v(-W - R - E - A).
The expected payoff from paying the ransom again depends on the perceived probability of regaining access to the files, p. The expected payoff from paying the ransom in a loss frame is given by:
π_L(Pay) = p·v(-R - E) + (1 - p)·v(-W - R - E - A).  (2)
The victim will optimally pay the ransom if the expected loss from paying the ransom is less than the loss from not paying the ransom, that is, if p·v(-R - E) + (1 - p)·v(-W - R - E - A) > v(-W). Again, the victim’s willingness to pay (for a fixed ransom demand R) is increasing in p and W and decreasing in E and A.
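As before, the pay condition can be checked numerically. The following sketch (our own illustration, with an assumed piecewise-linear value function and assumed parameter values) tests whether paying beats not paying in the loss frame:

```python
# Illustrative loss-frame check (not code from the study).
def v(x, lam=2.0):
    """Assumed loss-averse value function: losses weighted by lam."""
    return x if x >= 0 else lam * x

def pay_in_loss_frame(p, W, R, E, A, lam=2.0):
    """True if the loss-frame pay condition holds:
    p*v(-R - E) + (1 - p)*v(-W - R - E - A) > v(-W)."""
    pay = p * v(-R - E, lam) + (1 - p) * v(-W - R - E - A, lam)
    return pay > v(-W, lam)

# High trust in recovery makes paying worthwhile; low trust does not.
assert pay_in_loss_frame(p=0.9, W=1.0, R=0.3, E=0.05, A=0.05)
assert not pay_in_loss_frame(p=0.1, W=1.0, R=0.3, E=0.05, A=0.05)
```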
If we compare the gain and loss frame then it is possible to show, under reasonable assumptions on v and everything else held constant, that the victim has more incentive to pay the ransom in the loss than gain frame. To illustrate, consider a linear value function of the form
v(x) = x if x ≥ 0, and v(x) = λx if x < 0,  (3)
where λ measures the extent of loss aversion. Then, the maximum R for which it is optimal to pay the ransom can be explicitly calculated. In the gain frame, we set π_G(Pay) = 0 and use Equation (1) to obtain the condition p(W - R_G - E) = λ(1 - p)(R_G + E + A). Solving for R_G gives
R_G = [pW - (1 - p)λ(E + A) - pE] / [p + (1 - p)λ].  (4)
In the loss frame, we set -pλ(R_L + E) - (1 - p)λ(W + R_L + E + A) = -λW. Solving for R_L gives
R_L = pW - (1 - p)A - E.  (5)
If λ > 1, then R_G < R_L. The intuition for this result is that in the gain frame, the victim does not lose anything by not paying the ransom (because the loss is already internalized), while in the loss frame, they lose by not paying the ransom. Thus, loss aversion makes the victim more willing to gamble on the return of their files in the loss frame. Estimates of the loss aversion parameter, λ, are around 2 [48]. Therefore, we obtain a prediction that victims may be more willing to pay ransom demands if the criminals use a loss frame.
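Equations (4) and (5) are straightforward to compute. The sketch below (an illustrative check with assumed parameter values, not the authors' code) confirms that R_G < R_L when λ > 1, and that the two cutoffs coincide when λ = 1:

```python
# Illustrative check of the cutoff ransoms in Equations (4) and (5).
def cutoff_gain(p, W, E, A, lam):
    """Equation (4): maximum ransom the victim will pay in the gain frame."""
    return (p * W - (1 - p) * lam * (E + A) - p * E) / (p + (1 - p) * lam)

def cutoff_loss(p, W, E, A, lam):
    """Equation (5): maximum ransom in the loss frame (lam cancels out)."""
    return p * W - (1 - p) * A - E

# With loss aversion (lam = 2) the loss-frame cutoff is strictly higher...
assert cutoff_gain(0.7, 1.0, 0.05, 0.05, lam=2.0) < cutoff_loss(0.7, 1.0, 0.05, 0.05, lam=2.0)
# ...and without loss aversion (lam = 1) the two frames agree.
assert abs(cutoff_gain(0.7, 1.0, 0.05, 0.05, lam=1.0)
           - cutoff_loss(0.7, 1.0, 0.05, 0.05, lam=1.0)) < 1e-12
```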

3. Manipulation of Ransomware Frames

Our model shows that if the criminal is motivated by financial gain, it is in their interest to increase the victim’s perception of W and p, decrease E and A, and put the victim in a loss frame. Clearly, there are natural limits to how much the criminal can influence the victim’s willingness to pay through variations in the splash screen. For instance, a victim who recently performed a comprehensive backup is going to put a much smaller value on losing access to their files in an attack, W, than if they had not performed such a backup. We argue, however, that the criminals can influence p, E and A and the perception of gain or loss.
We begin with factors that may increase the victim’s perception of the probability of recovering a file, p. A professional-looking website, with a smooth interface, clear instructions on how to recover the files and allusion to customer service, should increase the victim’s trust in recovery. By contrast, a very basic webpage with spelling and grammatical errors, together with contradictory or incomplete advice, may leave the victim thinking recovery is unlikely. The opportunity to recover some files for free is also something that could increase the perceived probability of recovering subsequent files. Evidence suggests that individuals often anchor perceptions on readily available information even if that information is superficial [54]. It would therefore seem vital that the criminals make the ransom splash screen look as ‘professional’ as possible [55,56,57].
Hypothesis 1.
The victim’s willingness to pay the ransom increases with the perceived quality of the splash screen as well as the clarity of instructions on how to recover the files, because this increases the estimated probability of recovering the files.
We turn our attention next to the moral and ethical dimension. Ransomware splash screens used in the wild vary widely between the use of threatening and supportive language. Threatening language is perhaps what one would most expect, given that this is a criminal attack, and it certainly exists. For instance, Cryptolocker included phrases such as ‘Any attempt to remove or damage this software will lead to the immediate destruction of the private key’. Much of the language in ransom demands is, however, relatively supportive. For instance, Cerber was similar to many in alluding to ‘a special software – Cerber Decryptor’. Petya claimed that ‘We guarantee that you can recover all your files safely and easily’ if you purchase the software. Such framing can create an illusory gap between the criminal who stole the files and the ‘service provider’ who is offering to recover those files. This converts the ransom demand into the purchase of ‘special software’. The victim may be more willing to pay a service provider for special software than they would a criminal’s ransom demand. Indeed, service friendliness can have a significant influence on customer satisfaction and consequently sales [58,59,60]. A splash screen that focuses on supportive service may therefore decrease the ethical cost, E, of paying a ransom and the anger from not recovering files, A.
Hypothesis 2.
The victim’s willingness to pay the ransom increases with the perceived supportiveness of the splash screen, because this decreases the ethical cost of paying a ransom and the anger from not recovering files.
The distinction between threatening and supportive language also feeds into the framing of losses versus gains. Threats to ‘destroy files’ appeal to loss aversion because they emphasize the loss. Allusions to ‘recovery of files’, by contrast, do not appeal to loss aversion because the frame is more about gaining files that seem lost. Generally, individuals are more loss averse when the loss aspect is primed [61,62,63]. Thus, frames that emphasize destruction may make it more likely the victim will perceive a loss frame. There are other factors that may also convey a loss frame. One common ploy is to include a countdown timer on the splash screen beyond which the encrypted files are destroyed or the ransom demand increases. There is little evidence that this threat is credible. Even so, it may create a sense of urgency in the victim and consequently lead to bias in perceptions of the value of files. For instance, we know that time pressure can be used to manipulate customers [64,65].
Hypothesis 3.
The victim’s willingness to pay the ransom is higher with signals that the files will be destroyed, e.g., a countdown timer, because this increases the sense of loss.
Our three hypotheses suggest that there are ways of influencing a victim’s willingness to pay a ransom. Crucially, however, there are some non-trivial trade-offs. The most important trade-off we have identified is that between threatening and supportive language. We would argue that it is difficult to combine the kind of supportive language that would increase the perceived probability of returning the files (Hypothesis 1) and lower the ethical cost of paying a ransom (Hypothesis 2), with the kind of threatening language that would focus a victim’s mind on the loss of files (Hypothesis 3). Factors that increase p and decrease E and A may therefore be inconsistent with inducing a loss frame. This means we need to reconsider the extent to which a victim would be more willing to pay in a loss than gain frame. When arguing, in Section 2, that a victim had more incentive to pay in a loss than gain frame, we assumed everything else remained the same. We now need to consider what happens if p , E and A do not remain the same in a loss frame.
To illustrate, consider again the model from Section 2 and the cut-off values for paying the ransom as given by Equations (4) and (5). Suppose the perceived probability of regaining the files, p, and the ethics and anger costs of paying, E and A, depend on the frame. We assume the perceived probability of regaining the files is higher with a gain frame, p_G > p_L, because of the supportive language. Similarly, the ethics and anger costs from paying are lower with a gain frame than a loss frame, E_L > E_G and A_L > A_G. Then, the victim is willing to pay a higher ransom in the gain frame, R_L < R_G, if
p_L·W - (1 - p_L)·A_L - E_L < [p_G·W - (1 - p_G)·λ·(E_G + A_G) - p_G·E_G] / [p_G + (1 - p_G)·λ].
If E_G is sufficiently small and p_G sufficiently large, then the impact of loss aversion, represented by λ > 1, is negated. For instance, if the threatening language of a loss frame results in p_L = 0 while the supportive language of a gain frame results in p_G = 1, then it is trivial that R_L < R_G.
To better understand the relative trade-offs, we calculated the values of R_G and R_L for a wide range of values of E and A with E_G = E_L, A_G = A_L, λ = 2 and W = 1. We summarise our findings in Figure 2. In the first three subpanels, we plotted the value of R_G (‘Gain’) and R_L (‘Loss’) for three different combinations of A and E. Consistent with our previous analysis, you can see that for a fixed value of p, we have R_L ≥ R_G. However, our interest is in settings where p_L < p_G (because a loss frame results in less trust). For any value of p_L, we can calculate the value of p_G such that R_L = R_G. In other words, we can calculate the required difference in perceived probability, p_G - p_L, such that the victim is willing to pay the same ransom in a loss and gain frame, R_L = R_G. This ‘p difference’ is plotted (as a function of p_L) in the bottom right panel of Figure 2 for the three combinations of A and E. We see that this p difference is everywhere less than 0.2. Thus, if the gain frame results in the victim putting an increased probability of 0.2 or more on regaining access to their files, then they will be willing to pay a higher ransom in the gain than loss frame. Given that a gain frame is likely to lead to a higher perceived probability of regaining files, we suggest that the gain frame will ultimately result in a higher willingness to pay the ransom.
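The ‘p difference’ calculation can be reproduced in a few lines. The sketch below is our own illustration, not the paper's code: it fixes one assumed combination E = A = 0.05 (Figure 2 covers several) and bisects for the p_G at which the two cutoff ransoms coincide.

```python
# Sketch of the 'p difference' computation behind Figure 2
# (assumed values: W = 1, lambda = 2, E = A = 0.05).
def cutoff_gain(p, W=1.0, E=0.05, A=0.05, lam=2.0):
    """Equation (4): maximum ransom paid in the gain frame."""
    return (p * W - (1 - p) * lam * (E + A) - p * E) / (p + (1 - p) * lam)

def cutoff_loss(p, W=1.0, E=0.05, A=0.05, lam=2.0):
    """Equation (5): maximum ransom paid in the loss frame."""
    return p * W - (1 - p) * A - E

def p_gain_needed(p_L):
    """Bisect for the p_G at which R_G equals R_L evaluated at p_L."""
    target = cutoff_loss(p_L)
    lo, hi = p_L, 1.0  # cutoff_gain is increasing in p on this interval
    for _ in range(60):
        mid = (lo + hi) / 2
        if cutoff_gain(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The required boost in perceived probability stays below 0.2 everywhere:
diffs = [p_gain_needed(0.1 * k) - 0.1 * k for k in range(1, 10)]
assert max(diffs) < 0.2
```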
Hypothesis 4.
The victim’s willingness to pay the ransom is higher in a gain frame than loss frame because the gain frame increases the estimated probability of recovering the files enough to offset a higher willingness to pay in a loss frame, ceteris paribus.
In Hypothesis 4, we consider a potential trade-off between a ransom frame and the victim’s perceived probability of recovering the files, p. The value of p can be influenced by the strategic interaction between the criminal and victim, and not solely by the ransom frame. For instance, the criminals can decrypt a sample file to signal the files can be recovered. Ransomware actors may also build a reputation over time for returning files to victims [66]. Our model abstracts away from such strategic considerations in focusing solely on the impressions of the victim when viewing the ransom demand. Everything else being the same, it can be expected that strategic considerations would lead to a reduction in the difference in perceived probability, p_G - p_L, as initial impressions become replaced by more informed judgment. Thus, Hypothesis 4 should be seen as reflecting initial impressions, which likely influence decision making.

4. Experiment Design

In the experiment, participants were exposed to eight different ransomware splash screens. The splash screens were based on screenshots of genuine ransomware demands, attributed to ransomware strains: Cerber, Cryptolocker, WannaCry, CTBLocker, CryptoWall, Petya, Locky and TorrentLocker. These ransomware demands were chosen to reflect some of the main ransomware strains that have impacted individuals and to reflect the wide variety of ransom splash screens existing in the wild. While the splash screens were designed to be as close as possible to those observed in the wild, we deliberately modified some information to create homogeneity across the eight splash screens, namely, the ransom amount (GBP 300 rising to GBP 600 if not paid on time), the means of paying the ransom, and the use of correct spelling and grammar. Fixing these factors across splash screens allowed us to focus on differences in how the ransom demand was framed rather than, e.g., differences in ransom amount. (The splash screens, together with all the other information on the experiment, are available in Appendix B.)
At the beginning of the experiment participants were given a basic introduction to ransomware. Participants were then told the following: Imagine that you had on your computer some files that are very valuable and for which you do not have a backup. For instance, you have important work and sentimental photos. You would, without any hesitation, be willing to pay GBP 300 to, say, a friend or computer expert if they could recover your files. But your choice is whether to pay the ransom to the criminals. For each ransomware example, we want you to rate your attitudes to six different questions:
  • How likely would you be to pay the ransom? [WTP]
  • How likely do you think it is that the criminals will provide the key to decrypt your files? [Trust]
  • How fast do you think you would make a decision? [Fast]
  • To what extent would this ransom demand make you feel angry? [Angry]
  • How helpful is the ransom demand in informing you about what has happened and what to do about it? [Helpful]
  • If you ultimately get your files back, how positive would you feel about paying the ransom? [Positivity]
The participants were then asked to rate each ransomware splash-screen example one by one. All questions were rated on a 10-item Likert scale. An example is provided in Figure 3. We used four different orderings of the eight splash-screen examples to control for any order effect. After participants had made independent ratings for the eight ransomware examples, we asked them to rank the examples from one to eight with respect to the six questions. For example, for question 1, we asked participants to ‘Please rank your willingness to pay the 8 ransom demands from most likely to pay (1) to least likely to pay (8)’. We also asked them to briefly comment on their choices for those they ranked first and last. An example is provided in Figure 4.
Table 1 summarises the differences in frames across the eight splash screens. We first identified a range of factors that were objectively determined: (1) The number of words in the demand; (2) whether or not the demand explicitly referred to buying ‘software’; (3) whether there was a time limit; (4) whether the price rose after a set time; (5) whether or not a free sample was offered; (11) text colour; (12) background colour; (13) whether there was use of a logo. We then identified a range of factors that were more subjective: (6 and 7) we scored the level of information provided (from 0 to 3) as a signal of ‘helpfulness’, distinguishing whether information was provided on how to pay (6) and on what encryption meant (7); (8) whether information was provided about the victim’s computer and files; (9) the number of positive phrases in the demand; (10) the number of negative phrases in the demand. Examples of positive phrases included ‘guarantee you will recover’ and ‘regain control’ while negative phrases included ‘destroy’ and ‘nobody will ever’.
As can be seen from Table 1, there was a large variation in splash-screen design. This reflected differences seen in the wild [21]. For example, Cerber has all the ‘gimmicks’, like a timer and price rise, but contains very little information. Petya has none of the gimmicks but more information. To look at another dimension, CryptoWall and TorrentLocker have a positive frame while CryptoLocker and CTBLocker have a negative frame. You can see that half of the splash screens are framed as ‘software’ and all four of these use a positive frame. Five of the splash screens provide information on how to pay, four give information on encryption, and three provide information about the encrypted files. Therefore, there was considerable heterogeneity across all the characteristics we considered.
The experiment was set up in such a way as to fix intrinsic value across the eight ransomware demands. In particular, W > 300 . The experiment questions were then designed to probe the influence of trust, moral and ethical views and loss aversion. We took question 1 as a measure of overall willingness to pay. Question 2 probed beliefs on the likelihood the criminals would provide the decryption key and so was a measure of p. We refer to this as trust in the sense the victim believes the criminal is (i) capable of and (ii) intends to return access to the files. Question 3, in measuring how fast a decision would be made, and question 6, in measuring positivity at getting the files back, were measures of loss aversion. Finally, question 4, in measuring anger, and question 5, in measuring helpfulness of the demand, were measures of moral and ethical costs E and A.
The experiment took place on a university campus with subjects recruited from the student population. We recognize that this is a specific participant pool of young adults who were relatively technology literate. For instance, there is evidence that young adults are more willing to pay ransomware demands [42]. There were a total of 93 participants (48% male). Participants were paid a flat fee of GBP 5 for completing the experiment, which took around 20–30 min. The experiment took place after subjects had participated in a separate and unrelated economic experiment. The experiment was conducted using pen and paper. A research assistant then converted the responses into a usable data file. A power analysis confirmed that n = 93 was sufficient to detect differences across frames. For instance, if the difference in average WTP ratings across two ransom splash screens was 1, and the standard deviation was 2, then a two-sided t-test with a significance level of 0.05 would have a power of 0.924 to correctly reject the null hypothesis that WTP is similar across splash screens.
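The reported power calculation can be reproduced to a close approximation with only the standard library. The sketch below uses the normal approximation to the two-sample t-test rather than the exact noncentral-t computation, so it gives 0.926 rather than the exact 0.924.

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Parameters from the text: difference in mean WTP = 1, sd = 2,
# n = 93 ratings per splash screen, two-sided test at alpha = 0.05.
diff, sd, n = 1.0, 2.0, 93
z_crit = 1.959964                   # two-sided 5% critical value
ncp = (diff / sd) * sqrt(n / 2)     # noncentrality of the test statistic

# Power = P(reject | true difference); the second term is negligible here.
power = normal_cdf(ncp - z_crit) + normal_cdf(-ncp - z_crit)
print(round(power, 3))  # 0.926 (the exact noncentral-t power is 0.924)
```

The small gap between 0.926 and 0.924 is the cost of approximating the t distribution (184 degrees of freedom) by a normal.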
A set protocol was used to deal with missing values or incomplete rankings from participants failing to enter all information. In terms of the rating data, each participant was required to rate six items across eight splash screens. This equates to 48 data points per participant, 744 observations per aspect, and 4464 data points in total. There were a total of seven missing values: three from trust, three from helpful and one from fast. In terms of the ranking data, each participant was required to rank eight splash screens across six items. Again, this equates to 48 data points per participant and 4464 data points in total. There were 40 missing values: 5 from WTP, 5 from trust, 7 from fast, 8 from angry, 4 from helpful and 11 from positivity. In our analysis, we focused on complete data and assumed that the missing data were missing at random, consistent with our low rate of missing data.

5. Experiment Results

Table 2 reports the average rating of each ransomware splash screen across the six aspects (on a 10-point Likert scale). A higher rating represents higher willingness to pay, more trust in the criminal, faster decision, more anger towards the criminal, more helpful information, and more positive feeling after recovery. The eight ransomware splash screens are ordered based on participants being most likely to pay (CryptoWall) to least likely to pay (Locky). We saw a highly statistically significant variation in willingness to pay and trust ratings across the eight splash screens ( p = 0.0001 , Kruskal–Wallis test). This also held for how fast a decision would be made and perceived helpfulness and positivity on recovery ( p < 0.01 , Kruskal–Wallis test). The only factor that was not influenced by the frame was anger, which was universally high ( p = 0.79 , Kruskal–Wallis test). The individual comments of subjects confirmed the high level of anger: ‘I would be furious to receive any of these’, ‘I would be equally angry in each and every case, the design doesn’t affect my angriness’, ‘Treat them all the same, would always make me feel angry’, or ‘Anyone demanding money from me would make me angry’.
Result 1.
The splash screen of the ransom demand significantly influences willingness to pay, trust that the files will be returned, how fast a decision will be made, the perceived helpfulness of the demand, and perceived positivity if files are recovered. It does not influence anger.
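The Kruskal–Wallis comparisons above can be sketched as follows. The ratings here are simulated, with illustrative screen-level means (not our data), and SciPy is assumed to be available.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(42)

# Simulated 10-point WTP ratings for 93 participants on each of 8 splash
# screens; the screen-level means are illustrative, not the observed data.
means = [6.6, 6.2, 5.9, 5.6, 5.1, 4.6, 4.1, 3.6]
ratings = [np.clip(rng.normal(m, 2.0, size=93).round(), 1, 10) for m in means]

# H tests the null that all 8 samples come from the same distribution;
# the test is rank-based, so it suits ordinal Likert responses.
H, p = kruskal(*ratings)
print(f"H = {H:.1f}, p = {p:.2g}")  # p far below 0.05 for these shifted means
```

A non-parametric test is the natural choice here because Likert ratings are ordinal and the eight samples need not be normally distributed.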
Table 3 reports participants’ average rankings across the eight ransomware splash screens (we omitted missing observations; hence, the number of observations is less than the 93 participants in the experiment). Among the six rankings, a lower rank represents a greater likelihood of paying the ransom, more trust in the criminal, a quicker decision, more anger, more helpful information and more positive feelings after recovering the files. The eight ransomware splash screens are sorted by average ranking in WTP. The ordering of ransomware splash screens based on their WTP ranking is identical to that based on their ratings, except for Petya and Locky exchanging places. More generally, we see a high level of consistency between the ratings and rankings. Thus, in the following, we concentrate on the ratings data.
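The consistency between the rating-based and ranking-based orderings can be quantified with a rank correlation. In the sketch below, the endpoints follow the text (CryptoWall first; Petya and Locky last, swapped between the two measures), but the middle of each ordering is illustrative rather than taken from our tables.

```python
from scipy.stats import spearmanr

# Ordering by average rating vs. by average ranking (most to least WTP).
# Endpoints follow the text; the middle order is illustrative.
by_rating  = ["CryptoWall", "CryptoLocker", "WannaCry", "CTBLocker",
              "TorrentLocker", "Cerber", "Petya", "Locky"]
by_ranking = ["CryptoWall", "CryptoLocker", "WannaCry", "CTBLocker",
              "TorrentLocker", "Cerber", "Locky", "Petya"]

# Convert each ordering into positions 1..8 and correlate them.
pos_rating = list(range(1, len(by_rating) + 1))
pos_ranking = [by_ranking.index(s) + 1 for s in by_rating]

rho, _ = spearmanr(pos_rating, pos_ranking)
print(round(rho, 3))  # 0.976: a single adjacent swap leaves the orders nearly identical
```

A Spearman rho this close to 1 is what "a high level of consistency" amounts to numerically.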
To explore the correlation of ratings across the six aspects, we used linear regression analysis, with standard errors robust to clustering at the individual level. The unit of observation was a subject’s rating of the ransomware demands across each of the six aspects. Table 4 presents the results. Specification (1) was used to examine the influence of other factors on a participant’s willingness to pay and can be seen as our preferred model. In particular, it provided the most direct test of our theoretical model. Specifications (2–6) allowed us to explore the potential relationships between the other factors. Specification (1) in Table 4 shows a strong positive relationship between a participant’s willingness to pay the ransom and both trust and anticipated positivity. We found no statistically significant influence of speed of decision, anger or helpfulness.
Result 2.
Willingness to pay the ransom is strongly positively correlated with a participant’s trust the criminals will return the files and anticipated positivity if the files are returned.
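A minimal sketch of a rating-level regression of WTP on trust and positivity, with standard errors clustered by participant, is given below. The data are simulated and the coefficients (0.5 on trust, 0.3 on positivity) are hypothetical; the point is the mechanics of OLS with a cluster-robust (CR0) covariance, not our estimates.

```python
import numpy as np

rng = np.random.default_rng(7)
n_subj, n_screens = 93, 8
subj = np.repeat(np.arange(n_subj), n_screens)  # cluster id per observation

# Simulated 10-point ratings; the true coefficients here are hypothetical.
trust = rng.integers(1, 11, n_subj * n_screens).astype(float)
positivity = rng.integers(1, 11, n_subj * n_screens).astype(float)
wtp = (1.0 + 0.5 * trust + 0.3 * positivity
       + rng.normal(0, 1, n_subj)[subj]            # participant random effect
       + rng.normal(0, 1, n_subj * n_screens))     # idiosyncratic noise

# OLS: beta = (X'X)^{-1} X'y
X = np.column_stack([np.ones_like(trust), trust, positivity])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ wtp
resid = wtp - X @ beta

# Cluster-robust covariance: sum score outer products within each participant.
meat = np.zeros((X.shape[1], X.shape[1]))
for g in range(n_subj):
    score = X[subj == g].T @ resid[subj == g]
    meat += np.outer(score, score)
cov = XtX_inv @ meat @ XtX_inv
se = np.sqrt(np.diag(cov))

print(np.round(beta, 2), np.round(se, 3))
```

Clustering matters because each participant contributes eight correlated ratings; treating the 744 observations as independent would understate the standard errors.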
Consistent with Hypothesis 1, Result 2 suggests that splash screens which increase the perceived probability of regaining access to the files, and the anticipated positivity of doing so, trigger the highest willingness to pay. Figure 5 is a radar plot of participants’ average ratings of WTP and trust across the eight ransomware splash screens. The figure clearly illustrates that trust has a very close correlation with WTP. In short, among all aspects, trust has the strongest explanatory power for WTP. The self-reported reasons for participants’ choice of highest- and lowest-ranked WTP are also consistent with the importance of trust. For instance, one subject wrote of CryptoWall that ‘something is free and able to see if it is a scam’ and another of WannaCry that ‘Option to decrypt some files for free, indicated they would decrypt all for payment’. On the flip side, a subject wrote of Locky that it ‘Doesn’t look real’ and another of Cerber that the ‘Design shows the encrypters are amateurs, think of other solutions before paying’.
Having seen that trust is strongly related to willingness to pay, we then looked at the factors which underpinned trust. Subjects reported the highest trust ratings for CryptoWall, CryptoLocker and WannaCry and the lowest for Cerber, Locky and Petya. If we look at Table 1, we can see that the highest-ranked ransomware splash screens have, on average, more words, more gimmicks (timer, logo, price rise, free sample) and more information. If we quantify the 10 properties in Table 1 (excluding word count and colours) and add up all the features, then the most trusted ransomware splash screens scored 11, 10 and 8, respectively, whereas the least trusted scored 4, 6 and 4, respectively. This is consistent with the notion that ransomware splash screens with more information are considered more trustworthy. Further evidence for this can be seen in specification (2) of Table 4, where trust is positively associated with helpful. While causality is difficult to infer with confidence, it seems reasonable to conjecture that helpful information influences trust, which then influences willingness to pay. This is consistent with Hypotheses 1 and 2.
Result 3.
Trust the files will be returned is positively correlated with the amount of information on the ransom demand splash screen and the helpfulness of the information.
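The feature tally described above amounts to summing binary flags and 0–3 information scores per splash screen. The values below are hypothetical stand-ins, not the actual Table 1 entries.

```python
# Hypothetical feature values for two splash screens -- 0/1 flags plus
# information scores from 0-3 -- NOT the actual Table 1 entries.
features = {
    "screen_hi_trust": {"software_frame": 1, "time_limit": 1, "price_rise": 1,
                        "free_sample": 1, "logo": 1, "pay_info": 3,
                        "encryption_info": 2, "file_info": 1,
                        "positive_phrases": 2, "negative_phrases": 0},
    "screen_lo_trust": {"software_frame": 0, "time_limit": 0, "price_rise": 0,
                        "free_sample": 0, "logo": 1, "pay_info": 1,
                        "encryption_info": 0, "file_info": 0,
                        "positive_phrases": 0, "negative_phrases": 2},
}

# Total score per screen: more features and information -> higher score.
scores = {name: sum(vals.values()) for name, vals in features.items()}
print(scores)  # {'screen_hi_trust': 13, 'screen_lo_trust': 4}
```

This simple additive score weights every feature equally, which is a deliberate simplification; a regression-based weighting would be the natural refinement.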
Recall that trust in our setting captures both the capability and the intention of the criminals to return a victim’s access to their files. Participants’ comments, which are provided in full in Appendix A, appeared to capture both these elements. For instance, WannaCry was ranked highest for trust and when asked to comment on their choice, participants wrote, e.g., ‘information about file decryption’, ‘description of how to get files back’, ‘more of a solution’, and ‘informative details’. Petya was ranked lowest for trust, and participants commented, e.g., ‘unprofessional style’, ‘looks less professional’, ‘vague’, ‘doesn’t seem authentic’, and ‘looks like amateurs’. A focus on professionalism and information (or lack of it) can be seen to reflect both capability and intention in the sense that professionalism conveys an ability to decrypt the files and a reputational incentive to honour ransom payments.
Having looked at the role of trust, we then turned our attention to perceived positivity in the case of recovering files. From Table 4, we can see that positivity was not related to any of the other measured factors. This would suggest that positivity is a factor, independent of trust and helpful, that influences willingness to pay. From Table 2, we can see that CryptoWall, WannaCry and CTBLocker scored relatively highly on positivity while Petya scored the lowest. If we look at Table 1, however, it is difficult to discern any characteristic that explains these results. For instance, CryptoWall has positive and helpful information, while CTBLocker is a negative frame with less information. Similarly, CryptoWall and WannaCry offer a free sample, while CTBLocker does not. The comments of participants offer further perspective. For example, one comment on WannaCry was that it was ‘Professional scammer, could happen to anyone’, and one comment on CryptoWall was that it ‘looks like software’. By contrast, a comment on Petya was: ‘Seems most criminal, paying through emails feels like you’re directly paying the criminal’.
Finally, we looked at the fast aspect, that is, the speed of deciding whether to pay. Drawing on the comments participants gave for the examples they would pay most and least quickly, we found that a timer and countdown in the ransomware demand appeared to prompt a quick response (e.g., WannaCry, CryptoWall, CryptoLocker and CTBLocker). On the other hand, a lack of information or of a timer (e.g., Locky) appeared to result in participants being slower to decide. Variation in fast ratings was statistically significant across ransomware demands ( p = 0.033 , Kruskal–Wallis). However, we can see from Table 4 that the speed of the decision did not affect participants’ willingness to pay the ransom, and so we found little support for Hypothesis 3. Speed did appear to relate to anger. Specifically, the ransomware splash screens that subjects reacted to more quickly were those that made participants less angry. This correlation was statistically significant ( p = 0.013 from post-estimation results of specification 4 in Table 4).
Result 4.
Ransomware splash screens can have a significant influence on how quickly participants decide whether or not to pay the ransom. This speed is related to perceived anger but not to willingness to pay the ransom.

6. Discussion

Our theoretical model suggested a trade-off between a positive and negative frame. We argued that a negative frame may increase willingness to pay because it makes the victim think about the loss of their files, and loss aversion would then suggest incentives to pay the ransom. On the flip side, a positive frame may increase willingness to pay because it increases trust the files will be returned and decreases ethical and anger costs from paying the ransom. In our parameterized example, we found that a small increase in trust would be enough to offset the impact of loss aversion. This led to our key Hypothesis 4, that any increases in willingness to pay from the use of a loss frame would be offset by a corresponding decrease in perceived probability of the files being returned.
Our experiment results are consistent with Hypothesis 4 and suggest that a positive frame that induces trust, helpfulness and positivity increases willingness to pay. This would imply that splash screens with a positive, gain frame may lead to higher income for the criminals than a negative, loss frame. Our version of CryptoWall is a good example of this in that it had all the gimmicks (except a logo) together with a positive frame. Thus, it was no surprise that it was top ranked and top rated amongst our participants.
While our results suggest that splash screens with a positive frame may be adopted by criminals, there are some caveats, as evidenced by CryptoLocker. CryptoLocker picked up second place in both the rankings and ratings but had fewer gimmicks and a highly negative frame. If we look at the comments of subjects, we find that the threats appeared to increase willingness to pay because they seemed credible. For instance, one subject wrote ‘Negative and serious language, company feel, so makes it seem more valid, threat at the end increases the likelihood that I pay’. Threats within a professional and credible-looking frame may therefore work. Threats within a less credible frame seem particularly likely to backfire. Either way, the contrast between CryptoLocker and CryptoWall shows that there is unlikely to be one set way of manipulating victims. Being friendly or threatening can each increase a victim’s willingness to pay.

6.1. Limitations

Our theoretical framework focused on a static and partial equilibrium framework in which a victim is deciding whether to pay a ransom given their preferences and beliefs. There is the potential to extend our model to include a dynamic and strategic game-theoretic element in which the victim’s beliefs about the probability of regaining access to their files is updated over time. For instance, the victim’s beliefs may be informed by signals from the criminal, such as decryption of a sample file. The criminal may also have a reputation for returning access to files. Modelling such possibilities would allow a more informed understanding of the perceived probability of regaining the files, a crucial aspect of our model.
Our experimental design had several limitations that could be addressed in future work. First, we used a student sample, so our results should be interpreted as applying to young adults who are technologically literate. There is consistent evidence that experimental results obtained with students are similar to those obtained with the general population (in settings that are relevant to students), e.g., [67,68]. That said, a student sample can lead to bias if a characteristic that does not vary in the sample, i.e., age, influences the responses [68]. Survey evidence from a large sample of the UK general population showed that willingness to pay a ransom varied with age [42]. Of particular relevance to our study is the direction of any gain–loss framing effect. Blake et al. [69], surveying a representative sample of the UK population, found that loss aversion was highest for 18–24-year-olds and lowest for 45–54-year-olds. This would mean that students are strongly influenced by a loss frame, providing a ‘tough’ test of our crucial Hypothesis 4. Therefore, it seems reasonable to conjecture that our main findings (Results 1–4) extend to a general population. Future work, however, would be needed to test this conjecture across different demographics.
A second limitation of our experiment design is that a lab-based experiment cannot recreate the heightened emotions that would be experienced in a real ransomware attack. Our results should therefore be seen in the light of participants imagining how they would feel in the event of a ransomware attack. That said, our participants reported very high ratings of anticipated anger. They also reported relatively high ratings for willingness to pay. A priori, we would expect participants to underestimate their true willingness to pay because of the instinct to not pay a ransom to criminals. For instance, Cartwright et al. [39] give the example of a CISO discussing a ransomware exercise in which the conversation went quite quickly from a “well we definitely shouldn’t pay money to bad people”, to “we should definitely pay money”. It is unclear how a lack of emotional saliency would impact the relative ratings and rankings of splash screens. A third limitation of our experimental design is that we only considered eight splash screens, which varied across many different characteristics.

6.2. Future Research Directions

The most promising area for future research would be to extend our experimental design to systematically study specific characteristics of ransomware splash screens. We based our study on eight real splash screens, but this caused a loss of experimenter control because the splash screens varied on many different characteristics. A future study could focus on some key characteristics, such as the presence of a free sample, countdown timer and/or the distinction between positive and negative language. Hypothetical splash screens could then be designed that vary only on these key dimensions. This would allow us to directly identify the characteristics that have the most influence on willingness to pay.
It would be particularly interesting to consider a choice experiment that focuses specifically on gain versus loss framing. While our set of splash screens contained some examples with positive language (consistent with a gain frame) and some with negative language (consistent with a loss frame), the frames differed along other dimensions, such as text colour, background colour, logo, etc. A future experiment could fix all elements of the splash screen except for whether the text talks of regaining access to files or losing files. This would allow a more direct test of the influence of the gain–loss framing.
We remind readers that the objective of our work is to inform policy makers and law enforcement in identifying ransomware strains that are likely to be most socially damaging. In doing so, it is important to factor in the importance of the ransom amount, because willingness to pay is likely influenced by a combination of framing and ransom size. In our study, we fixed the ransom demand at GBP 300. A further extension of our experimental design would therefore be to systematically vary the ransom amount (e.g., GBP 300 and GBP 600) alongside the framing of the ransom demand. This would help identify ransom frames that could be particularly socially damaging because they induce victims to pay a large ransom.
It would also be of interest to more directly test the relationship between trust and willingness to pay. We found that trust was strongly correlated with willingness to pay but could not judge causality. In order to see if trust influences willingness to pay (and not the other way around), an experiment could be designed that systematically manipulates trust. For instance, participants could be primed before seeing the splash screen. One set of participants could be given information that ‘some victims recover access to their files’, and another set that ‘some victims do not recover access to their files’. This should manipulate trust and the impact on willingness to pay could then be measured.

7. Conclusions

Ransomware is a major threat to modern economies. While ransomware is currently targeted primarily at organizations, there is no reason why it could not return to being a threat aimed widely at individuals. It is important for policymakers and law enforcement to be ahead of the game and anticipate this evolution. In this paper, we used insights from behavioural economics and evidence from an experiment to analyse how the framing of a ransom demand may influence a victim’s willingness to pay the ransom. This can potentially help predict which ransomware strains are likely to be ‘most successful’ in raising revenue for criminals and therefore pose a bigger threat.
We found that the main determinant of willingness to pay was trust that access to the files would be restored. Correlation does not imply causation, but it seems reasonable to conjecture that if victims trust their files will be returned, they are more willing to pay the ransom demand. We should expect, therefore, that criminals will look to signal a reputation for being trustworthy, whether that be free samples or maintaining a reputation for honouring promises. From the point of view of law enforcement, our results suggest a need to disrupt trust in order to undermine the ransomware business model. If the criminals typically do not return access to files then it is seemingly possible to undermine confidence through targeted public and business awareness campaigns and advice. If, however, the criminals typically do return access to files, then there are ethical questions about ‘misleading’ information. Indeed, better information may actually increase trust. Thus, this is a potentially complex issue, as illustrated by an FBI agent being quoted in 2015 as saying ‘The ransomware is that good … To be honest, we often advise people to just pay the ransom’ [70].
Alongside trust, we found that the helpfulness of the ransom splash screens also positively correlated with willingness to pay, potentially through its effect on trust. A professionally designed website, promise of free trial, frequently asked questions, and clear instructions can all increase the ‘helpfulness’ of the ransom demand. Interestingly, we also found that a positively framed ransom demand appeared more effective than a negatively framed one. In particular, subjects seemed to appreciate a professional looking website and the impression they were purchasing decryption software. They were less amenable to threats and a framing that did not hide that the money was going to criminals. The ransomware business model could therefore be further disrupted by making victims more aware of criminal tactics. Social engineering tactics can be used to scare victims but also to ‘reassure’ and ‘befriend’ victims. Cybersecurity awareness campaigns should reflect both styles of tactic.

Author Contributions

Conceptualization, A.C. and E.C.; methodology, A.C., E.C. and L.X.; formal analysis, A.C., E.C. and L.X.; data curation, L.X.; writing—original draft preparation, E.C.; writing—review and editing, A.C. and E.C.; visualization, A.C. and L.X.; project administration, A.C. and E.C.; funding acquisition, A.C. and E.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was in part funded by the Engineering and Physical Sciences Research Council (EPSRC) grant number EP/P011772/1.

Institutional Review Board Statement

This project was approved by the Ethics Review Committee of the Faculty of Business and Law at De Montfort University (project identification code ‘Ransomware screenshots and social engineering’) on 9 January 2019.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data from this study will be made available on the DMU FigShare repository.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Summary of Comments on Features That Determined Highest Rank and Lowest Rank

WTP/Trust/Helpful/Fast/Positivity (an X marks comments on the lowest-ranked example)
CryptoWall
(Most willing to pay)
Something is free and able to see if it is a scam.
Doesn’t give you much time before the ransom is doubled.
Clear layout, professional with helpful steps on how to pay.
IP address makes the ransom more serious, makes you feel threatened.
CryptoLocker  
(Most willing to pay)
Negative and serious language, company feel so makes it seem more valid,
threat at the end increases the likelihood that I pay.
Time limit, red warning, bold text to highlight, red background.
Threat of a short deadline.
Design of the warning.
Information provided on how the ransom is written.
WannaCry  
(Most willing to pay)
Red colour makes me feel it’s emergency document.
Most information about consequences, time limit.
Option to decrypt some files for free, indicated they would decrypt all for payment.  
(Most trustful)
Countdown adds pressure, information about file decryption.
Guarantee to return my files, can decrypt one for free.
Description of how to get files back.
Looks legitimate, not posed as a threat, friendly tone, more of a solution.
Informative details regarding the ransom payment.  
(Most helpful)
Helpfully laid out, clear, explains situation and steps to rectify.
Could decrypt some files, extra information.  
(Fastest)
Time limit puts pressure on the user, price increase provides incentive to decide
quickly.  
(Most positive after recovery)
Professional scammer, could happen to anyone.
CTBLocker  
(Most willing to pay)
Double price from £300 to £600.
Timer, warning about trying to take off software myself, promise to return files.
Get one file for free, direct access to the key.
Red colour looks like virus, threatening words. X
Looks like a fake pop up which comes up often. X
Looks fake and like a Windows XP virus. X
Dark colours don’t look professional. X
TorrentLocker  
(Most willing to pay)
Simple design, explains what happened and what to do.
It seems legitimate, with FAQs.
If no time limit I could try and get external help first. X
Cerber  
(Most willing to pay)
Poor and unprofessional layout, not clear or concise instructions. X
Doesn’t provide any valid or substantial information. X
Design shows the encrypters are amateurs, think of other solutions before paying. X
Looks simple like a pop up from a computer. X
Petya  
(Most willing to pay)
Black and pink colouring makes it look fake. X
Uses html file and looks less professional. X
Colour, lots of words, dull, not easy on the eye. X
Complicated description and process. X
Patronizing, email address is suspicious. X  
(Least trustful)
Unprofessional style and presentation. X
Uses html file and looks less professional. X
Vague about payment process. X
There wasn’t much to reassure me of the safe return of the files, doesn’t
seem authentic. X
Look like amateurs, or threatening rather than helping. X  
(Least helpful)
No information provided, only that there is a special offer. X
Provides useless information and is impolite. X  
(Least positive after recovery)
Seems most criminal, paying through email feels like you’re directly paying the
criminal. X
Locky  
(Most willing to pay)
Warnings look least genuine. X
Too complicated, full of codes and words. X
Spelling mistakes, lack of information. X
Too simple. X
A lot of different webpages and potentially more viruses. X
Doesn’t look real, and it’s on notepad. X
Not double price. X  
(Least fast)
No threat of deadline reduces urgency. X
Not giving much information leads me to believe it’s not real. X
Note: All comments are from the six ranking tasks (ranks 1 and 8), e.g., ‘What features in the example determined your choice of most likely to pay?’

Appendix B. Experiment Instructions

  • Questionnaire on ransomware
  • Background
Ransomware is a type of malware (i.e., computer virus) that has become increasingly prominent in recent years. The basics of a ransomware attack are as follows:
  • The malware encrypts the files on your computer, laptop or similar. It encrypts documents, e.g., word or excel files, as well as photographs and videos.
  • The victim is then asked to pay a ransom in order to regain access to their files. The criminals promise to provide the key to un-encrypt the files.
  • If the malware is technically well designed (and many forms of ransomware now are), and if the victim has no backup, then the files can only be recovered by getting the key off the criminals. There is no other option.
  • If the victim pays the ransom then they may or may not get the key to the files. Some criminals do provide the key and the victim regains access to their files. Some criminals do not provide the key and the files are lost.
  • Your Task
We are now going to provide you with 8 different examples of a ransom demand. These examples are fictitious but based on genuine variants that have been used by criminals. In each example the criminals are asking for a ransom of £300 to return access to your files.
We want you to imagine that you had on your computer some files that are very valuable and for which you do not have a backup. For instance, you have important work and sentimental photos. You would, without any hesitation, be willing to pay £300 to, say, a friend or computer expert if they could recover your files. But your choice is whether to pay the ransom to the criminals. For each ransomware example we want you to rate your attitudes to 6 different questions:
  • How likely would you be to pay the ransom?
  • How likely do you think it is that the criminals will provide the key to decrypt your files?
  • How fast do you think it is that you would make a decision?
  • To what extent would this ransom demand make you feel angry?
  • How helpful is the ransom demand in informing you about what has happened and what to do about it?
  • If you ultimately get your files back, how positive you would feel about paying the ransom?
At the end of the experiment, we will ask you to rank the 8 cases in terms of the 6 aspects.
  • Rating Task Example
Figure A1. An example of a Cerber screenshot and the experiment questions.
  • Ranking Task Example
Figure A2. An example of a ranking task.
  • Other Splash Screens
Figure A3. An example of a CryptoLocker screenshot.
Figure A4. An example of a WannaCry screenshot.
Figure A5. An example of a CTBLocker screenshot.
Figure A6. An example of a CryptoWall screenshot.
Figure A7. An example of a Petya screenshot.
Figure A8. An example of a Locky screenshot.
Figure A9. An example of a TorrentLocker screenshot.

References

  1. Kalaimannan, E.; John, S.K.; DuBose, T.; Pinto, A. Influences on ransomware’s evolution and predictions for the future challenges. J. Cyber Secur. Technol. 2017, 1, 23–31. [Google Scholar] [CrossRef]
  2. Kok, S.; Abdullah, A.; Jhanjhi, N.; Supramaniam, M. Ransomware, threat and detection techniques: A review. Int. J. Comput. Sci. Netw. Secur. 2019, 19, 136. [Google Scholar]
  3. Beaman, C.; Barkworth, A.; Akande, T.D.; Hakak, S.; Khan, M.K. Ransomware: Recent advances, analysis, challenges and future research directions. Comput. Secur. 2021, 111, 102490. [Google Scholar] [CrossRef]
  4. Oz, H.; Aris, A.; Levi, A.; Uluagac, A.S. A survey on ransomware: Evolution, taxonomy, and defense solutions. ACM Comput. Surv. 2022, 54, 1–37. [Google Scholar] [CrossRef]
  5. Razaulla, S.; Fachkha, C.; Markarian, C.; Gawanmeh, A.; Mansoor, W.; Fung, B.C.; Assi, C. The age of ransomware: A survey on the evolution, taxonomy, and research directions. IEEE Access 2023, 11, 40698–40723. [Google Scholar] [CrossRef]
  6. A Flawed Ransomware Encryptor. 2015. Available online: https://securelist.com/a-flawed-ransomware-encryptor/69481/ (accessed on 28 August 2025).
  7. The Rise of Low Quality Ransomware G-DATA Security Blog. 2016. Available online: https://www.gdatasoftware.com/blog/2016/09/29157-the-rise-of-low-quality-ransomware (accessed on 28 August 2025).
  8. Kharraz, A.; Robertson, W.; Balzarotti, D.; Bilge, L.; Kirda, E. Cutting the gordian knot: A look under the hood of ransomware attacks. In Proceedings of the International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment, Graz, Austria, 17–19 July 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 3–24. [Google Scholar]
  9. Ruellan, E.; Paquet-Clouston, M.; Garcia, S. Conti Inc.: Understanding the internal discussions of a large ransomware-as-a-service operator with machine learning. Crime Sci. 2024, 13, 16. [Google Scholar] [CrossRef]
  10. Hernandez-Castro, J.; Cartwright, A.; Cartwright, E. An economic analysis of ransomware and its welfare consequences. R. Soc. Open Sci. 2020, 7, 190023. [Google Scholar] [CrossRef]
  11. Jarvis, K. Cryptolocker ransomware. Viitattu 2013, 20, 2014. [Google Scholar]
  12. Liao, K.; Zhao, Z.; Doupé, A.; Ahn, G.J. Behind closed doors: Measurement and analysis of CryptoLocker ransoms in Bitcoin. In Proceedings of the APWG Symposium on Electronic Crime Research (eCrime), Toronto, ON, Canada, 1–3 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–13. [Google Scholar]
  13. Spagnuolo, M.; Maggi, F.; Zanero, S. Bitiodine: Extracting intelligence from the bitcoin network. In Proceedings of the International Conference on Financial Cryptography and Data Security, Christ Church, Barbados, 3–7 March 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 457–468. [Google Scholar]
  14. Connolly, L.Y.; Wall, D.S. The rise of crypto-ransomware in a changing cybercrime landscape: Taxonomising countermeasures. Comput. Secur. 2019, 87, 101568. [Google Scholar] [CrossRef]
  15. Connolly, L.Y.; Wall, D.S.; Lang, M.; Oddson, B. An empirical study of ransomware attacks on organizations: An assessment of severity and salient factors affecting vulnerability. J. Cybersecur. 2020, 6, tyaa023. [Google Scholar] [CrossRef]
  16. Mott, G.; Turner, S.; Nurse, J.R.; MacColl, J.; Sullivan, J.; Cartwright, A.; Cartwright, E. Between a rock and a hard (ening) place: Cyber insurance in the ransomware era. Comput. Secur. 2023, 128, 103162. [Google Scholar] [CrossRef]
  17. Tversky, A.; Kahneman, D. The framing of decisions and the psychology of choice. Science 1981, 211, 453–458. [Google Scholar] [CrossRef]
  18. Barberis, N.C. Thirty years of prospect theory in economics: A review and assessment. J. Econ. Perspect. 2013, 27, 173–196. [Google Scholar] [CrossRef]
  19. Everett, C. Ransomware: To pay or not to pay? Comput. Fraud Secur. 2016, 2016, 8–12. [Google Scholar] [CrossRef]
  20. Halikias, H. The Three Cs of Ransomware. In Digital Shakedown: The Complete Guide to Understanding and Combating Ransomware; Springer: Berlin/Heidelberg, Germany, 2024; pp. 11–24. [Google Scholar]
  21. Exploring the Psychological Mechanisms Used in Ransomware Splash Screens. Sentin. One Rep. 2017. Available online: https://www.sentinelone.com/blog/exploring-psychological-mechanisms-used-ransomware-splash-screens/ (accessed on 28 August 2025).
  22. Kühberger, A. The influence of framing on risky decisions: A meta-analysis. Organ. Behav. Hum. Decis. Process. 1998, 75, 23–55. [Google Scholar] [CrossRef]
  23. Rodríguez-Priego, N.; Van Bavel, R.; Vila, J.; Briggs, P. Framing effects on online security behavior. Front. Psychol. 2020, 11, 527886. [Google Scholar] [CrossRef] [PubMed]
  24. Laszka, A.; Farhang, S.; Grossklags, J. On the economics of ransomware. In Proceedings of the International Conference on Decision and Game Theory for Security; Springer: Berlin/Heidelberg, Germany, 2017; pp. 397–417. [Google Scholar]
  25. Li, Z.; Liao, Q. Game theory of data-selling ransomware. J. Cyber Secur. Mobil. 2021, 10, 65–96. [Google Scholar] [CrossRef]
  26. Zhang, C.; Luo, F.; Ranzi, G. Multistage Game Theoretical Approach for Ransomware Attack and Defense. IEEE Trans. Serv. Comput. 2022, 16, 2800–2811. [Google Scholar] [CrossRef]
  27. Yin, T.; Sarabi, A.; Liu, M. Deterrence, backup, or insurance: Game-theoretic modeling of ransomware. Games 2023, 14, 20. [Google Scholar] [CrossRef]
  28. Arce, D.; Woods, D.W.; Böhme, R. Economics of incident response panels in cyber insurance. Comput. Secur. 2024, 140, 103742. [Google Scholar] [CrossRef]
  29. Cartwright, E.; Hernandez Castro, J.; Cartwright, A. To pay or not: Game theoretic models of ransomware. J. Cybersecur. 2019, 5, tyz009. [Google Scholar] [CrossRef]
  30. Connolly, A.Y.; Borrion, H. Reducing ransomware crime: Analysis of victims’ payment decisions. Comput. Secur. 2022, 119, 102760. [Google Scholar] [CrossRef]
  31. Mott, G.; Turner, S.; Nurse, J.R.; Pattnaik, N.; MacColl, J.; Huesch, P.; Sullivan, J. ‘There was a bit of PTSD every time I walked through the office door’: Ransomware harms and the factors that influence the victim organization’s experience. J. Cybersecur. 2024, 10, tyae013. [Google Scholar] [CrossRef]
  32. McIntyre, D.L.; Frank, R. No Gambles with Information Security: The Victim Psychology of a Ransomware Attack. In Cybercrime in Context: The Human Factor in Victimization, Offending, and Policing; Springer: Berlin/Heidelberg, Germany, 2021; pp. 43–60. [Google Scholar]
  33. Cartwright, A.; Cartwright, E.; Xue, L. Investing in prevention or paying for recovery-attitudes to cyber risk. In Proceedings of the International Conference on Decision and Game Theory for Security, Stockholm, Sweden, 30 October–1 November 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 135–151. [Google Scholar]
  34. Yilmaz, Y.; Cetin, O.; Arief, B.; Hernandez-Castro, J. Investigating the impact of ransomware splash screens. J. Inf. Secur. Appl. 2021, 61, 102934. [Google Scholar] [CrossRef]
  35. Arief, B.; Periam, A.; Cetin, O.; Hernandez-Castro, J.C. Using eyetracker to find ways to mitigate ransomware. In Proceedings of the 6th International Conference on Information Systems Security and Privacy (ICISSP 2020), Valletta, Malta, 25–27 February 2020. [Google Scholar]
  36. Sharma, K.; Zhan, X.; Nah, F.F.H.; Siau, K.; Cheng, M.X. Impact of digital nudging on information security behavior: An experimental study on framing and priming in cybersecurity. Organ. Cybersecur. J. Pract. Process People 2021, 1, 69–91. [Google Scholar] [CrossRef]
  37. Plachkinova, M.; Menard, P. An examination of gain-and loss-framed messaging on smart home security training programs. Inf. Syst. Front. 2022, 24, 1395–1416. [Google Scholar] [CrossRef]
  38. Li, Z.; Liao, Q. Preventive portfolio against data-selling ransomware—A game theory of encryption and deception. Comput. Secur. 2022, 116, 102644. [Google Scholar] [CrossRef]
  39. Cartwright, A.; Cartwright, E.; MacColl, J.; Mott, G.; Turner, S.; Sullivan, J.; Nurse, J.R. How cyber insurance influences the ransomware payment decision: Theory and evidence. Geneva Pap. Risk Insur.-Issues Pract. 2023, 48, 300–331. [Google Scholar] [CrossRef]
  40. Meurs, T.; Cartwright, E.; Cartwright, A.; Junger, M.; Abhishta, A. Deception in double extortion ransomware attacks: An analysis of profitability and credibility. Comput. Secur. 2024, 138, 103670. [Google Scholar] [CrossRef]
  41. Caporusso, N.; Chea, S.; Abukhaled, R. A game-theoretical model of ransomware. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Orlando, FL, USA, 21–25 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 69–78. [Google Scholar]
  42. Cartwright, A.; Cartwright, E.; Xue, L.; Hernandez-Castro, J. An investigation of individual willingness to pay ransomware. J. Financ. Crime 2023, 30, 728–741. [Google Scholar] [CrossRef]
  43. Bekkers, L.; van’t Hoff-De Goede, S.; Misana-ter Huurne, E.; van Houten, Y.; Spithoven, R.; Leukfeldt, E.R. Protecting your business against ransomware attacks? Explaining the motivations of entrepreneurs to take future protective measures against cybercrimes using an extended protection motivation theory model. Comput. Secur. 2023, 127, 103099. [Google Scholar] [CrossRef]
  44. Kahneman, D.; Tversky, A. Prospect Theory: An Analysis of Decision under Risk. Econometrica 1979, 47, 263–292. [Google Scholar] [CrossRef]
  45. Kahneman, D.; Knetsch, J.L.; Thaler, R.H. Anomalies: The endowment effect, loss aversion, and status quo bias. J. Econ. Perspect. 1991, 5, 193–206. [Google Scholar] [CrossRef]
  46. Camerer, C. Three cheers—Psychological, theoretical, empirical—For loss aversion. J. Mark. Res. 2005, 42, 129–133. [Google Scholar] [CrossRef]
  47. Gächter, S.; Johnson, E.J.; Herrmann, A. Individual-level loss aversion in riskless and risky choices. Theory Decis. 2022, 92, 599–624. [Google Scholar] [CrossRef]
  48. Brown, A.L.; Imai, T.; Vieider, F.M.; Camerer, C.F. Meta-analysis of empirical estimates of loss aversion. J. Econ. Lit. 2024, 62, 485–516. [Google Scholar] [CrossRef]
  49. Bateman, I.; Munro, A.; Rhodes, B.; Starmer, C.; Sugden, R. A test of the theory of reference-dependent preferences. Q. J. Econ. 1997, 112, 479–505. [Google Scholar] [CrossRef]
  50. Köszegi, B.; Rabin, M. A model of reference-dependent preferences. Q. J. Econ. 2006, 121, 1133–1165. [Google Scholar] [PubMed]
  51. De Dreu, C.K.; McCusker, C. Gain–loss frames and cooperation in two-person social dilemmas: A transformational analysis. J. Personal. Soc. Psychol. 1997, 72, 1093. [Google Scholar] [CrossRef]
  52. Kern, M.C.; Chugh, D. Bounded ethicality: The perils of loss framing. Psychol. Sci. 2009, 20, 378–384. [Google Scholar] [CrossRef]
  53. Nabi, R.L.; Walter, N.; Oshidary, N.; Endacott, C.G.; Love-Nichols, J.; Lew, Z.; Aune, A. Can emotions capture the elusive gain-loss framing effect? A meta-analysis. Commun. Res. 2020, 47, 1107–1130. [Google Scholar] [CrossRef]
  54. Connelly, B.L.; Certo, S.T.; Ireland, R.D.; Reutzel, C.R. Signaling theory: A review and assessment. J. Manag. 2011, 37, 39–67. [Google Scholar] [CrossRef]
  55. Karimov, F.P.; Brengman, M.; Van Hove, L. The effect of website design dimensions on initial trust: A synthesis of the empirical literature. J. Electron. Commer. Res. 2011, 12. [Google Scholar]
  56. Wells, J.D.; Valacich, J.S.; Hess, T.J. What signal are you sending? How website quality influences perceptions of product quality and purchase intentions. Mis Q. 2011, 35, 373–396. [Google Scholar] [CrossRef]
  57. Aakash, A.; Aggarwal, A.G. Role of EWOM, product satisfaction, and website quality on customer repurchase intention. In Strategy and Superior Performance of Micro and Small Businesses in Volatile Economies; IGI Global: Hershey, PA, USA, 2019; pp. 144–168. [Google Scholar]
  58. Shaw Brown, C.; Sulzer-Azaroff, B. An assessment of the relationship between customer satisfaction and service friendliness. J. Organ. Behav. Manag. 1994, 14, 55–76. [Google Scholar] [CrossRef]
  59. Tsai, W.C.; Huang, Y.M. Mechanisms linking employee affective delivery and customer behavioral intentions. J. Appl. Psychol. 2002, 87, 1001. [Google Scholar] [CrossRef] [PubMed]
  60. El-Ebiary, Y.A.B.; Pathmanathan, P.R.; Tarshany, Y.M.A.; Jusoh, J.A.; Aseh, K.; Al Moaiad, Y.; Al-Kofahi, M.; Pande, B.; Bamansoor, S. Determinants of Customer Purchase Intention Using Zalora Mobile Commerce Application. In Proceedings of the 2021 2nd International Conference on Smart Computing and Electronic Enterprise (ICSCEE), Cameron Highlands, Malaysia, 15–17 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 159–163. [Google Scholar]
  61. Kahneman, D.; Tversky, A. Choices, values, and frames. Am. Psychol. 1984, 39, 341. [Google Scholar] [CrossRef]
  62. Tversky, A.; Kahneman, D. Loss aversion in riskless choice: A reference-dependent model. Q. J. Econ. 1991, 106, 1039–1061. [Google Scholar] [CrossRef]
  63. Loewenstein, G.; Prelec, D. Anomalies in intertemporal choice: Evidence and an interpretation. Q. J. Econ. 1992, 107, 573–597. [Google Scholar] [CrossRef]
  64. Chandon, P.; Hutchinson, J.W.; Bradlow, E.T.; Young, S.H. Does in-store marketing work? Effects of the number and position of shelf facings on brand attention and evaluation at the point of purchase. J. Mark. 2009, 73, 1–17. [Google Scholar] [CrossRef]
  65. Reutskaja, E.; Nagel, R.; Camerer, C.F.; Rangel, A. Search dynamics in consumer choice under time pressure: An eye-tracking study. Am. Econ. Rev. 2011, 101, 900–926. [Google Scholar] [CrossRef]
  66. Cartwright, A.; Cartwright, E. Ransomware and reputation. Games 2019, 10, 26. [Google Scholar] [CrossRef]
  67. Exadaktylos, F.; Espín, A.M.; Branas-Garza, P. Experimental subjects are not different. Sci. Rep. 2013, 3, 1213. [Google Scholar] [CrossRef] [PubMed]
  68. Druckman, J.N.; Kam, C.D. Students as experimental participants. Camb. Handb. Exp. Political Sci. 2011, 1, 41–57. [Google Scholar]
  69. Blake, D.; Cannon, E.; Wright, D. Quantifying loss aversion: Evidence from a UK population survey. J. Risk Uncertain. 2021, 63, 27–57. [Google Scholar] [CrossRef]
  70. Zorabedian, J. Did the FBI Really Say “Pay Up” for Ransomware? Here’s What to Do… 2015. Available online: https://news.sophos.com/en-us/2015/10/28/did-the-fbi-really-say-pay-up-for-ransomware-heres-what-to-do/ (accessed on 28 August 2025).
Figure 1. Explanation of differences between gain and loss frame in terms of splash screen, victim thought process and model outcomes.
Figure 2. The maximum ransom under a gain frame, R_G, and a loss frame, R_L, and the difference in p needed for R_G = R_L.
Figure 3. An example of a rating task for the Cerber splash-screen example.
Figure 4. An example of a ranking task.
Figure 5. Radar plot of WTP and trust across ransomware splash screens. Note: The graph presents the average ratings of WTP and trust across the 8 ransomware examples. Rating levels from centre to outside clusters are 0, 2, 4 and 6.
Table 1. Characteristics of the 8 splash screens.
Characteristic     CryptoWall  CryptoLocker  WannaCry  CTBLocker  TorrentLocker  Cerber  Petya  Locky
1. Words           121         135           178       153        101            68      106    94
2. Software        Y           N             N         N          Y              Y       N      Y
3. Timer           Y           Y             Y         Y          N              Y       N      N
4. Price rise      Y           N             Y         N          N              Y       N      N
5. Free sample     Y           N             Y         N          N              Y       N      N
6. How to pay      3           1             1         0          1              0       3      0
7. Encryption      0           2             0         1          1              0       0      3
8. Files           3           1             0         1          0              0       0      0
9. Positive        1           0             1         0          2              0       2      0
10. Negative       0           4             2         4          0              0       1      0
11. Text colour    Black       Black         Black     Red        Blue           Black   Pink   Black
12. Background     Blue        Red           Red       Black      White          White   Black  White
13. Logo           N           Y             Y         N          N              N       N      N
Total Score *      11          10            8         6          5              4       6      4
* Note: The total score was calculated by adding up 10 of the 13 features in the table, excluding word count, text colour and background colour; that is, it sums rows 2–10 plus the Logo row, where 'Y' is coded as one and 'N' as zero.
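The total-score construction described in the note above can be sketched in a few lines. This is our own illustration, not the authors' code; the dictionary below simply transcribes the CryptoWall and CryptoLocker columns of Table 1.

```python
# Sketch (not from the paper): reproducing the Total Score in Table 1.
# Y/N features are coded 1/0; count features (how-to-pay, encryption,
# files, positive, negative) enter at their stated counts. Word count,
# text colour and background colour are excluded.

# The 10 scored features (rows 2-10 and Logo), for two of the eight screens.
features = {
    "CryptoWall":   {"software": 1, "timer": 1, "price_rise": 1, "free_sample": 1,
                     "how_to_pay": 3, "encryption": 0, "files": 3,
                     "positive": 1, "negative": 0, "logo": 0},
    "CryptoLocker": {"software": 0, "timer": 1, "price_rise": 0, "free_sample": 0,
                     "how_to_pay": 1, "encryption": 2, "files": 1,
                     "positive": 0, "negative": 4, "logo": 1},
}

def total_score(screen: dict) -> int:
    """Sum the 10 scored features of one splash screen."""
    return sum(screen.values())

for name, feats in features.items():
    print(name, total_score(feats))  # CryptoWall 11, CryptoLocker 10
```

Both values match the Total Score row of Table 1.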
Table 2. Average rating of experiment questions across ransomware splash screens.
Ransomware     WTP   Trust  Fast  Anger  Helpful  Positivity
CryptoWall     5.59  5.40   6.22  7.89   5.56     5.33
CryptoLocker   5.37  4.87   6.22  7.96   4.87     4.99
WannaCry       5.16  4.88   5.83  7.75   5.30     5.28
CTBLocker      5.11  4.36   5.88  7.98   5.01     5.12
TorrentLocker  4.67  4.66   5.18  7.74   5.25     5.01
Cerber         4.45  4.10   6.00  7.90   4.11     4.92
Petya          4.03  3.70   5.58  8.19   4.31     4.65
Locky          4.01  3.97   5.44  7.65   4.57     4.94
Mean           4.80  4.49   5.79  7.88   4.87     5.03
The table reports the average rating of each ransomware splash screen on the six experiment questions. All questions are on a 10-point Likert scale. A rating of one represents least likely to pay, least likely to provide the key, least fast, least angry, least helpful and least positive feelings conditional on recovery, respectively, for the six questions.
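As a quick arithmetic check (ours, not the authors'), the Mean row of Table 2 is the unweighted average of the eight screen means; for the WTP column:

```python
# Verifying the Mean row of Table 2 for the WTP column:
# the grand mean is the unweighted average of the eight screen means.
wtp = [5.59, 5.37, 5.16, 5.11, 4.67, 4.45, 4.03, 4.01]
mean_wtp = sum(wtp) / len(wtp)
print(f"{mean_wtp:.2f}")  # 4.80, matching the table
```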
Table 3. Average ranking across the ransomware splash screens.
Ransomware     WTP   Trust  Fast  Angry  Helpful  Positivity
CryptoWall     3.13  3.39   3.41  5.34   3.12     3.62
CryptoLocker   3.18  3.86   3.17  3.99   4.11     3.88
WannaCry       3.34  3.22   3.09  4.14   3.06     3.55
CTBLocker      4.30  4.82   4.60  3.79   4.65     4.90
TorrentLocker  4.52  3.82   5.23  5.35   3.51     3.94
Cerber         5.09  5.27   4.15  4.06   5.80     4.90
Locky          6.02  5.70   6.19  4.78   5.69     5.32
Petya          6.42  5.92   6.15  4.55   6.07     5.89
Observations   88    88     86    85     89       82
The table reports the average ranking of the eight ransomware splash screens across the six experiment questions. The ranking is from one to eight. A ranking of one represents most likely to pay, most likely to return the key, most quickly, most angry, most helpful and most positive, respectively, for the six dimensions.
Table 4. Linear regression for each of the 6 measured aspects.
                 (1)        (2)        (3)        (4)        (5)        (6)
Dependent var.   WTP        Trust      Fast       Anger      Helpful    Positivity
Trust            0.531 ***             0.00364    -0.0735    0.241 ***  0.108
                 (0.0757)              (0.0809)   (0.0688)   (0.0728)   (0.103)
Fast             0.0196     0.00216               -0.148 **  0.00285    0.0914
                 (0.0767)   (0.0480)              (0.0693)   (0.0800)   (0.0857)
Anger            0.116      -0.0626    -0.212 **             -0.0845    -0.0483
                 (0.0776)   (0.0601)   (0.0837)              (0.0799)   (0.0929)
Helpful          0.129 *    0.144 ***  0.00288    -0.0596               -0.00161
                 (0.0771)   (0.0437)   (0.0809)   (0.0539)              (0.0792)
Positivity       0.241 ***  0.0658     0.0939     -0.0345    -0.00164
                 (0.0869)   (0.0599)   (0.0890)   (0.0672)   (0.0804)
WTP                         0.402 ***  0.0250     0.103      0.162 *    0.299 ***
                            (0.0563)   (0.0976)   (0.0691)   (0.0899)   (0.101)
Constant         -0.448     2.000 ***  6.846 ***  9.045 ***  3.673 ***  2.975 ***
                 (0.817)    (0.570)    (0.852)    (0.419)    (0.933)    (0.862)
Observations     738        738        738        738        738        738
R-squared        0.370      0.334      0.046      0.049      0.120      0.152
Linear regression with standard errors robust to clustering at the subject level. Robust standard errors are in parentheses; *** p < 0.01, ** p < 0.05, * p < 0.1. The unit of observation is a respondent's rating of a splash screen on each of the six aspects. In total, we have 738 out of 744 observations due to missing values. The dependent variable of each specification is given in the top row of the table.
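The type of specification reported in Table 4, column (1), OLS of WTP on the other five ratings with subject-clustered standard errors, can be sketched as follows. This is a minimal illustration on synthetic data, assuming pandas and statsmodels; the variable names are our own and the data-generating process is chosen so that WTP loads mainly on trust, loosely mirroring the paper's headline finding.

```python
# Sketch of a Table 4-style regression (column 1) on synthetic data:
# OLS of WTP on the other five ratings, standard errors clustered by subject.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_screens = 93, 8  # 93 subjects x 8 splash screens = 744 ratings
n = n_subjects * n_screens
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_screens),
    "trust": rng.integers(1, 11, n),       # 10-point Likert ratings
    "fast": rng.integers(1, 11, n),
    "anger": rng.integers(1, 11, n),
    "helpful": rng.integers(1, 11, n),
    "positivity": rng.integers(1, 11, n),
})
# Synthetic WTP constructed to depend mainly on trust.
df["wtp"] = 0.5 * df["trust"] + rng.normal(0, 1, n)

model = smf.ols("wtp ~ trust + fast + anger + helpful + positivity", data=df)
# Cluster-robust covariance at the subject level, as in the paper.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["subject"]})
print(result.params["trust"])  # close to 0.5 by construction
```

Note the synthetic panel is balanced (744 observations), whereas the paper drops 6 ratings to missing values.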