Article

Revealing Risk Preferences Through AI Prompting Effort

by Brian A. Toney 1,*, Gregory G. Lubiani 2 and Albert A. Okunade 3
1 College of Agricultural Sciences & Natural Resources, East Texas A&M University, 2600 W. Neal St., Commerce, TX 75428, USA
2 Department of Accounting, Finance, Economics, & Business Law, East Texas A&M University, 2600 W. Neal St., Commerce, TX 75428, USA
3 Department of Economics, The University of Memphis, Memphis, TN 38152, USA
* Author to whom correspondence should be addressed.
J. Risk Financial Manag. 2026, 19(4), 269; https://doi.org/10.3390/jrfm19040269
Submission received: 7 March 2026 / Revised: 4 April 2026 / Accepted: 6 April 2026 / Published: 8 April 2026

Abstract

This paper analyzes “prompt engineering” through the economic lens of self-insurance against the risk of errors from noisy AI systems. To formalize this approach, we model an agent under cognitive load who allocates effort between working unassisted and prompting an AI assistant. The theoretical model demonstrates that an agent’s optimal prompting effort is driven by the agent’s attitude toward risk: risk-averse agents rationally “over-invest” in prompting effort, while risk-seeking agents “under-invest” relative to the risk-neutral benchmark. This outcome stems from the covariance between the marginal utility of performance and the marginal product of prompting, an alignment that is positive for risk-averse agents and effectively boosts the AI’s perceived productivity. The novel implication is that prompting effort is an economically meaningful behavior: under comparable task conditions and controlling for prompting ability, observed prompting effort may be informative about an individual’s underlying attitude toward downside AI risk. These results offer a new perspective for understanding heterogeneity in AI adoption and oversight, and the framework provides a risk-management perspective on AI governance in high-stakes settings such as healthcare and finance.

1. Introduction

Artificial Intelligence (AI) is poised to substantially increase productivity across virtually all sectors of the global economy, affecting professions ranging from medicine and law to engineering and economics (Brynjolfsson et al., 2025; Noy & Zhang, 2023). In high-stakes industries such as healthcare, which constitutes nearly 18 percent of U.S. GDP (Dumont et al., 2015), AI systems are increasingly embedded in diagnostic, screening, and treatment workflows (Chen et al., 2025; Flora & Maniago, 2025; Starke, 2025). While these technologies promise efficiency gains and cost reductions, they also introduce material operational risk because stochastic model errors can generate severe downstream consequences. As AI becomes increasingly integrated into professional workflows, more information and guidance are therefore needed to ensure that individuals optimally leverage AI when cognitive resources are scarce. Existing studies provide valuable qualitative guidance on prompt engineering (Dell’Acqua et al., 2026; Jahani et al., 2024; Sahoo et al., 2024; Zamfirescu-Pereira et al., 2023) and quantitative analyses of technology adoption in general settings (Autor et al., 1998, 2003; Comin & Hobijn, 2010; Li & Peter, 2021). However, previous work either remains qualitative or considers only users’ endogenous mistakes. This literature leaves unexplained the role of the exogenous stochastic errors that are intrinsic to contemporary large language models (Vaswani et al., 2017), particularly in regulated environments where liability for algorithmic misclassification cannot be fully insured and remains with human decision makers. None of these studies endogenize the trade-off between effort spent producing and effort spent hedging against AI errors. As a result, there is no normative, micro-founded framework for how professionals should optimally divide effort between direct work and prompt refinement when AI performance is inherently stochastic.
In this paper we develop such a framework, treating prompt design as a risk-mitigating investment (Dionne & Eeckhoudt, 1985; Ehrlich & Becker, 1972) and showing how an agent’s risk preferences determine the optimal allocation of effort between prompting and direct production. Our central result (Theorem 1) is that, because prompting effort is a substitute for exogenous shocks to AI performance, risk-averse agents rationally “over-invest” in AI enhancement, risk-neutral agents invest until the expected marginal product equals that of unassisted task effort, and risk-seeking agents “under-invest.” Consequently, under the model’s assumptions and controlling for prompting ability, observed prompting effort can serve as an informative indicator of individual attitudes toward downside AI risk. We illustrate the framework’s mechanics in a simple Cobb–Douglas setting, where the closed-form comparative statics follow directly from Theorem 1. This example shows how departures from the risk-neutral benchmark map onto risk aversion, neutrality, or risk-seeking preferences within the model, thereby clarifying the conditions under which prompting behavior can be informative about underlying risk preferences.
Our framework rationalizes empirical findings that users increase monitoring after witnessing an AI error (Dietvorst et al., 2015, 2018; Lenskjold et al., 2023), as a negative realized shock in AI performance raises the marginal product of a follow-up, corrective prompt. This mechanism connects the growing “algorithm-aversion” literature in human–AI interaction with classic models of self-insurance and self-protection under risk (Dionne & Eeckhoudt, 1985; Ehrlich & Becker, 1972), showing that precautionary prompting is a cognitive form of self-protection, analogous to physical risk-mitigation behaviors like wearing a seatbelt or purchasing antivirus software. A unique feature of our decision environment, distinguishing it from previous work on self-protection, is the absence of an insurance market covering the errors made by noisy AI systems. For instance, a lawyer who relies on GPT to draft a legal brief faces reputational or disciplinary risk that cannot be offloaded through conventional insurance. Hence, in the absence of insurance markets protecting against AI errors, users of AI must self-insure against the AI’s inherent randomness to manage this downside risk. This market failure helps motivate Assumption 1 in the class of environments studied, rather than serving merely as a technical assumption for mathematical convenience.
In heavily regulated environments such as healthcare, law, and finance, the liability associated with algorithmic errors ultimately remains with human decision makers and the institutions that employ them. Clinical providers, attorneys, and financial managers cannot fully transfer the consequences of AI misclassification, biased recommendations, or model failure through conventional insurance mechanisms, particularly when harms arise from context-specific misuse or insufficient oversight (Goldberg et al., 2026). As a result, AI adoption in these settings introduces a form of non-diversifiable operational risk that must be managed internally. The effort devoted to monitoring, refining prompts, and verifying outputs therefore functions as a form of self-insurance against stochastic AI performance. In this sense, corrective prompting and oversight are not merely technical adjustments but economically meaningful protective investments undertaken to mitigate downside exposure in environments where reputational, financial, and regulatory consequences are material.
Although we frame the analysis in terms of ‘prompting’ a noisy AI assistant, the same risk-sensitive allocation logic applies whenever a decision maker chooses between (i) direct production effort and (ii) monitoring effort that reduces downside risk from exogenous errors in any supporting technology or personnel. Examples include a principal investigator (PI) deciding how much to code personally versus writing detailed instructions and checks for a research assistant; a physician deciding how much to rely on the recommendations of their nursing staff versus double-checking test results themselves; or a manager trading off direct task work against overseeing the work completed by their administrative assistant. In all cases, Assumption 1 is intended to capture settings in which protective effort is most valuable in bad states, so risk-averse agents optimally allocate more to protection than the risk-neutral benchmark, while risk-seeking agents allocate less. This broader view enhances our understanding of heterogeneous adoption and oversight well beyond text prompting.
The remainder of the paper is organized as follows: Section 2 reviews the relevant literature in psychology, economics, and human–AI interaction. Section 3 presents the formal model of effort allocation. Section 4 walks through a simple Cobb–Douglas application of the model. Section 5 provides concluding remarks.

2. Literature Review

The concept of resource-limited cognition, which posits that humans have finite cognitive resources for processing information and performing tasks, provides the foundation for optimization of resources between humans and technology. Early research on dual-task interference framed performance limits in terms of serial processing constraints, suggesting that prioritizing processing for one task can impair performance on a concurrent second task (Telford, 1931; Vince, 1949; Welford, 1952). Seminal work by Norman and Bobrow (1975) differentiated between data-limited and resource-limited processes, arguing that performance is often constrained by the availability of cognitive resources, not just information. The influential model of attention and effort created by Kahneman (1973) further established that cognitive capacity is a limited resource that must be strategically allocated. The concept of “economy of the human-processing system” by Navon and Gopher (1979) is especially relevant to our model because it frames dual-task performance as an allocation problem of limited cognitive resources across interfering tasks according to their preferences. Later dual-task research expanded this early resource-limited framework by studying parallel processing (Fischer & Plessow, 2015), while more recent work has also investigated learning effects and error patterns (Hommel, 2020; Salvucci & Taatgen, 2008; Strobach et al., 2015).
Indeed, dual-task performance can be understood as a constrained optimization problem in which agents must decide how much cognitive effort to allocate to direct production versus technology-enhancing activities such as prompting and oversight. These studies provided the groundwork for understanding cognitive resources as analogous to other scarce resources, subject to allocation decisions that impact performance. This perspective is further reinforced by more recent psychological research exploring the implications of cognitive resource limitations in various domains (Engle, 2018). In the model presented here, the scarcity of cognitive effort is captured by the Lagrange multiplier, which serves as the shadow price of the agent’s cognitive budget constraint.
While the core principle of resource limitation is widely accepted, its precise implications for decision-making remain an active area of research. For instance, Deck and Jahedi (2015) demonstrated that individuals under high cognitive load tend to rely more on heuristics, indicating suboptimal resource allocation, while Hagger et al. (2010) showed that cognitive resource depletion can impair self-regulation. These results suggest that cognitive limitations can lead to deviations from purely rational behavior. Although alternative perspectives on cognitive resources exist (Inzlicht et al., 2021; Tuk et al., 2015), they do not negate the fundamental constraint of limited cognitive capacity, which underscores the inherent limitations of human cognition (Palma et al., 2018).
Complementing the psychological perspective, economic models have long recognized the importance of resource allocation under constraints. Becker (1965), modeling the allocation of time, highlighted the trade-off between labor and leisure, mirroring the tension between direct task execution and AI development in the model presented here. Modern economic theory, particularly the concept of opportunity cost, emphasizes constrained optimization as a central element of decision-making, with applications ranging from consumer choice to firm behavior (Varian, 1992). Behavioral economics further develops these principles by demonstrating how cognitive limitations influence economic choices (Thaler, 2016). For example, research on intertemporal choice highlights the challenges of resource allocation when decisions involve trade-offs between immediate and future benefits, a challenge central to the allocation of effort between direct tasks and AI development (Kim & Zauberman, 2019; Loewenstein, 1992).
Building on the foundational expected-utility framework of Von Neumann and Morgenstern (1947), our analysis expands upon the covariance approach to production under uncertainty. The idea of this approach is best understood by examining two primary areas of literature concerning how an agent’s effort interacts with risk. In the price uncertainty literature, effort is a complement to a positive shock in production: a higher market price increases the marginal return of producing more output (Kihlstrom & Mirman, 1974; Leland, 1972; Sandmo, 1971). For a risk-averse agent with a concave production function, this positive relationship between the random shock and the marginal product of effort leads to a negative covariance between marginal utility and marginal product. As a result, risk-averse firms produce less than their risk-neutral and risk-seeking counterparts.
Other research, though, considers inputs as substitutes for positive shocks in production. This assumption is fundamental to models of self-insurance and self-protection, where effort is a ‘protective activity’ (Dionne & Eeckhoudt, 1985; Ehrlich & Becker, 1972). In this case, a positive shock reduces the marginal value of the input. The idea is that, because protective effort is a substitute for good luck, a positive shock leaves less risk to offset, so each additional unit of protection yields lower marginal value if a positive state is realized. This reverses the sign of the covariance, leading risk-averse agents to optimally ‘over-invest’ in the risk-reducing activity, a result often termed a ‘precautionary’ increase in effort (Jullien et al., 1999).
Li and Peter (2021) examined how technology risk affects a single risk-mitigation effort within a self-insurance and prudence framework (Kimball, 1989), allowing inputs to be either substitutes for or complements to technology shocks. We, in contrast, model the optimal allocation of effort between direct work and prompting an AI system whose output is stochastic. This distinction is crucial because our model shifts the focus from the agent’s endogenous misjudgments of the AI system’s ability to the exogenous errors made by noisy AI systems (Zamfirescu-Pereira et al., 2023). This framework is better suited to understanding real-world trade-offs in modern AI workflows (Brynjolfsson et al., 2017).
A sizable literature on the consumption-based capital asset pricing model (CCAPM) has also studied the covariance mechanism from Section 3 extensively. Starting with exchange and intertemporal pricing results (Breeden, 1979; Lucas, 1978) and their empirical implementation (Hansen & Singleton, 1982), the CCAPM values payoffs by how they co-move with the intertemporal marginal rate of substitution; insurance-like payoffs that deliver in ‘bad’ states are more valuable and therefore require a lower expected return (Cochrane, 2009). In this context, our setting treats normalized marginal utility as the stochastic discount factor and the marginal product of AI enhancement as the return.
When prompting and AI shocks are substitutes, as assumed in Section 3, the marginal product of prompting is highest when the AI realizes a bad state. For a risk-averse agent, marginal utility is also highest in those states, so the covariance between marginal utility and the marginal product of prompting is positive. As a result, the risk-adjusted value of prompting is high, and the agent optimally accepts a lower expected marginal product than a risk-neutral benchmark, what we term rational “over-investment” in protection. By contrast, leveraging the same AI system, a risk-loving agent values additional performance more in good states; because the marginal product is largest in bad states, the covariance turns negative, and the optimum sets a higher expected marginal product, leading to an ‘under-investment’ relative to the risk-neutral benchmark. Similarly, production-based asset models ‘price’ technology by how their marginal products co-vary with the stochastic discount factor, placing our dual-task model squarely within that tradition (Cochrane, 1991, 2021).
Taken together, these separate strands of literature converge on a single point: when the marginal return on an input is highest in the states the agent values most, rational agents invest more in that input, not less. A central contribution of the research presented here is the application of this idea to human–AI workflows, characterizing prompt-engineering effort as a form of self-protection that minimizes the downside risk of AI errors. By deriving the covariance condition from a simple production-risk model within a dual-task cognitive budget, our framework rationalizes why risk-averse individuals rationally “over-invest” in prompting.
This insight suggests that, controlling for prompting ability and task conditions, variation in prompting effort may be informative about underlying attitudes toward downside AI risk. At the same time, the behavioral literature does not imply a uniform response to AI outputs. The algorithm-aversion literature documents increased vigilance after observed errors (Dietvorst et al., 2015), as well as greater willingness to use imperfect algorithms when users retain limited control over the output (Dietvorst et al., 2018). However, the broader automation literature cautions that under-reliance and over-reliance can both arise depending on context (Parasuraman & Riley, 1997). The model presented here is intended for environments in which poor realized AI states induce corrective prompting and monitoring. Thus, the paper bridges resource-limited cognition and risk-sensitive production theory while providing a disciplined theoretical basis for future work on the economic forces shaping AI adoption, monitoring, and governance.

3. Theoretical Model

To formalize how an agent allocates effort between tasks, consider an agent with a finite cognitive bandwidth $C \in \mathbb{R}_{++}$ (Navon & Gopher, 1979; Simon, 1955; Sims, 2003). This agent allocates scarce cognitive resources between two competing tasks (Task 1 and Task 2), analogous to the dual-task paradigm studied extensively in psychology (Pashler, 1994) and economics (Buser & Peter, 2012; Holmstrom & Milgrom, 1991). In Task 1, the agent can complete the task themselves with effort $a_1 \in \mathbb{R}_{+}$, or use an AI system to complete Task 1 on their behalf, in which case they must allocate resources $e \in \mathbb{R}_{+}$ to write and refine prompts. The effort $e$ directed toward enhancing the AI system on Task 1 generates a product that the unassisted agent could have completed with $\alpha = f(e, \xi)$ units of effort, where $f(e, \xi)$ is a strictly increasing, concave function and $\xi$ is a ‘temperature’-like random shock with integrable cumulative distribution function $F_{\xi}$.1 At the same time, the agent must complete Task 2 on their own with effort $a_2 \in \mathbb{R}_{+}$. In both tasks, the agent derives utility from their performance. Task 1 and Task 2 performances are denoted $p_1 = a_1 + \alpha$ and $p_2 = a_2$, respectively, and enter the utility function $U(p_1, p_2)$ (Drichoutis & Nayga, 2020), which is assumed to be twice differentiable and strictly increasing.
It is useful to interpret the choice variable $e$ as overall prompting effort. In practice, this includes both the quantity of interaction with the model, such as repeated prompting or follow-up refinement, and the quality of prompt design, such as specificity, structure, and contextualization of instructions. Likewise, although the model treats the AI production function $f(e, \xi)$ as fixed, prompting ability could differ across users. Holding the AI system fixed, users with greater prompting ability obtain a larger marginal increase in Task 1 performance from an additional unit of prompting effort, so $\partial f(e, \xi)/\partial e$ is larger at a given model state. We abstract from that additional source of heterogeneity to isolate the role of risk preferences in prompting behavior, so the comparative statics below should be interpreted conditional on prompting ability. The shock term $\xi$ is likewise a reduced form, possibly capturing miscalibration, latent bias, or context-specific misalignment that affects AI performance independently of the user’s chosen effort. These simplifications are deliberate: the model holds prompting ability fixed, collapses prompt quantity and prompt quality into a single reduced-form choice variable, and treats AI error with a reduced-form stochastic term in order to isolate the risk-preference mechanism at the center of Theorem 1. The framework can be extended to allow for heterogeneous prompting ability, separate intensive and qualitative margins of prompting, and richer AI error distributions.
In a clinical setting, Task 1 can represent AI-assisted diagnosis or risk prediction, where performance depends jointly on human input and algorithmic output, while Task 2 represents non-automatable human judgment tasks such as contextual evaluation, ethical deliberation, or direct patient interaction. Within this interpretation, the stochastic shock ξ captures model miscalibration, training bias, or context-specific misalignment that affects AI performance independently of the user’s chosen effort. This framing illustrates how the formal structure applies to a multitude of regulated environments.
Since the AI system’s exact response is unknown before the prompt is submitted, the agent must make decisions to maximize the expected utility derived from the AI system’s response (Savage, 1972). It follows that the decision problem faced by the agent is given by Expression (1):
$$\max_{a_1,\, e,\, a_2} \;\; \mathbb{E}\!\left[\, U\!\left(a_1 + f(e, \xi),\; a_2\right) \right] \quad \text{s.t.} \quad a_1 + e + a_2 = C, \qquad a_1, e, a_2 \geq 0. \tag{1}$$
For the model illustration, all costs of effort are normalized to 1 in the budget constraint $a_1 + e + a_2 = C$, since all forms of cognitive effort are ultimately measured in uniform “units” drawn from the same finite resource pool (e.g., attention, working memory, mental bandwidth). This is supported by foundational work in cognitive psychology and economics on attention and working memory (Baddeley, 1992; Kahneman, 1973), data-limited and resource-limited processes (Norman & Bobrow, 1975), and the concept of cognitive bandwidth as an economic resource (Caplin, 2016; Wojtowicz & Loewenstein, 2023). Fundamentally, the effort exerted to write and refine AI prompts is equivalent to the effort exerted to perform the task directly, as both draw from the same limited cognitive reserves (Zamfirescu-Pereira et al., 2023). Whether an agent exerts effort crafting a prompt to elicit a desired response from an AI or performs the task manually and unassisted, both activities draw on the same constrained mental resources $C$, similar to those outlined in the limited-resource model (Baumeister et al., 2018) and in multi-task settings (Borghini et al., 2012). This approach mirrors standard microeconomic theory, where diverse resources are often aggregated into a single budget constraint and assigned a unitary “price” for analytical tractability, simplifying the analysis without loss of generality (Varian, 1992). By expressing all activities in terms of a common resource pool, the model focuses on the essential trade-offs between competing uses of effort and attention, rather than on arbitrary scaling factors that do not affect the underlying optimization problem (Mas-Colell et al., 1995). Therefore, the cognitive effort expended to improve an AI system is directly comparable to the effort required to complete the task independently, as both are essentially interchangeable claims on the same constrained cognitive resource.
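As a concrete illustration of decision problem (1), the following is a minimal numerical sketch, assuming a symmetric square-root Cobb–Douglas utility, an AI production function $f(e, \xi) = (e + \xi)^{0.7}$, a uniform shock, and a cognitive budget $C = 4$; all functional forms, parameter values, and the use of scipy are illustrative choices rather than part of the model.

```python
# Minimal numerical sketch of decision problem (1).
# Assumed (illustrative, not from the paper): U(p1, p2) = (p1 * p2) ** 0.5,
# f(e, xi) = (e + xi) ** 0.7, xi ~ Uniform(0, 0.5), and cognitive budget C = 4.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xi = rng.uniform(0.0, 0.5, size=20_000)    # draws of the 'temperature'-like shock
C = 4.0                                    # cognitive budget

def f(e, xi):
    """Assumed AI production function: strictly increasing and concave in e."""
    return (e + xi) ** 0.7

def neg_expected_utility(x):
    """-E[U(a1 + f(e, xi), a2)] for an allocation x = (a1, e), a2 from the budget."""
    a1, e = x
    a2 = max(C - a1 - e, 1e-9)             # guard against infeasible trial points
    p1 = a1 + f(e, xi)                     # stochastic Task 1 performance
    p2 = a2                                # deterministic Task 2 performance
    return -np.mean(np.sqrt(p1 * p2))      # assumed utility U = sqrt(p1 * p2)

res = minimize(neg_expected_utility, x0=[1.5, 0.5],
               bounds=[(1e-6, C), (1e-6, C)],
               constraints=[{"type": "ineq", "fun": lambda x: C - x[0] - x[1]}])

a1_star, e_star = res.x
a2_star = C - a1_star - e_star
print(f"a1* = {a1_star:.3f}, e* = {e_star:.3f}, a2* = {a2_star:.3f}")
```

Under these assumed parameters the solver returns a strictly interior split of the cognitive budget across direct Task 1 effort, prompting, and Task 2, which is the case analyzed below.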
This analysis focuses on the interior solution where all choice variables $(a_1, e, a_2)$ are strictly positive. Notably, when focusing on an interior solution, standard first-order optimality conditions often suffice to characterize how the agent distributes resources across tasks. Intuitively, as AI systems become increasingly advanced, corresponding to a larger $\mathbb{E}[\partial f(e, \xi)/\partial e]$, people will integrate more AI enhancements into their workflows. This increase in productivity corresponds to a greater incentive for having AI systems assist with all of the work on people’s behalf. However, a trade-off of reallocating cognitive resources from completing the task unassisted to improving AI systems is the reduced oversight of the latent process determining the observed performance. As people spend a greater amount of time improving the AI system to complete a task on their behalf, rather than improving their own performance on the task, there will be reduced human oversight to ensure the validity of the final product. In essence, an interior solution in the model’s framework guarantees that the agent leveraging AI (i.e., $e^* > 0$) retains some control over the process in Task 1 with their input $a_1^* > 0$, while also attending to tasks which cannot be replaced with AI with their input $a_2^* > 0$.
Further strengthening this concept, the interior solution to decision problem (1) can be found using the Lagrange multiplier method. Proposition 1 presents the first-order condition of the agent’s decision problem, providing insight into how people optimize the use of AI while ensuring that its work is accurate.
Proposition 1.
A unique interior stationary allocation $(a_1^*, e^*, a_2^*)$ that solves decision problem (1) satisfies
$$\mathrm{Cov}\!\left( \frac{\partial U(p_1^*, p_2^*)}{\partial p_1},\; \frac{\partial f(e^*, \xi)}{\partial e} \right) = \mathbb{E}\!\left[ \frac{\partial U(p_1^*, p_2^*)}{\partial p_1} \right]\left( 1 - \mathbb{E}\!\left[ \frac{\partial f(e^*, \xi)}{\partial e} \right] \right) \tag{2}$$
Proof. 
Provided in Appendix A.1. □
Expression (2) describes the relationship between the expected marginal product of AI enhancement (i.e., $\mathbb{E}[\partial f(e^*, \xi)/\partial e]$) and the expected marginal utility of increased task performance (i.e., $\mathbb{E}[\partial U(p_1^*, p_2^*)/\partial p_1]$) at the optimal effort allocation $e^*$.2 While the former is the technological marginal product measure describing how an extra unit of prompting effort boosts the performance of the AI, the latter is a preference-based marginal utility measure describing how an incremental increase in Task 1 performance boosts expected utility. If the AI system is degenerate, the error structure of the stochastic AI system is additive, or the agent is risk-neutral, the covariance term is zero, leading to the standard condition $\mathbb{E}[\partial f(e^*, \xi)/\partial e] = 1$. We formalize this intuition with Corollary 1 of Proposition 1.
Corollary 1.
If the AI system is degenerate (i.e., $\mathrm{Var}(\xi) = 0$), the AI error structure is additive (i.e., $\partial^2 f(e^*, \xi)/\partial \xi \, \partial e = 0$), or the agent is risk-neutral (i.e., $\partial U(p_1^*, p_2^*)/\partial p_1 = c > 0$), then the optimal expected marginal product of AI enhancement equals the marginal product of working on Task 1 unassisted, i.e.,
$$\mathrm{Var}(\xi) = 0 \;\vee\; \frac{\partial^2 f(e^*, \xi)}{\partial \xi \, \partial e} = 0 \;\vee\; \frac{\partial U(p_1^*, p_2^*)}{\partial p_1} = c \;\;\Longrightarrow\;\; \mathbb{E}\!\left[ \frac{\partial f(e^*, \xi)}{\partial e} \right] = 1$$
Proof. 
Provided in Appendix A.2. □
Similarly to the expected value of lotteries, $\mathbb{E}[\partial f(e^*, \xi)/\partial e] = 1$ serves as the ‘rational’ baseline for risk-neutral agents. Much like how traditional risk-neutral agents are indifferent to the variance of lotteries, our risk-neutral agent is indifferent to the covariance between the marginal product of AI enhancement and the marginal utility of increased Task 1 performance. Because the marginal utility of performance is constant for a risk-neutral agent, they are indifferent to the AI’s ability to be more productive in high-stakes situations (Otten, 2009), as all situations hold the same marginal utility for them. Simply put, if AI systems are unable to affect the marginal utility of task performance through their exogenous shocks, agents behave rationally in the traditional sense by exerting prompting effort up to the level at which the expected marginal product of AI enhancement equals the marginal product of working on Task 1 unassisted.
This formulation implies that prompting effort carries a risk adjustment analogous to the pricing of risky assets in financial markets. When the marginal product of prompting covaries positively with marginal utility, the agent accepts a lower expected marginal product in exchange for insurance-like performance in adverse states, much like accepting a lower expected return for holding an asset that performs well during downturns. Conversely, when the covariance is negative, the agent requires a higher expected marginal product to justify additional effort. In institutional settings, this mechanism suggests that observed monitoring intensity may reflect an underlying risk adjustment embedded in governance decisions surrounding AI deployment.
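To make this asset-pricing analogy concrete, the following is a minimal numerical sketch of the Euler-equation reading of Expression (2) (see Note 2), under assumed functional forms and parameters that are purely illustrative: symmetric Cobb–Douglas utility with exponent 0.5, $f(e, \xi) = (e + \xi)^{0.7}$, a uniform shock, and Task 2 effort held fixed for simplicity.

```python
# Numerical sketch of the Euler-equation reading of Expression (2) (see Note 2):
# at the optimal e*, E[m * R] should equal 1, with m = (dU/dp1) / E[dU/dp1] the
# 'stochastic discount factor' and R = df/de the 'return' on prompting effort.
# All functional forms and parameter values below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
xi = rng.uniform(0.0, 0.5, size=100_000)
C, a2, rho, delta = 4.0, 2.0, 0.7, 0.5      # Task 2 effort a2 held fixed for simplicity

def neg_eu(e):
    """-E[U(p1, p2)] with U = (p1 * p2)**delta and a1 = C - a2 - e."""
    a1 = C - a2 - e
    p1 = a1 + (e + xi) ** rho
    return -np.mean((p1 * a2) ** delta)

e_star = minimize_scalar(neg_eu, bounds=(1e-4, C - a2 - 1e-4), method="bounded").x
a1_star = C - a2 - e_star
p1 = a1_star + (e_star + xi) ** rho

marginal_utility = delta * p1 ** (delta - 1) * a2 ** delta   # dU/dp1, state by state
m = marginal_utility / marginal_utility.mean()               # normalized marginal utility
R = rho * (e_star + xi) ** (rho - 1)                         # marginal product of prompting

print(f"e* = {e_star:.3f},  E[m * R] = {np.mean(m * R):.3f}  (approximately 1 at the optimum)")
```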
However, the central premise of this paper hinges on how the AI shock $\xi$ interacts with the marginal product of prompting effort $\partial f(e^*, \xi)/\partial e$. A large literature on algorithm aversion finds that once people observe an algorithm make an error, they penalize it more harshly than they would a human and become more vigilant in monitoring subsequent outputs (Dietvorst et al., 2015). That vigilance, combined with the well-documented tendency to notice errors in others’ work more readily than in one’s own (Pronin & Hazel, 2023; Pronin & Kugler, 2007; Pronin et al., 2002), means that negative realizations of $\xi$ trigger quick, low-cost “local” fixes: follow-up prompts that nudge the model back on track. This corrective mechanism is asymmetric: users actively correct negative AI shocks due to algorithm aversion but simply accept positive ones (Dietvorst et al., 2018; Lenskjold et al., 2023). Because the shock effectively crowds out ex ante prompting effort, the shock and prompting effort are substitutes. We formalize this idea with Assumption 1.
Assumption 1.
The AI shock $\xi$ is a substitute for prompting effort $e$ if the cross-partial derivative is negative:
$$\frac{\partial^2 f(e, \xi)}{\partial e \, \partial \xi} < 0$$
Our interpretation of Assumption 1 is deliberately local rather than universal. Evidence on algorithm aversion shows that users often penalize algorithms after observing errors, while related work also shows that willingness to rely on algorithmic output can increase when users retain even limited control over how that output is adjusted (Dietvorst et al., 2018). More broadly, the automation literature emphasizes that both under-reliance and overreliance are possible, depending on the task environment and the design of human oversight (Parasuraman & Riley, 1997). Accordingly, Assumption 1 should be read as a scope condition for settings in which poor realized AI states induce corrective prompting and monitoring. In environments dominated by automation bias, passive reliance, or weak verification, the substitution between shocks and protective effort may be weaker, which is a natural extension for future work.
Assumption 1 implies that, as the AI performs better (worse) due to a positive (negative) shock, the incremental benefit of investing more prompting effort decreases (increases). When the AI underperforms by making an error, corresponding to a ‘low’ realized $\hat{\xi}$, a corrective prompt fixing the error leads to a big boost in AI performance, corresponding to a ‘high’ marginal product of AI enhancement. Conversely, when the AI overperforms by producing something incredible, corresponding to a ‘high’ realized $\hat{\xi}$, a follow-up prompt leads to no further AI productivity gains (as the agent already has a high-quality response from the AI), corresponding to a ‘low’ marginal product of AI enhancement. Put differently, Assumption 1 captures the idea that effort is most valuable when the AI needs fixing. In practice, this reflects how corrective prompts are most valuable when AI outputs are incorrect or misaligned, a phenomenon documented in algorithm-aversion research and clinical AI monitoring environments.
Building on this idea, an increase in $\xi$ must lead to a decrease in the marginal product of AI enhancement $\partial f(e^*, \xi)/\partial e$ under Assumption 1, since a follow-up corrective prompt is least effective when the AI does a good job. To determine the effect of a positive shock in $\xi$ on the marginal utility of Task 1 performance $\partial U(p_1^*, p_2^*)/\partial p_1$, one can take the cross-partial derivative
$$\frac{\partial^2 U(p_1^*, p_2^*)}{\partial p_1 \, \partial \xi} = \frac{\partial^2 U(p_1^*, p_2^*)}{\partial p_1^2} \times \frac{\partial f(e^*, \xi)}{\partial \xi}.$$
Under the common-sense assumption that a positive shock must increase AI output (i.e., $\partial f(e^*, \xi)/\partial \xi > 0$), the sign of $\partial^2 U(p_1^*, p_2^*)/\partial p_1 \, \partial \xi$ is entirely determined by the sign of $\partial^2 U(p_1^*, p_2^*)/\partial p_1^2$, which differentiates whether a positive shock in AI task performance increases or decreases the marginal utility of Task 1 performance. Specifically, $\partial^2 U(p_1^*, p_2^*)/\partial p_1 \, \partial \xi$ is zero when the agent is risk-neutral ($\partial^2 U(p_1^*, p_2^*)/\partial p_1^2 = 0$), negative when the agent is risk-averse ($\partial^2 U(p_1^*, p_2^*)/\partial p_1^2 < 0$), and positive when the agent is risk-seeking ($\partial^2 U(p_1^*, p_2^*)/\partial p_1^2 > 0$); this implies that the marginal utility $\partial U(p_1^*, p_2^*)/\partial p_1$ is (i) invariant to the shock under risk neutrality, (ii) decreasing in a positive shock under risk aversion, and (iii) increasing in a positive shock for a risk-seeking agent. Since an increase in $\xi$ leads to a decrease in the marginal product of AI enhancement $\partial f(e^*, \xi)/\partial e$ under Assumption 1, $\mathrm{Cov}\!\left(\partial U(p_1^*, p_2^*)/\partial p_1,\; \partial f(e^*, \xi)/\partial e\right)$ must be (i) zero under risk neutrality, (ii) positive under risk aversion, and (iii) negative under risk loving.
From Proposition 1 and the positive marginal utility of Task 1 performance, $\partial U(p_1, p_2)/\partial p_1 > 0$, it follows that (i) $\mathrm{Cov}\!\left(\partial U(p_1^*, p_2^*)/\partial p_1,\; \partial f(e^*, \xi)/\partial e\right) = 0$ under risk neutrality if and only if $\mathbb{E}[\partial f(e^*, \xi)/\partial e] = 1$, (ii) $\mathrm{Cov}\!\left(\partial U(p_1^*, p_2^*)/\partial p_1,\; \partial f(e^*, \xi)/\partial e\right) > 0$ under risk aversion if and only if $\mathbb{E}[\partial f(e^*, \xi)/\partial e] < 1$, and (iii) $\mathrm{Cov}\!\left(\partial U(p_1^*, p_2^*)/\partial p_1,\; \partial f(e^*, \xi)/\partial e\right) < 0$ under risk loving if and only if $\mathbb{E}[\partial f(e^*, \xi)/\partial e] > 1$. Put differently, if Assumption 1 holds and one observes the agent choose $\mathbb{E}[\partial f(e^*, \xi)/\partial e] = 1$, $\mathbb{E}[\partial f(e^*, \xi)/\partial e] < 1$, or $\mathbb{E}[\partial f(e^*, \xi)/\partial e] > 1$, that choice is informative about the sign of $\partial^2 U(p_1^*, p_2^*)/\partial p_1^2$ and thus about the curvature of the utility function over Task 1 performance at the stationary point $(a_1^*, e^*, a_2^*)$. We formalize this intuition below with Theorem 1.
Theorem 1.
If the AI system is stochastic and Assumption 1 holds, then
1. $\mathbb{E}\!\left[ \partial f(e^*, \xi)/\partial e \right] < 1$ if and only if the agent is risk-averse.
2. $\mathbb{E}\!\left[ \partial f(e^*, \xi)/\partial e \right] = 1$ if and only if the agent is risk-neutral.
3. $\mathbb{E}\!\left[ \partial f(e^*, \xi)/\partial e \right] > 1$ if and only if the agent is risk-seeking.
Proof. 
Provided in Appendix A.3. □
Economically, Theorem 1 states that prompting is valued not only by its average marginal product, but also by when that marginal product is realized. When prompting is most productive in bad AI states, it provides insurance-like performance exactly when a risk-averse agent values additional Task 1 performance most, so the agent optimally accepts a lower expected marginal product than the risk-neutral benchmark. The opposite logic applies for a risk-seeking agent. Figure 1 illustrates this benchmark-relative comparison under diminishing returns: risk-averse agents rationally “over-invest” in AI enhancement relative to the risk-neutral benchmark $\mathbb{E}[\partial f(e^*, \xi)/\partial e] = 1$, while risk-seeking agents “under-invest.” Within the model, investment in AI enhancement is therefore informative about the curvature of utility over task performance.
While the AI production function $f$ is the same across risk-averse and risk-seeking agents (strictly increasing and exhibiting diminishing returns), the covariance alignment on the LHS of Expression (2) might be taken as effectively ‘inflating’ or ‘deflating’ the AI production function, as shown in Figure 2. AI systems with positive covariance alignment are effectively more productive because their marginal output is greatest in conditions where that output is most valuable to the agent. Conversely, if a system’s marginal output is highest when that output is least valuable, its effective productivity is diminished because each unit of prompting effort will likely pay off only when the outcome is least important. Agents therefore have a strong incentive to invest more in AI systems that deliver in the states that matter most, as better performance leads to even larger payoffs when deploying positively aligned AI systems. Put together, since AI systems exhibit diminishing returns (Thompson et al., 2021), AI systems perform best (worst) in states that matter most (least) to risk-averse (risk-seeking) agents, leading to a hidden ‘boost’ (‘contraction’) of productivity in the use of AI systems.
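The benchmark comparison in Theorem 1 can also be checked numerically. The following is a minimal Monte Carlo sketch under assumed functional forms that are not taken from the paper: $f(e, \xi) = (e + \xi)^{\rho}$, so that $\partial f/\partial e$ is decreasing in $\xi$ as Assumption 1 requires, a power-form Task 1 utility whose exponent $\delta$ governs risk attitude, and Task 2 effort held fixed for simplicity; parameter values and the use of scipy are illustrative.

```python
# Monte Carlo sketch of Theorem 1 under assumed functional forms (not from the paper):
# f(e, xi) = (e + xi)**rho, so df/de = rho * (e + xi)**(rho - 1) is decreasing in xi
# (Assumption 1), and Task 1 utility curvature is governed by delta.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
xi = rng.uniform(0.0, 0.5, size=100_000)
C, a2, rho = 4.0, 2.0, 0.7          # Task 2 effort a2 held fixed for simplicity

def solve_e(delta):
    """Choose e to maximize E[(a1 + f(e, xi))**delta] * a2**delta with a1 = C - a2 - e."""
    def neg_eu(e):
        a1 = C - a2 - e
        p1 = a1 + (e + xi) ** rho
        return -np.mean(p1 ** delta) * a2 ** delta
    return minimize_scalar(neg_eu, bounds=(1e-4, C - a2 - 1e-4), method="bounded").x

for delta, label in [(0.5, "risk-averse"), (1.0, "risk-neutral"), (1.5, "risk-seeking")]:
    e_star = solve_e(delta)
    expected_mp = np.mean(rho * (e_star + xi) ** (rho - 1))   # E[df/de] at the optimum
    print(f"{label:>12}: e* = {e_star:.3f}, E[df/de] = {expected_mp:.3f}")
# Expected pattern (Theorem 1): E[df/de] < 1, approximately = 1, and > 1, respectively.
```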

4. Cobb-Douglas Example

To illustrate the model’s mechanics, consider an agent with a symmetric Cobb–Douglas utility whose exponents satisfy $\alpha = \beta = \delta > 0$, i.e., $U(p_1, p_2) = (p_1 p_2)^{\delta}$. Let the AI production function be $f(e, \xi) = (e + \xi)^{\rho}$ with $\rho \in (0, 1)$. The decision problem in this special case is
$$\max_{a_1,\, e,\, a_2} \;\; \mathbb{E}\!\left[ \left( a_1 + (e + \xi)^{\rho} \right)^{\delta} \right] a_2^{\delta} \quad \text{s.t.} \quad a_1 + e + a_2 = C, \qquad a_1, e, a_2 \geq 0.$$
Substituting the cognitive constraint into the utility function and taking the first-order condition with respect to e yields
$$\mathbb{E}\!\left[ \left( a_1 + (e + \xi)^{\rho} \right)^{\delta - 1} \left( \rho (e + \xi)^{\rho - 1} - 1 \right) \right] = 0$$
This expression can be rearranged into the covariance form from Proposition 1 as
$$\mathrm{Cov}\!\left( \left( a_1 + (e + \xi)^{\rho} \right)^{\delta - 1},\; \rho (e + \xi)^{\rho - 1} \right) = \mathbb{E}\!\left[ \left( a_1 + (e + \xi)^{\rho} \right)^{\delta - 1} \right] \left( 1 - \mathbb{E}\!\left[ \rho (e + \xi)^{\rho - 1} \right] \right) \tag{3}$$
Since $\mathrm{Cov}\!\left( \left( a_1 + (e + \xi)^{\rho} \right)^{\delta - 1},\; \rho (e + \xi)^{\rho - 1} \right) = \rho \, \mathrm{Cov}\!\left( \left( a_1 + (e + \xi)^{\rho} \right)^{\delta - 1},\; (e + \xi)^{\rho - 1} \right)$ and $\rho > 0$, the covariance below is proportional to the LHS of Expression (3), i.e.,
$$\mathrm{Cov}\!\left( \left( a_1 + (e + \xi)^{\rho} \right)^{\delta - 1},\; (e + \xi)^{\rho - 1} \right) \;\propto\; \mathbb{E}\!\left[ \left( a_1 + (e + \xi)^{\rho} \right)^{\delta - 1} \right] \left( 1 - \mathbb{E}\!\left[ \rho (e + \xi)^{\rho - 1} \right] \right)$$
Finally, we substitute $p_1 = a_1 + f(e, \xi)$ into the LHS:
$$\mathrm{Cov}\!\left( p_1^{\delta - 1},\; \left[ p_1 - a_1 \right]^{\frac{\rho - 1}{\rho}} \right) \;\propto\; \mathbb{E}\!\left[ \left( a_1 + (e + \xi)^{\rho} \right)^{\delta - 1} \right] \left( 1 - \mathbb{E}\!\left[ \rho (e + \xi)^{\rho - 1} \right] \right) \tag{4}$$
The left-hand side of Expression (4) reveals that the sign of the covariance term depends on the value of $\delta$, which governs the curvature of the agent’s utility function. In particular, since $\rho \in (0, 1)$ due to the inherent diminishing returns of AI systems (Brynjolfsson et al., 2017), $p_1^{\delta - 1}$ moves in the same direction in $\xi$ as $[p_1 - a_1]^{\frac{\rho - 1}{\rho}}$ when $\delta \in (0, 1)$ (corresponding to concave Task 1 utility), in the opposite direction when $\delta > 1$ (corresponding to convex Task 1 utility), and is constant when $\delta = 1$ (corresponding to linear Task 1 utility). In turn, these results correspond to the conditions below.
Risk Aversion ($1 > \delta > 0$): $\mathrm{Cov}\!\left( p_1^{\delta - 1},\; [p_1 - a_1]^{\frac{\rho - 1}{\rho}} \right) > 0 \;\Longleftrightarrow\; \mathbb{E}\!\left[ \rho (e^* + \xi)^{\rho - 1} \right] < 1$
Risk Neutral ($\delta = 1$): $\mathrm{Cov}\!\left( p_1^{\delta - 1},\; [p_1 - a_1]^{\frac{\rho - 1}{\rho}} \right) = 0 \;\Longleftrightarrow\; \mathbb{E}\!\left[ \rho (e^* + \xi)^{\rho - 1} \right] = 1$
Risk Loving ($\delta > 1$): $\mathrm{Cov}\!\left( p_1^{\delta - 1},\; [p_1 - a_1]^{\frac{\rho - 1}{\rho}} \right) < 0 \;\Longleftrightarrow\; \mathbb{E}\!\left[ \rho (e^* + \xi)^{\rho - 1} \right] > 1$
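To see this sign pattern in numbers, the following is a minimal sketch that evaluates the covariance on the left-hand side of Expression (4) by simulation; the values of $\rho$, $a_1$, $e$, and the uniform shock distribution are illustrative assumptions rather than calibrated quantities.

```python
# Sketch of the covariance mechanism behind Expression (4), under assumed values
# (rho = 0.7, a1 = 1.5, e = 0.2, xi ~ Uniform(0, 0.5)); none of these numbers come
# from the paper. The sign of Cov(p1**(delta-1), (p1 - a1)**((rho-1)/rho)) should be
# positive for delta in (0, 1), zero for delta = 1, and negative for delta > 1.
import numpy as np

rng = np.random.default_rng(2)
xi = rng.uniform(0.0, 0.5, size=100_000)
rho, a1, e = 0.7, 1.5, 0.2

p1 = a1 + (e + xi) ** rho                                 # Task 1 performance
marginal_product_term = (p1 - a1) ** ((rho - 1) / rho)    # equals (e + xi)**(rho - 1)

for delta in (0.5, 1.0, 1.5):
    cov = np.cov(p1 ** (delta - 1), marginal_product_term)[0, 1]
    print(f"delta = {delta}: Cov = {cov:+.4f}")
```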
This example illustrates the main intuition of Theorem 1. Specifically, if an AI’s random shocks are substitutes for prompting effort, then an agent’s decision to over- or under-invest in prompting relative to the risk-neutral benchmark is informative about the curvature of utility over Task 1 performance. In institutional AI deployment, the same logic implies that successive layers of monitoring or verification may yield diminishing marginal improvements in expected safety.

5. Conclusions

As artificial intelligence becomes integral to professional workflows, a micro-founded understanding of how individuals should optimally manage its use is essential. This paper develops a model of an agent allocating scarce cognitive resources between direct work and prompting a noisy AI assistant. Our primary contribution is to shift the theoretical focus from the endogenous mistakes of users to the exogenous, stochastic errors inherent to modern AI systems. By treating corrective prompting as a form of self-insurance against the downside risk of AI errors, we provide a new framework for analyzing technology adoption under uncertainty.
This framework shows that randomness in AI performance is not merely a potential flaw within the model. It is the feature that allows an agent’s underlying preferences to be expressed through their actions. Our central result, stemming from Theorem 1, formalizes this relationship by showing that optimal prompting effort is determined by an agent’s risk preferences. The key mechanism is the covariance between the marginal utility of task performance and the marginal product of AI enhancement. For risk-averse agents, this covariance alignment is positive, effectively boosting the perceived productivity of prompting and leading them to over-invest in it relative to a risk-neutral benchmark. For risk-seeking agents, the alignment is negative, leading them to under-invest.
Moreover, this mechanism of self-insurance in effortful situations is not limited to text prompting: it rationalizes a wide range of oversight problems where protective effort is a substitute for exogenous errors, including the principal–agent problem. More broadly, the equivalence of the Euler equation from standard consumption-based asset-pricing theory and the first-order condition in Proposition 1 illustrates that protective effort carries a risk adjustment: the shadow price of effort equals the expected marginal product scaled by a covariance with marginal utility. This observation parallels the classic $1 = \mathbb{E}[mR]$ pricing condition, with $m$ proportional to marginal utility and $R$ to the marginal product of protection (Cochrane, 2021; Duffie & Zame, 1989; Lucas, 1978). Beyond determining the effect of risk attitudes on prompting effort, our framework thus connects the effort-allocation problem to production-based asset-pricing theory.
Beyond its theoretical implications, the model also has consequences for institutional AI governance. It implies that organizations with more concave objective functions or with greater downside exposure (e.g., regulatory, reputational, or liability-related) have stronger incentives to allocate resources to monitoring, prompt refinement, and verification when AI performance is stochastic and corrective effort is most valuable in bad states. As such, heterogeneity in oversight would not reflect simple inefficiency or generalized skepticism toward AI. Instead, it may reflect rational differences in exposure to downside risk.
At the same time, we do not claim that prompting effort provides a direct empirical measure of risk preferences in any unrestricted setting. Rather, the model establishes a theoretical mapping under which, controlling for the task environment and prompting ability, greater protective prompting is consistent with greater aversion to downside AI risk. Thus, prompt engineering is not merely a technical exercise. It can serve as an economically meaningful form of protective effort in human–AI production. This perspective offers a disciplined basis for future work on the economic forces shaping AI adoption, monitoring, and governance.

Author Contributions

B.A.T. and G.G.L. developed the theoretical framework and drafted the manuscript. A.A.O. formulated the applied examples and contributed to the writing and critical revision of the text. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proofs

Appendix A.1. Proof of Proposition 1

Proof. 
We focus on an interior solution ( a 1 , e , a 2 > 0 ). Setting up the Lagrangian and taking first-order conditions with respect to a 1 , e, and a 2 , we find that the marginal expected utility from each type of effort must be equal to the Lagrange multiplier λ :
$$\mathbb{E}\!\left[ \frac{\partial U(p_1^*, p_2^*)}{\partial p_1} \right] \cdot 1 = \lambda, \qquad \mathbb{E}\!\left[ \frac{\partial U(p_1^*, p_2^*)}{\partial p_1} \cdot \frac{\partial f(e^*, \xi)}{\partial e} \right] = \lambda, \qquad \mathbb{E}\!\left[ \frac{\partial U(p_1^*, p_2^*)}{\partial p_2} \right] = \lambda$$
Equating the first two conditions (for direct Task 1 effort and AI effort) yields
$$\mathbb{E}\!\left[ \frac{\partial U(p_1^*, p_2^*)}{\partial p_1} \right] = \mathbb{E}\!\left[ \frac{\partial U(p_1^*, p_2^*)}{\partial p_1} \cdot \frac{\partial f(e^*, \xi)}{\partial e} \right]$$
Using the definition of covariance, $\mathrm{Cov}(X, Y) = \mathbb{E}[XY] - \mathbb{E}[X]\,\mathbb{E}[Y]$, we can rewrite this as
$$\mathbb{E}\!\left[ \frac{\partial U(p_1^*, p_2^*)}{\partial p_1} \right] = \mathrm{Cov}\!\left( \frac{\partial U(p_1^*, p_2^*)}{\partial p_1}, \frac{\partial f(e^*, \xi)}{\partial e} \right) + \mathbb{E}\!\left[ \frac{\partial U(p_1^*, p_2^*)}{\partial p_1} \right] \mathbb{E}\!\left[ \frac{\partial f(e^*, \xi)}{\partial e} \right]$$
Rearranging gives the key relationship
$$\mathrm{Cov}\!\left( \frac{\partial U(p_1^*, p_2^*)}{\partial p_1}, \frac{\partial f(e^*, \xi)}{\partial e} \right) = \mathbb{E}\!\left[ \frac{\partial U(p_1^*, p_2^*)}{\partial p_1} \right] \left( 1 - \mathbb{E}\!\left[ \frac{\partial f(e^*, \xi)}{\partial e} \right] \right)$$
□

Appendix A.2. Proof of Corollary 1

Proof. 
In the case of $\mathrm{Var}(\xi) = 0$, $\partial U(p_1^*, p_2^*)/\partial p_1$ and $\partial f(e^*, \xi)/\partial e$ are constants, which implies $\mathrm{Cov}\!\left( \partial U(p_1^*, p_2^*)/\partial p_1,\; \partial f(e^*, \xi)/\partial e \right) = 0$. From Proposition 1, $\mathbb{E}\!\left[ \partial f(e^*, \xi)/\partial e \right] = 1$ must also be true. In the case of $\partial^2 f(e^*, \xi)/\partial \xi \, \partial e = 0$, $\partial f(e^*, \xi)/\partial e$ is independent of $\xi$ and so must be a constant, which implies $\mathrm{Cov}\!\left( \partial U(p_1^*, p_2^*)/\partial p_1,\; \partial f(e^*, \xi)/\partial e \right) = 0$. From Proposition 1, $\mathbb{E}\!\left[ \partial f(e^*, \xi)/\partial e \right] = 1$ must also be true. In the case of $\partial U(p_1^*, p_2^*)/\partial p_1 = c > 0$, substituting $c$ into the LHS of Expression (2) yields $\mathrm{Cov}\!\left( c,\; \partial f(e^*, \xi)/\partial e \right)$. Since $c$ is a constant, $\mathrm{Cov}\!\left( c,\; \partial f(e^*, \xi)/\partial e \right) = 0$. From Proposition 1, $\mathbb{E}\!\left[ \partial f(e^*, \xi)/\partial e \right] = 1$ must also be true. □

Appendix A.3. Proof of Theorem 1

Proof. 
From Proposition 1, we have
$$\mathrm{Cov}\!\left( \frac{\partial U(p_1^*, p_2^*)}{\partial p_1}, \frac{\partial f(e^*, \xi)}{\partial e} \right) = \mathbb{E}\!\left[ \frac{\partial U(p_1^*, p_2^*)}{\partial p_1} \right] \left( 1 - \mathbb{E}\!\left[ \frac{\partial f(e^*, \xi)}{\partial e} \right] \right)$$
where $\mathbb{E}\!\left[ \partial U(p_1^*, p_2^*)/\partial p_1 \right] > 0$. Because $f$ is increasing in $\xi$, $\partial f(e, \xi)/\partial \xi > 0$. Under Assumption 1, $\partial f(e^*, \xi)/\partial e$ is strictly decreasing in $\xi$. Applying the chain rule when differentiating $\partial U(p_1^*, p_2^*)/\partial p_1$ with respect to $\xi$:
$$\frac{\partial^2 U(p_1^*, p_2^*)}{\partial p_1 \, \partial \xi} = \frac{\partial^2 U(p_1^*, p_2^*)}{\partial p_1^2} \times \frac{\partial f(e^*, \xi)}{\partial \xi}$$
Since $\partial f(e^*, \xi)/\partial \xi > 0$, the sign of $\partial^2 U(p_1^*, p_2^*)/\partial p_1 \, \partial \xi$ is entirely determined by the sign of the second derivative $\partial^2 U(p_1^*, p_2^*)/\partial p_1^2 = U_{11}$. Since $\partial f(e^*, \xi)/\partial e$ is decreasing in $\xi$ and the direction of $\partial U(p_1^*, p_2^*)/\partial p_1$ in $\xi$ depends on $U_{11}$, it follows that
$$\mathrm{sign}\,\mathrm{Cov}\!\left( \frac{\partial U(p_1^*, p_2^*)}{\partial p_1}, \frac{\partial f(e^*, \xi)}{\partial e} \right) = \begin{cases} + & U_{11} < 0 \\ 0 & U_{11} = 0 \\ - & U_{11} > 0 \end{cases}$$
because two strictly monotone functions of $\xi$ moving in opposite (the same) directions are negatively (positively) correlated, and a constant has zero covariance. Since $\mathbb{E}[\partial U(p_1^*, p_2^*)/\partial p_1] > 0$, Proposition 1 implies
$$\mathrm{sign}\,\mathrm{Cov}\!\left( \frac{\partial U(p_1^*, p_2^*)}{\partial p_1}, \frac{\partial f(e^*, \xi)}{\partial e} \right) = \mathrm{sign}\!\left( 1 - \mathbb{E}\!\left[ \frac{\partial f(e^*, \xi)}{\partial e} \right] \right)$$
Hence, as required,
$$\mathbb{E}\!\left[ \frac{\partial f(e^*, \xi)}{\partial e} \right] < 1 \;\Longleftrightarrow\; U_{11} < 0$$
$$\mathbb{E}\!\left[ \frac{\partial f(e^*, \xi)}{\partial e} \right] = 1 \;\Longleftrightarrow\; U_{11} = 0$$
$$\mathbb{E}\!\left[ \frac{\partial f(e^*, \xi)}{\partial e} \right] > 1 \;\Longleftrightarrow\; U_{11} > 0$$
□

Notes

1
Crucially, the shock $\xi$ captures the state-contingent performance of a fixed AI system, not a parameter for its overall quality; a fundamental improvement in the AI system would be modeled as an upward shift in the function $f(e, \xi)$ itself (Jahani et al., 2024).
2
One can rewrite Expression (2) as the asset-pricing Euler equation (Cochrane, 2021; Duffie & Zame, 1989; Lucas, 1978): $1 = \mathbb{E}[mR]$, where the stochastic discount factor is $m = \dfrac{\partial U/\partial p_1}{\mathbb{E}[\partial U/\partial p_1]}$ and the ‘return’ on prompting effort is $R = \dfrac{\partial f(e, \xi)}{\partial e}$.

References

  1. Autor, D. H., Katz, L. F., & Krueger, A. B. (1998). Computing inequality: Have computers changed the labor market? The Quarterly Journal of Economics, 113(4), 1169–1213. [Google Scholar] [CrossRef]
  2. Autor, D. H., Levy, F., & Murnane, R. J. (2003). The skill content of recent technological change: An empirical exploration. The Quarterly Journal of Economics, 118(4), 1279–1333. [Google Scholar] [CrossRef]
  3. Baddeley, A. (1992). Working memory. Science, 255, 556–559. [Google Scholar] [CrossRef]
  4. Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (2018). Ego depletion: Is the active self a limited resource? In Self-regulation and self-control (pp. 16–44). Routledge. [Google Scholar]
  5. Becker, G. S. (1965). A theory of the allocation of time. The Economic Journal, 75(299), 493–517. [Google Scholar] [CrossRef]
  6. Borghini, G., Vecchiato, G., Toppi, J., Astolfi, L., Maglione, A., Isabella, R., Caltagirone, C., Kong, W., Wei, D., Zhou, Z., Polidori, L., Vitiello, S., & Babiloni, F. (2012). Assessment of mental fatigue during car driving by using high-resolution EEG activity and neurophysiologic indices. In 2012 Annual international conference of the IEEE engineering in medicine and biology society (pp. 6442–6445). IEEE. [Google Scholar]
  7. Breeden, D. T. (1979). An intertemporal asset pricing model with stochastic consumption and investment opportunities. Journal of Financial Economics, 7(3), 265–296. [Google Scholar] [CrossRef]
  8. Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at work. The Quarterly Journal of Economics, 140(2), 889–942. [Google Scholar] [CrossRef]
  9. Brynjolfsson, E., Rock, D., & Syverson, C. (2017). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics. National Bureau of Economic Research. [Google Scholar]
  10. Buser, T., & Peter, N. (2012). Multitasking. Experimental Economics, 15(4), 641–655. [Google Scholar] [CrossRef]
  11. Caplin, A. (2016). Measuring and modeling attention. Annual Review of Economics, 8, 379–403. [Google Scholar] [CrossRef]
  12. Chen, E., Saenz, A., Banerjee, O., Marklund, H., Zhang, X., Johri, S., Zhou, H.-Y., Luo, L., Adithan, S., Wu, K., Dogra, S., Reddi, V. J., Buensalido, D., Kavnoudias, H., Kloeckner, R., Müller, L., Salinas-Miranda, E., Vega, M. J. V., Kolck, J., … Rajpurkar, P. (2025). International retrospective observational study of continual learning for AI on endotracheal tube placement from chest radiographs. NEJM AI, 3(1), AIoa2500522. [Google Scholar] [CrossRef]
  13. Cochrane, J. H. (1991). Production-based asset pricing and the link between stock returns and economic fluctuations. Journal of Finance, 46(1), 209–237. [Google Scholar]
  14. Cochrane, J. H. (2009). Asset pricing: Revised edition. Princeton University Press. [Google Scholar]
  15. Cochrane, J. H. (2021). Rethinking production under uncertainty. The Review of Asset Pricing Studies, 11(1), 1–59. [Google Scholar] [CrossRef]
  16. Comin, D., & Hobijn, B. (2010). An exploration of technology diffusion. American Economic Review, 100(5), 2031–2059. [Google Scholar] [CrossRef]
  17. Deck, C., & Jahedi, S. (2015). The effect of cognitive load on economic decision making: A survey and new experiments. European Economic Review, 78, 97–119. [Google Scholar] [CrossRef]
  18. Dell’Acqua, F., McFowland, E., III, Mollick, E., Lifshitz, H., Kellogg, K. C., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2026). Navigating the jagged technological frontier: Field experimental evidence of the effects of artificial intelligence on knowledge worker productivity and quality. Organization Science. [Google Scholar]
  19. Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. [Google Scholar] [CrossRef]
  20. Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170. [Google Scholar] [CrossRef]
  21. Dionne, G., & Eeckhoudt, L. (1985). Self-insurance, self-protection and increased risk aversion. Economics Letters, 17(1–2), 39–42. [Google Scholar] [CrossRef]
  22. Drichoutis, A. C., & Nayga, R. M. (2020). Economic rationality under cognitive load. The Economic Journal, 130(632), 2382–2409. [Google Scholar] [CrossRef]
  23. Duffie, D., & Zame, W. (1989). The consumption-based capital asset pricing model. Econometrica, 57(6), 1279–1297. [Google Scholar] [CrossRef]
  24. Dumont, C., Subramanian, S., & Dankert, C. (2015). Staking your claim in the healthcare gold rush. Strategy+Business. Available online: https://www.strategy-business.com/article/00353 (accessed on 15 February 2026).
  25. Ehrlich, I., & Becker, G. S. (1972). Market insurance, self-insurance, and self-protection. Journal of Political Economy, 80(4), 623–648. [Google Scholar] [CrossRef]
  26. Engle, R. W. (2018). Working memory and executive attention: A revisit. Perspectives on Psychological Science, 13(2), 190–193. [Google Scholar] [CrossRef]
  27. Fischer, R., & Plessow, F. (2015). Efficient multitasking: Parallel versus serial processing of multiple tasks. Frontiers in Psychology, 6, 1366. [Google Scholar] [CrossRef]
  28. Flora, D., & Maniago, R. (2025). The unseen revolution: How artificial intelligence is redefining cancer care. In NEJM AI sponsored. Massachusetts Medical Society. [Google Scholar]
  29. Goldberg, C., Balicer, R. D., Bhat, M., Blumenthal, D., Brendel, R. W., Brondolo, E., Brownstein, J. S., Buckley, T. A., Cain, C. H., Chandak, P., & Chessa, F. (2026). The missing dimension in clinical AI: Making hidden values visible. NEJM AI, 3(2), AIp2501266. [Google Scholar] [CrossRef]
  30. Hagger, M. S., Wood, C., Stiff, C., & Chatzisarantis, N. L. (2010). Ego depletion and the strength model of self-control: A meta-analysis. Psychological Bulletin, 136(4), 495–525. [Google Scholar] [CrossRef]
  31. Hansen, L. P., & Singleton, K. J. (1982). Generalized instrumental variables estimation of nonlinear rational expectations models. Econometrica, 50(5), 1269–1286. [Google Scholar] [CrossRef]
  32. Holmstrom, B., & Milgrom, P. (1991). Multitask principal–agent analyses: Incentive contracts, asset ownership, and job design. The Journal of Law, Economics, and Organization, 7, 24–52. [Google Scholar] [CrossRef]
  33. Hommel, B. (2020). Dual-task performance: Theoretical analysis and an event-coding account. Journal of Cognition, 3(1), 29. [Google Scholar] [CrossRef]
  34. Inzlicht, M., Werner, K. M., Briskin, J. L., & Roberts, B. W. (2021). Integrating models of self-regulation. Annual Review of Psychology, 72, 319–345. [Google Scholar] [CrossRef] [PubMed]
  35. Jahani, E., Manning, B., Zhang, J., TuYe, H. Y., Alsobay, M. A. M., Nicolaides, C., Suri, S., & Holtz, D. (2024). As generative models improve, people adapt their prompts (No. 9rhku). Center for Open Science. [Google Scholar]
  36. Jullien, B., Salanie, B., & Salanie, F. (1999). Should more risk-averse agents exert more effort? The Geneva Papers on Risk and Insurance Theory, 24, 19–28. [Google Scholar] [CrossRef]
  37. Kahneman, D. (1973). Attention and effort. Prentice-Hall. [Google Scholar]
  38. Kihlstrom, R. E., & Mirman, L. J. (1974). Risk aversion with many commodities. Journal of Economic Theory, 8(3), 361–388. [Google Scholar] [CrossRef]
  39. Kim, K., & Zauberman, G. (2019). The effect of music tempo on consumer impatience in intertemporal decisions. European Journal of Marketing, 53(3), 504–523. [Google Scholar] [CrossRef]
  40. Kimball, M. S. (1989). Precautionary saving in the small and in the large. National Bureau of Economic Research. [Google Scholar]
  41. Leland, H. E. (1972). Theory of the firm facing uncertain demand. The American Economic Review, 62(3), 278–291. [Google Scholar]
  42. Lenskjold, A., Nybing, J. U., Trampedach, C., Galsgaard, A., Brejnebøl, M. W., Raaschou, H., Rose, M. H., & Boesen, M. (2023). Should artificial intelligence have lower acceptable error rates than humans? BJR Open, 5(1), 20220053. [Google Scholar]
  43. Li, L., & Peter, R. (2021). Should we do more when we know less? The effect of technology risk on optimal effort. Journal of Risk and Insurance, 88(3), 695–725. [Google Scholar] [CrossRef]
  44. Loewenstein, G. (1992). Anomalies of intertemporal choice: Evidence and interpretation. In Choice over time (pp. 119–145). Russell Sage Foundation. [Google Scholar]
  45. Lucas, R. E. (1978). Asset prices in an exchange economy. Econometrica, 46(6), 1429–1445. [Google Scholar] [CrossRef]
  46. Mas-Colell, A., Whinston, M. D., & Green, J. R. (1995). Microeconomic theory. Oxford University Press. [Google Scholar]
  47. Navon, D., & Gopher, D. (1979). On the economy of the human-processing system. Psychological Review, 86(3), 214–255. [Google Scholar] [CrossRef]
  48. Norman, D. A., & Bobrow, D. G. (1975). On data-limited and resource-limited processes. Cognitive Psychology, 7(1), 44–64. [Google Scholar] [CrossRef]
  49. Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187–192. [Google Scholar] [CrossRef]
  50. Otten, M. (2009). Choking vs. clutch performance: A study of sport performance under pressure. Journal of Sport and Exercise Psychology, 31(5), 583–601. [Google Scholar] [CrossRef]
  51. Palma, M. A., Segovia, M. S., Kassas, B., Ribera, L. A., & Hall, C. R. (2018). Self-control: Knowledge or perishable resource? Journal of Economic Behavior & Organization, 145, 80–94. [Google Scholar] [CrossRef]
  52. Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253. [Google Scholar] [CrossRef]
  53. Pashler, H. (1994). Dual-task interference in simple tasks: Data and theory. Psychological Bulletin, 116(2), 220–244. [Google Scholar] [CrossRef]
  54. Pronin, E., & Hazel, L. (2023). Humans’ bias blind spot and its societal significance. Current Directions in Psychological Science, 32(5), 402–409. [Google Scholar] [CrossRef]
  55. Pronin, E., & Kugler, M. B. (2007). Valuing thoughts, ignoring behavior: The introspection illusion as a source of the bias blind spot. Journal of Experimental Social Psychology, 43(4), 565–578. [Google Scholar] [CrossRef]
  56. Pronin, E., Lin, D. Y., & Ross, L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28(3), 369–381. [Google Scholar] [CrossRef]
  57. Sahoo, P., Singh, A. K., Saha, S., Jain, V., Mondal, S., & Chadha, A. (2024). A systematic survey of prompt engineering in large language models: Techniques and applications. arXiv, arXiv:2402.07927. [Google Scholar] [CrossRef]
  58. Salvucci, D. D., & Taatgen, N. A. (2008). Threaded cognition: An integrated theory of concurrent multitasking. Psychological Review, 115(1), 101–130. [Google Scholar] [CrossRef]
  59. Sandmo, A. (1971). On the theory of the competitive firm under price uncertainty. The American Economic Review, 61(1), 65–73. [Google Scholar]
  60. Savage, L. J. (1972). The foundations of statistics. Courier Corporation. [Google Scholar]
  61. Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118. [Google Scholar] [CrossRef]
  62. Sims, C. A. (2003). Implications of rational inattention. Journal of Monetary Economics, 50(3), 665–690. [Google Scholar] [CrossRef]
  63. Starke, G. (2025). More to know could not be more to trust: Open communication as a moral imperative for AI systems in healthcare. The American Journal of Bioethics, 25(3), 119–121. [Google Scholar] [CrossRef]
  64. Strobach, T., Becker, M., Schubert, T., & Kühn, S. (2015). Better dual-task processing in simultaneous interpreters. Frontiers in Psychology, 6, 1590. [Google Scholar] [CrossRef] [PubMed]
  65. Telford, C. W. (1931). The refractory phase of voluntary and associative responses. Journal of Experimental Psychology, 14(1), 1–36. [Google Scholar] [CrossRef]
  66. Thaler, R. H. (2016). Behavioral economics: Past, present, and future. American Economic Review, 106(7), 1577–1600. [Google Scholar] [CrossRef]
  67. Thompson, N. C., Greenewald, K., Lee, K., & Manso, G. F. (2021). Deep learning’s diminishing returns: The cost of improvement is becoming unsustainable. IEEE Spectrum, 58(10), 50–55. [Google Scholar] [CrossRef]
  68. Tuk, M. A., Zhang, K., & Sweldens, S. (2015). The propagation of self-control: Self-control in one domain simultaneously improves self-control in other domains. Journal of Experimental Psychology: General, 144(4), 639–654. [Google Scholar] [CrossRef]
  69. Varian, H. R. (1992). Microeconomic analysis (3rd ed.). W. W. Norton & Company. [Google Scholar]
  70. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998–6008. [Google Scholar]
  71. Vince, M. A. (1949). Rapid response sequences and the psychological refractory period. British Journal of Psychology, 40(1), 23. [Google Scholar]
  72. Von Neumann, J., & Morgenstern, O. (1947). Theory of games and economic behavior (2nd rev. ed.). Princeton University Press. [Google Scholar]
  73. Welford, A. T. (1952). The psychological refractory period and the timing of high-speed performance—A review and a theory. British Journal of Psychology, 43(1), 2–19. [Google Scholar] [CrossRef]
  74. Wojtowicz, Z., & Loewenstein, G. (2023). Cognition: A study in mental economy. Cognitive Science, 47(2), e13252. [Google Scholar] [CrossRef]
  75. Zamfirescu-Pereira, J., Wong, R. Y., Hartmann, B., & Yang, Q. (2023, April 23–28). Why Johnny can’t prompt: How non-AI experts try (and fail) to design LLM prompts. 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–21), Hamburg, Germany. [Google Scholar]
Figure 1. On the x-axis is the prompting effort e. On the y-axis is the expected marginal product of AI enhancement, E[∂f(e*, ξ)/∂e]. Controlling for the underlying AI technology and cognitive budget, a positive covariance alignment implies that a risk-averse agent accepts a lower expected marginal product and therefore chooses more prompting effort than the risk-neutral benchmark (over-investment relative to that benchmark), whereas a negative covariance alignment implies that a risk-seeking agent requires a higher expected marginal product and therefore chooses less prompting effort (under-investment relative to that benchmark). The figure illustrates how the same diminishing-returns technology can generate different optimal effort allocations once risk preferences affect the covariance term in Proposition 1.
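To make the mechanism in Figure 1 concrete, the first-order condition can be written in a form that isolates the covariance term. The notation below is an illustrative reconstruction rather than a verbatim restatement of Proposition 1: it assumes an interior optimum, a normalized unit effort cost c, and a performance level W evaluated under a von Neumann–Morgenstern utility u.

```latex
% Illustrative first-order condition for prompting effort e (assumed notation):
% the agent equates the utility-weighted marginal product of prompting to the
% normalized marginal effort cost c.
\mathbb{E}\!\left[u'(W)\,\frac{\partial f(e^{*},\xi)}{\partial e}\right]
  = c\,\mathbb{E}\!\left[u'(W)\right]
\;\;\Longleftrightarrow\;\;
\mathbb{E}\!\left[\frac{\partial f(e^{*},\xi)}{\partial e}\right]
  = c \;-\; \frac{\operatorname{Cov}\!\left(u'(W),\,\partial f(e^{*},\xi)/\partial e\right)}
                 {\mathbb{E}\!\left[u'(W)\right]} .
```

Under diminishing returns to prompting, a positive covariance (the risk-averse case) pushes the required expected marginal product below c, which is attainable only at a larger e*; a negative covariance (the risk-seeking case) does the reverse.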
Figure 2. On the x-axis is the prompting effort e. On the y-axis is the effective AI output. The black curve is the common risk-neutral baseline, and the blue line is the normalized unit effort cost. Holding the underlying AI production function fixed, a positive covariance raises the effective value of prompting in bad states and shifts the effective schedule upward, while a negative covariance lowers the effective value of prompting and shifts the effective schedule downward. The figure therefore illustrates the intuition behind Theorem 1. Even when the underlying AI technology is identical across agents, risk preferences alter the effective payoff to prompting and generate e*_A > e*_N > e*_L (for the risk-averse, risk-neutral, and risk-seeking agents, respectively) relative to the same risk-neutral benchmark.
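The ordering e*_A > e*_N > e*_L can also be checked numerically. The sketch below is a minimal toy specification, not the paper's model or calibration: it assumes a unit cognitive budget, an AI error draw ξ that prompting partially mitigates through h(e) = √e, and concave, linear, and convex exponential utilities; all functional forms and parameter values are illustrative.

```python
import numpy as np

# Toy specification (illustrative only): performance combines unassisted work,
# a fixed AI gain, and a stochastic AI error xi that prompting effort e
# (a share of a unit cognitive budget) partially catches via h(e) = sqrt(e).
xi = np.array([0.0, 2.0])     # AI error draw: clean output vs. large error
prob = np.array([0.5, 0.5])   # equal-probability states

def performance(e, xi):
    return (1.0 - e) + 2.0 - xi * (1.0 - np.sqrt(e))

# Three attitudes toward risk over final performance W.
utilities = {
    "risk-averse (concave, gamma = 1)":  lambda w: -np.exp(-w),
    "risk-neutral (linear)":             lambda w: w,
    "risk-seeking (convex, gamma = 1)":  lambda w: np.exp(w),
}

grid = np.linspace(1e-4, 0.9999, 20001)  # candidate prompting efforts
for label, u in utilities.items():
    expected_u = np.array([prob @ u(performance(e, xi)) for e in grid])
    e_star = grid[expected_u.argmax()]
    print(f"{label:36s} e* = {e_star:.3f}")
```

With these assumed parameters, the grid search returns roughly e*_A ≈ 0.44, e*_N ≈ 0.25, and e*_L ≈ 0.03, reproducing the qualitative ordering in Figure 2; the exact numbers depend entirely on the illustrative functional forms chosen here.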
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
