Opinion

The Agentic Perspective in Experimental Economics

Arturo Macías
Banco de España, 28014 Madrid, Spain
Games 2025, 16(5), 48; https://doi.org/10.3390/g16050048
Submission received: 6 May 2025 / Revised: 25 August 2025 / Accepted: 3 September 2025 / Published: 8 September 2025
(This article belongs to the Section Learning and Evolution in Games)

Abstract

Mainstream experimental economics is characterized by its focus on theory testing and “treatment effects” on aggregate outcomes. The “agentic” alternative is concerned with the econometric specification of individual behavior. In this essay, first, a literature review of agentic experimental economics is provided, and a stylized workflow is proposed to produce and validate econometric models of individual behavior based on experimental data: (i) create a baseline (“optimal”) behavioral benchmark (by analytical means or reinforcement learning) for the considered multi-agent game, (ii) conduct experiments with human subjects, (iii) use the experimental results to characterize the structure of the deviations from the baseline behavior, and (iv) re-run the experiment with artificial agents calibrated in the previous step, and compare the outcomes with those of the human experiment. Two papers have been selected to illustrate the successful use of the proposed workflow. Finally, the relations between agent-based and experimental economics are discussed after deep learning has “tamed” the curse of dimensionality.

1. Introduction

In this essay, the mainstream of experimental economics (focused on the testing of economic theory and on treatment effects on aggregate outcomes) is contrasted with what we name “agentic” experimental economics, which aims to econometrically characterize individual behavior. Then, the literature on agentic experimental economics is briefly reviewed, a methodological workflow for experimental economics is outlined, and its use is illustrated in two selected papers.
This paper is organized in the following manner: In Section 2, it is argued that modern economics is mostly defined by its commitment to methodological individualism, and the epistemological relations between neoclassical, behavioral, agent-based, and experimental economics are briefly discussed. This discussion is used to defend the view that agentic experimental economics is likely to be a foundational subfield in economics. In Section 3, the main literature on agentic experimental economics is reviewed, and two papers are selected as canonical examples of this methodological perspective. In Section 4, a workflow to perform agentic experimental economics is outlined. In Section 5, the two selected papers are described, and it is shown that they fit the workflow. Section 6 discusses how deep learning techniques affect agent-based economics (by “taming” the curse of dimensionality and allowing for more complex and realistic models) and then considers the role of experiments in a world of easily available superhuman optimizers. Section 7 concludes this essay.

2. Methodological Individualism and Agentic Experimental Economics

While modern economics is often interested in large-scale phenomena (such as Keynesian “aggregate demand”), its methodology is reductionist. Its relative insularity in the realm of social science (Fourcade et al., 2015), its “imperialist” vocation (Lazear, 2000), and its extreme mathematization come from the commitment to methodological individualism (Ross, 2005; Myerson, 1999), which is simply the name of general scientific reductionism in social science.
Neoclassical economics results from the combination of methodological individualism and strong rationality assumptions. The field of “behavioral economics” appears by relaxing rationality (but keeping methodological individualism). Finally, the use of individual-level simulation leads to “agent-based economics”, and agent-based models are often built via the relaxation of neoclassical models to accommodate heterogeneity, market friction, or bounded rationality (e.g., Lengnick, 2013).
Structural econometrics (Keane, 2010) is the branch of economic science that uses data to build statistical models capturing individual behavior within a structural model. There is a large literature on the econometric modeling of behavioral error with observational data (Bellemare, 2023; Conte & Moffatt, 2014) because many relevant economic situations (corporate and labor decisions, saving and investment decisions over the life cycle, etc.) can only be imperfectly replicated in the laboratory.
Meanwhile, experiments allow for the exogenous variation of different components and, consequently, are especially suited for the estimation of structural behavioral models, while the external validity of the results is often more difficult to establish (Schram, 2005; Fréchette & Schotter, 2015). The rest of this article is about the estimation of econometric models of individual behavior with experimental data, which we see as the most important use of economic experiments.

3. Literature Review of Agentic Experimental Economics

Mainstream experimental economics is focused on (i) testing in the laboratory the validity of the classical results from theoretical economics (Smith, 1994), (ii) characterizing the impact of different setups of a game (“treatments”) on the selected outcomes of that game (Czibor et al., 2019), and (iii) generating data for economic phenomena that do not yet have a theory, thereby helping the development of one (see the educational experiments in Kenya in Kremer (2020), and Reiley (2015) for a general discussion of this line of research).
In this study, we argue that the central mission of experimental economics must be to calibrate artificial replicas of the experimental subjects. McElreath (2020) suggests the golem as a metaphor for statistical models: “Scientists also make golems. Our golems rarely have physical form, but they too are often made of clay, living in silicon as computer code. These golems are scientific models. […]. A concern with truth enlivens these models, but just like a golem or a modern robot, scientific models are neither true nor false, neither prophets nor charlatans. Rather they are constructs engineered for some purpose”.
The agentic approach to experimental economics was already suggested in Heckbert (2009), and an early literature review of combined agent-based and experimental economics results appears in the Handbook of Computational Economics (Duffy, 2006). A landmark result was that of Gode and Sunder (1993), who introduced the concept of “zero-intelligence traders”, that is, algorithms trading in a double auction market under the constraint of offering random but individually profitable trades, without any learning. These “zero intelligence” traders, combined with certain market institutions, generate prices and allocative efficiency approaching or exceeding those of human actors operating in the same experimental environment.
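As an informal illustration of the zero-intelligence idea, the following Python sketch implements budget-constrained random traders in a stylized double auction. This is a minimal reconstruction for exposition under simple assumptions (single-unit traders, uniform random offers), not Gode and Sunder’s original implementation; all names and parameter values are illustrative.

```python
import random

def zi_double_auction(buyer_values, seller_costs, max_price=200, rounds=5000, seed=0):
    """Minimal zero-intelligence-constrained double auction.

    Buyers bid uniformly below their private valuation; sellers ask uniformly
    above their private cost (the "no-loss" constraint). A trade occurs whenever
    a randomly drawn bid crosses a randomly drawn ask; traded units leave the market.
    """
    rng = random.Random(seed)
    buyers = sorted(buyer_values, reverse=True)   # one unit per trader
    sellers = sorted(seller_costs)
    trades = []
    for _ in range(rounds):
        if not buyers or not sellers:
            break
        b = rng.randrange(len(buyers))
        s = rng.randrange(len(sellers))
        bid = rng.uniform(0, buyers[b])           # never above the buyer's valuation
        ask = rng.uniform(sellers[s], max_price)  # never below the seller's cost
        if bid >= ask:
            price = rng.choice([bid, ask])        # arbitrary tie-break for the trade price
            trades.append((price, buyers.pop(b), sellers.pop(s)))
    return trades

# With these valuations and costs, the competitive price is roughly 100.
trades = zi_double_auction(buyer_values=[130, 120, 110, 100, 90],
                           seller_costs=[70, 80, 90, 100, 110])
print(len(trades), [round(p) for p, *_ in trades])
```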
The Gode and Sunder result provided an example of “behavioral irrelevance” (that is, a situation where the observed social outcomes were not dependent on the rationality of agents), but this situation is not universal. Cliff and Bruten (1997a, 1997b) examined the sensitivity of Gode and Sunder’s findings to different supply and demand elasticities and found that convergence depended on smooth supply and demand; to deal with discontinuities, some algorithmic improvements were necessary, which led to the new concept of “zero-intelligence plus” agents. Regarding learning processes, Brian Arthur (1991, 1993) was among the first economists to suggest modeling agent behavior using reinforcement-type learning algorithms and to calibrate the parameters of such learning models using data from human subject experiments.
In this century, this line of work has kept its vitality, especially as a reaction to the perceived failure of neoclassical economics to both predict and explain the Great Recession. Two literature reviews (Arifovic & Duffy, 2018; Hommes, 2021) show that the link between experimental and agent-based economics is still a main tool for both disciplines.
Arifovic and Duffy (2018) describe the main classes of problems studied by agentic experimental economics. First, the “public good provision” problem is considered, where initially high contributions to a public good decline toward the Nash equilibrium of zero contributions. The study by Arifovic and Ledyard (2012) is a key reference in this literature and has been selected as the first example in this paper of the workflow for agentic experimental economics (see Section 5). A second problem is that of expectation coordination, which includes the study of “bank runs” as an important case. Arifovic et al. (2013) estimated a behavioral model that explains the outcomes of an experimental version of the Diamond and Dybvig model (Diamond & Dybvig, 1983). There is also a tradition of experimental work on the “life cycle savings–consumption” problem, where agents are given an exogenous income to allocate between consumption (concave per-period utility is induced by the experimental design) and remunerated savings. Even in very simple settings (Duffy & Li, 2019), large deviations from optimality are found. Often, this happens because only some participants understand the nature of the problem, while others (Carbone & Hey, 2004) follow misguided intuitions.
Finally, the most extensive class of agentic experiments are those related to expectation formation, both in the simple situation where there is no feedback between forecasts and outcomes, and the case where expectations influence outcomes, which includes the entire field of artificial markets. While this literature is covered by Arifovic and Duffy, we follow Hommes’s review, which is more specific to this problem. Hommes’s pioneering work (Brock et al., 2005) found “expectation coordination” around a few very simplified strategies that fed large and sustained price oscillations around the fundamental value.
Consequently, he summarizes the results by saying that “macro systems with strong positive feedback (i.e., near–unit root systems) typically exhibit coordination failures in the lab”. In the opposite case, where there is negative feedback between prices and price expectations (e.g., commodity markets, where high price expectations lead to higher supply and then lower prices), experiments confirm convergence to rational expectations. For the positive feedback case (financial asset markets), Hommes concludes that “A collection of individuals does not coordinate on the perfect rational equilibrium, even when it is unique, but rather coordinates on almost self-fulfilling equilibria characterized by correlated trend-following behavior and booms and bust price fluctuations around the rational fundamental benchmark”. To rationalize these results, Anufriev et al. (2013) identified a finite number of forecasting heuristics and calibrated a switching mechanism that could replicate the price dynamics. This article has been selected as the second example in this paper of the workflow for agentic experimental economics (Section 5).
Since Hommes’s review in 2021, we have identified several results in agentic experimental economics. In the study by Evans et al. (2022), an experiment induces both short and long investment horizons in the participants of an artificial market, and “Prices in markets populated by only short-horizon forecasters fail to converge to the REE, with large and prolonged deviations from fundamentals. By contrast, in line with our theoretical predictions, we find that even a relatively modest share of long-horizon forecasters is sufficient to induce convergence toward the REE”.
Anufriev et al. (2025) introduce the IEL-CDA model, which offers a more accurate representation of behavior in continuous double auctions (CDAs) compared with previous models such as zero intelligence (ZI) and individual evolutionary learning (IEL). It achieves this by integrating elements of IEL with a more sophisticated approach to hypothetical reasoning based on past transaction data. The article was included in the “Special issue on computational and experimental economics in memory of Jasmina Arifovic” of the Journal of Economic Dynamics and Control, which provides a recent perspective on the state of the art at the intersection of agent-based and experimental economics.

4. A Workflow for Agentic Experimental Economics

The discipline of experimental economics was created to put economic models “to the test”. In the typical experimental setup, the conditions of a relevant economic game are implemented in a computer platform (Smith, 1994), and the human participants are given economic rewards mimicking those of the situation of interest.
Theorems are a priori truths that cannot be tested: if all hypotheses hold, the conclusion is inevitable. Then, in most economic experiments, any discrepancy between the game-theoretical “expected” results and the experimental outcomes is necessarily the result of behavioral anomalies and their social amplification (Frey & Gallus, 2014), precisely because the experiment is designed to avoid any other difference with respect to the hypotheses of the economic theory under consideration. In particular, the experimenter often fixes the experimental rewards, that is, there is limited room for expressing preference heterogeneity. Moreover, behavioral heterogeneity derives from limitations in instrumental rationality (i.e., cognitive biases and shortcomings) and sometimes from cultural attitudes that conflict with the monetary incentives typical of economic experimentation.
If behavioral anomalies are the only possible explanation for experimental discrepancies, the natural aim of experimental economics is precisely to characterize their behavioral causes. This implies, first, the construction of a model of behavioral reaction and, then, the analysis of the effect of said behavioral model on aggregate outcomes. In this workflow, which is aligned with the best practices observed in the literature, I propose four steps. The first three steps allow for the construction of a generative model of a population of human-like players. In the final (validation) step, the artificial population thus calibrated plays the game, and the outcomes are compared with those of the human experiment. In Section 5, the four steps are illustrated with the two examples singled out in the previous section.
1. Propose a “baseline” behavioral benchmark
In game theory and neoclassical economics, the baseline behavior is obtained by optimization of an (often parameterized) utility function. This baseline (optimal) behavior can be obtained using analytical methods (e.g., the Nash equilibrium) or reinforcement learning (Mosavi et al., 2020) if the considered game is too complex for finding the equilibrium via analytical means (see Section 6 for a discussion of equilibrium computation with deep learning).
2. Run the experiment with human subjects
First, information characterizing behavioral reactions and individual heterogeneity is collected. A first session of individual play to measure the skills and behavioral propensities (e.g., risk aversion) of each subject could be helpful to obtain as many relevant individual-level behavioral determinants as possible. Then, the game is played by human subjects, and all the relevant information about individual behavior and emergent outcomes is collected. We do not further elaborate here, because this step is simply the “experiment” in experimental economics.
3. Use the experimental results to fit behavioral models, capturing the (probably heterogeneous) structure of behavioral bias
For the realistic replication of human behavior, strict optimality is relaxed parametrically, adding “bias” terms to the baseline models in step 1.
To date, the main successes in the behavioral explanation of experimental datasets have been based on individual evolutionary learning (IEL), where, first, a finite number of simple heuristics are identified and then used in a discrete choice model, in which the probability of changing the current heuristic in the next period is proportional to its historical performance in terms of utility (see Section 5). IEL is a type of reinforcement learning algorithm in which the coexistence of various strategies and the probability of transition among them allow one to balance exploration and exploitation (a schematic sketch of this mechanism is given right after this list). This kind of model is too multidimensional for maximum-likelihood estimation, and direct exploration of the parametric space, minimizing some type of mean squared error (MSE), is the usual estimation technique.
Dataset modeling is an open-ended process, and the entire repertoire of structural econometrics already mentioned (Bellemare, 2023; Conte & Moffatt, 2014) could be used, although IEL has proven especially successful, probably because cognitive limitations lead participants to a strategy of heuristic simplification in most experimental settings.
4. Validation: Re-run the experiment with artificial agents and compare outcomes between the artificial and human experiments
The social amplification of behavioral anomalies is the key rationale for the need for behavioral economics. Consequently, once econometric models of individual behavior have been estimated, a population of artificial agents with human biases can be used for the artificial simulation of the game (pseudo-experiment).
Both individual behavior (e.g., individual demand in experiments about economic equilibrium) and aggregate outcomes (e.g., prices in experiments about economic equilibrium) are compared between the pseudo-experiment and the real (human subject) experiment.
If the differences between experimental and pseudo-experimental results are acceptable, well-calibrated “golems” have been produced, and the “human” version of the game can be considered solved. If the differences are not acceptable, either new models of individual behavior must be tried, or some hidden variables are affecting behavior; in the latter case, the experimental design will probably have to be modified to collect additional relevant information.
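As a concrete illustration of step 3 (referenced above), the following Python fragment sketches the generic IEL mechanism: a finite pool of candidate strategies scored by foregone utility, replicated in proportion to performance, occasionally perturbed by experimentation, and sampled probabilistically. It is a schematic of the general idea under illustrative assumptions (class and method names, parameter values), not the exact algorithm of the papers discussed in Section 5.

```python
import random

class IELAgent:
    """Schematic individual evolutionary learning (IEL) agent.

    Keeps J candidate strategies (e.g., contribution levels), scores each by the
    utility it would have earned ("foregone utility"), replicates good strategies,
    mutates occasionally, and plays a strategy with probability proportional to
    its score.
    """

    def __init__(self, strategy_space, J=20, mutation_rate=0.05, seed=None):
        self.rng = random.Random(seed)
        self.space = list(strategy_space)
        self.pool = [self.rng.choice(self.space) for _ in range(J)]
        self.mutation_rate = mutation_rate

    def _weights(self, scores):
        # Shift scores so the worst strategy gets (almost) zero weight.
        floor = min(scores)
        return [s - floor + 1e-9 for s in scores]

    def choose(self, foregone_utility):
        # 1. Replication: resample the pool proportionally to performance.
        scores = [foregone_utility(s) for s in self.pool]
        self.pool = self.rng.choices(self.pool, weights=self._weights(scores), k=len(self.pool))
        # 2. Experimentation: occasionally replace a strategy with a random one.
        self.pool = [self.rng.choice(self.space)
                     if self.rng.random() < self.mutation_rate else s
                     for s in self.pool]
        # 3. Selection: play a strategy with probability proportional to its score.
        scores = [foregone_utility(s) for s in self.pool]
        return self.rng.choices(self.pool, weights=self._weights(scores), k=1)[0]

# Example: choosing a contribution when the foregone utility of contributing c
# is purely monetary (endowment 10, marginal per capita return 0.5).
agent = IELAgent(strategy_space=range(11), J=10, seed=42)
print(agent.choose(lambda c: 10 - c + 0.5 * c))
```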

5. The Workflow Applied

In this section, the two selected papers are described, and it is shown that they are aligned with the workflow.
1. Other-regarding preferences and the voluntary contributions mechanism: Arifovic and Ledyard (2012)
In the voluntary contributions mechanism (VCM), each participant receives an endowment and must decide how much to contribute to a public good. The total contributions are then multiplied by a factor and redistributed equally among all participants, regardless of their individual contributions. When the multiplicative factor is lower than the number of players, the “benchmark” rational model (step 1) is defined by the “zero contribution” Nash equilibrium (the game works as a generalized “prisoner’s dilemma”). In the version considered here, the mechanism is played repeatedly, and contributions to the public good begin at around 40% and often decline monotonically in successive rounds.
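In the standard formulation of the payoffs just described, with endowment $e$, contribution $c_i$, multiplication factor $m$, and group size $n$, each participant earns

$$\pi_i = e - c_i + \frac{m}{n}\sum_{j=1}^{n} c_j,$$

so that, whenever $m < n$, each token contributed returns only $m/n < 1$ tokens to its contributor, and zero contribution is individually optimal even though full contribution maximizes group earnings as long as $m > 1$.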
This environment is affected by a classical problem in experimental settings: often, the experimenter cannot fully induce the utility function of interest. No matter what narrow monetary interest is induced in the players, their real motivation often includes extra-monetary determinants (in this case, both pure altruism and a feeling of fairness toward the other participants).
The “human experiment” (step 2) is not run by the authors themselves: the VCM is a standardized experimental environment, and the authors rely on older experimental datasets: their calibration sample is based on Isaac and Walker (1988), but other datasets (Andreoni, 1988, 1995; Isaac et al., 1994; Andreoni & Croson, 2008) are used in the validation step.
The estimation of the individual-level behavioral model (step 3) is performed by the econometric combination of non-individualist preferences with evolutionary learning. Regarding preferences, utility is assumed to be the sum of three terms (with different weights for each player) representing personal income (individualism), average group income (altruism), and a “fairness term” in which individual and group-average contributions are compared. This utility function is named “other-regarding preferences” (ORPs). Given the ORPs and the structure of the VCM, a game is defined. The IEL model is used to analyze how participants engage with this game, with the set of strategies equal to the set of possible contributions.
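As an illustration of the shape of such a utility function (a stylized specification; the exact functional form and parameterization in Arifovic and Ledyard (2012) differ in their details), with the weight on own income normalized to one, one could write

$$u_i = \pi_i + \beta_i\,\bar{\pi} - \gamma_i \max\{0,\; \bar{c} - c_i\},$$

where $\pi_i$ is player $i$’s monetary payoff, $\bar{\pi}$ is the group-average payoff, $c_i$ and $\bar{c}$ are the individual and group-average contributions, and $\beta_i \geq 0$ and $\gamma_i \geq 0$ are the heterogeneous altruism and fairness weights.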
At the population level, the ORP parameters are the probability of being purely egoistic (P) and, for non-egoistic players, two additional parameters (named B and G) that govern the distribution from which the pure-altruism and fairness weights are drawn. The loss function (a normalized MSE) is the sum of the squared differences between the average contribution by treatment predicted by the model and that observed in the dataset, divided by twice the number of treatments.
Apart from the parameters determining the heterogeneous utility functions (ORP), the learning process has its own set of parameters, which define computational and memory capacity and the “exploration vs. exploitation” trade-off. A fascinating result is that, as long as these parameters are in a reasonable range, they barely affect either the ORP parameters or the fit measure (normalized MSE). IEL works by considering a finite number (J) of possible alternatives, which can be loosely thought of as a measure of the processing and/or memory capacity of the agent. The authors find diminishing returns to increases in J: if J is small, it takes a long time to learn, but as J increases, improvements occur only up to a point; after that, having more options under consideration provides no benefit to the agent. This result suggests a case of “cognitive irrelevance” (a milder version of the market efficiency of the “zero intelligence” traders commented on in Section 3).
The Isaac and Walker (1988) dataset was used to calibrate all the parameters of the utility functions and of the IEL transitions, but the authors recognize that the model’s econometric complexity makes direct statistical validation on the original data challenging.
Fortunately, the model calibrated on the Isaac and Walker (1988) dataset can be used to predict results in the related experimental environments already mentioned: this allows for the study of “model transferability”, an especially powerful kind of “out-of-sample” validation (step 4). The paper shows that an artificial population of agents playing according to the ORP-IEL behavioral model calibrated on one experimental dataset replicates most of the outcomes of the other experimental environments.
2. Asset price expectations in positive and negative feedback markets: Anufriev et al. (2013)
This paper builds upon the learning-to-forecast experiment of Heemeijer et al. (2009). The experiment is designed in such a way that agents are required to produce forecasts of the value of a good whose fundamental value is 60 (that is, the rational expectations benchmark, step 1). The market price of said good is a function of the average forecast. Each market runs for 50 forecasting periods, and there are only two treatments. In the negative feedback treatment, the relation between the average forecast ($\bar{p}^{\,e}_t$) and the market price is
$$p_t = 60 - \frac{20}{21}\left(\bar{p}^{\,e}_t - 60\right) + \varepsilon_t$$
Meanwhile, in the positive feedback treatment,
$$p_t = 60 + \frac{20}{21}\left(\bar{p}^{\,e}_t - 60\right) + \varepsilon_t$$
Each period’s payoff decreases with the forecasting error. Participants are paid according to accumulated rewards, with the reward per period computed (in euros) as
$$e_{i,t} = \frac{1}{2}\,\max\left\{0,\; 1 - \left(\frac{p_t - p^{e}_{i,t}}{7}\right)^{2}\right\}$$
For negative feedback, there is convergence toward the fundamental value, while for positive feedback, large price oscillations are sustained. To calibrate a generative model of forecasting that fits the experimental data (step 3), the authors consider two heuristic forecasting rules ($h \in \{1, 2\}$): first, the “adaptive heuristic”, where each player’s expectation of the market price in the next period depends on today’s market price and on the player’s expectation about today’s price:
$$p^{e}_{t+1} = w\, p_t + (1 - w)\, p^{e}_t$$
and second, the “trend heuristic”, where the expectation of the price in the next period equals the last price plus $\gamma$ times the last price change:
$$p^{e}_{t+1} = p_t + \gamma \left(p_t - p_{t-1}\right)$$
Then, the IEL methodology is applied in the “heuristic switching model”: in each period, a measure of each heuristic’s performance ($U_{h,t}$), given the forecast of rule $h$ for period $t$ ($p^{e}_{h,t}$), is constructed:
$$U_{h,t} = -\left(p_t - p^{e}_{h,t}\right)^{2} + \eta\, U_{h,t-1}$$
In this model, the probability of switching from the current heuristic to the other is proportional to that heuristic’s relative performance (via a logit-like mechanism); a schematic simulation of this switching model is sketched at the end of this section.
Unlike the two simple heuristics taken in isolation, the switching model reproduces the stylized facts of the experimental data: “A parsimonious model where agents switch between the adaptive and the trend heuristics does well in explaining the most important characteristics of the price dynamics observed in the experiment. Consistently with the experiment, in the negative feedback market the simulations showed strong oscillations in the first periods followed by quick convergence towards the equilibrium price. In the positive feedback market the model exhibited persistent deviations from the equilibrium price and slow oscillations”.
Once the qualitative properties have been assessed, model fit is compared among the three candidate models. Parameters are chosen to minimize the MSE (here, without normalization) between the one-period-ahead market price predicted by the model and that observed in the experimental dataset, and, given that the three competing models have different numbers of parameters, the Akaike and Bayesian information criteria are used alongside the MSE for a homogeneous comparison. For the “negative feedback” treatment, both the adaptive heuristic and the heuristic switching model yield a similar fit, clearly superior to that of the trend heuristic. In the positive feedback experiment, the heuristic switching model fits clearly better than the alternatives. When both treatments are considered together, the fit of the switching model (the application of IEL) is clearly superior to the alternatives.
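To make the mechanics described in this section concrete, the following Python fragment simulates a schematic heuristic switching model in both feedback treatments. It is a sketch under illustrative assumptions: the parameter values (w, γ, η, and the logit intensity β) and the exact switching rule are not those calibrated by Anufriev et al. (2013).

```python
import math
import random

def simulate_market(feedback, T=50, w=0.65, gamma=1.1, eta=0.7, beta=0.4, seed=1):
    """Schematic heuristic switching model for a learning-to-forecast market.

    feedback: +1 for the positive feedback treatment, -1 for the negative one.
    Two forecasting heuristics (adaptive and trend-following) coexist; the weight
    on each evolves through a logit rule on its accumulated squared-error
    performance. All parameter values are illustrative.
    """
    rng = random.Random(seed)
    fundamental = 60.0
    prices = [fundamental + rng.uniform(-5, 5), fundamental + rng.uniform(-5, 5)]
    forecast = {"adaptive": prices[-1], "trend": prices[-1]}
    utility = {"adaptive": 0.0, "trend": 0.0}
    for _ in range(2, T):
        p_prev, p_prev2 = prices[-1], prices[-2]
        # Heuristic forecasts for the current period.
        forecast["adaptive"] = w * p_prev + (1 - w) * forecast["adaptive"]
        forecast["trend"] = p_prev + gamma * (p_prev - p_prev2)
        # Logit shares from accumulated performance (normalized for numerical stability).
        best = max(utility.values())
        z = {h: math.exp(beta * (utility[h] - best)) for h in forecast}
        total = sum(z.values())
        share = {h: z[h] / total for h in forecast}
        avg_forecast = sum(share[h] * forecast[h] for h in forecast)
        # Market price as a function of the average forecast (slope 20/21).
        price = (fundamental + feedback * (20 / 21) * (avg_forecast - fundamental)
                 + rng.gauss(0, 0.25))
        # Update each heuristic's performance with memory parameter eta.
        for h in forecast:
            utility[h] = -(price - forecast[h]) ** 2 + eta * utility[h]
        prices.append(price)
    return prices

for fb, label in [(-1, "negative feedback"), (1, "positive feedback")]:
    print(label, "final prices:", [round(p, 1) for p in simulate_market(fb)[-5:]])
```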

6. Research Perspectives for Agentic Experimental Economics in the Age of Artificial Intelligence

For decades, the hypothesis of optimizing behavior has put substantial limitations on the modeling freedom of economists because of the “curse of dimensionality”, that is, the fact that finding optima of functions becomes numerically very demanding even for a search space of a modest number of dimensions. To deal with this problem, there have been two roads: neoclassical economics has simplified the models (e.g., by populating economies with identical “representative agents”), while agent-based economics has often surrendered optimization. Deep reinforcement learning “tames” the curse of dimensionality (Fernández-Villaverde et al., 2024) and allows the modeler to populate complex artificial worlds with super-human optimizers. Multi-Agent Reinforcement Learning (Albrecht et al., 2024) is the interdisciplinary field that deals with multi-agent interaction, and it is substantially different from classic game theory: even relatively simple games played by two players can produce chaotic regimes (Galla & Farmer, 2013). De La Fuente and Casadellà (2024) discuss the challenges of reinforcement learning in dynamic environments, where agents are influenced by others’ actions, complicating gradient estimations needed for policy updates.
Deep learning techniques suggest an alternative for dealing with games that are intractable for classic mathematical or computational (Savani & Turocy, 2025) game theory. For example, in the field of social choice, Casella (2005) introduced the storable votes (SV) voting mechanism, where participants in a sequence of elections are given additional votes in each period, so that they can signal the intensity of their preferences and avoid the minority disenfranchisement of majoritarian rules. Macías (2024) proposed a modification to deal with several problems, including strategic voting. The properties of the new mechanism were explored by computing a stationary Nash equilibrium (SNE), but only two-person games with a very limited number of storable votes were explored because of the computational cost of computing the SNE (see Elokda et al. (2024) for the theoretical properties of the SNE in the similar case of “resource sharing”). Given the computational complexity of finding a Nash equilibrium (Conitzer & Sandholm, 2008), the curse of dimensionality is an even more serious limitation in game theory than in economics, and taming it can be transformative for the field.
Experiments are an alternative for dealing with intractable games, but it is of intrinsic economic interest to be able to identify to what extent the experimental results are driven by human suboptimality (that is why, in our view, experiments without an optimal benchmark are of limited use). On the other hand, if superhuman artificial agents are available, this allows for the estimation of the “human gap”: by allowing a modest number of humans to interact in an experimental environment with many artificial players, the relative performance of humans in the considered game can be estimated.
This perspective can also be useful for the construction of “centaurs”, that is, support agents for human participants in a game. Haupt and Brynjolfsson (2025) argue that new methodological metrics for task performance are necessary to encourage human–machine systems that maximize their synergies.

7. Conclusions

Mainstream experimental economics is characterized by its focus on theory testing and “treatment effects” on aggregate outcomes. The “agentic” alternative is concerned with the econometric specification of individual behavior. In this study, first, a literature review of agentic experimental economics was provided. Furthermore, a stylized workflow was proposed to produce and validate an econometric estimation of individual behavior based on experimental data, detailed as follows: (i) create a baseline (“optimal”) behavioral benchmark (via analytical means or reinforcement learning) for the considered multi-agent game, (ii) conduct experiments with human subjects, (iii) use the experimental results to characterize the (heterogeneous) deviations from baseline behavior, and (iv) re-run the experiment with artificial agents calibrated in the previous step and compare the outcomes of the artificial and the human experiment. When the outcomes of the human experiment closely match those of the experiment conducted with calibrated artificial agents, we consider the “human version” of the multi-agent game solved.
To date, the most successful econometric specifications of individual behavior have been based on individual evolutionary learning (IEL), where, first, a finite number of simple heuristics are identified and then used in a discrete choice model, in which the probability of changing the current heuristic in the next period is proportional to its historical performance in terms of utility. This success could be related to cognitive limitations leading participants to a strategy of heuristic simplification in most experimental settings.
Deep reinforcement learning tames the curse of dimensionality, enabling more advanced modeling in economics and game theory. Additionally, the availability of superhuman artificial agents creates benchmarks for human performance, revealing the “human gap” in experimental contexts. Finally, experimental methods can play a prominent role in integrating human and machine intelligence.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Albrecht, S. V., Christianos, F., & Schäfer, L. (2024). Multi-agent reinforcement learning: Foundations and modern approaches. MIT Press. [Google Scholar]
  2. Andreoni, J. (1988). Why free ride?: Strategies and learning in public goods experiments. Journal of Public Economics, 37(3), 291–304. [Google Scholar] [CrossRef]
  3. Andreoni, J. (1995). Cooperation in public-goods experiments: Kindness or confusion? The American Economic Review, 85(4), 891–904. [Google Scholar]
  4. Andreoni, J., & Croson, R. (2008). Partners versus strangers: Random rematching in public goods experiments. Handbook of Experimental Economics Results, 1, 776–783. [Google Scholar]
  5. Anufriev, M., Arifovic, J., Donmez, A., Ledyard, J., & Panchenko, V. (2025). IEL-CDA model: A more accurate theory of behavior in continuous double auctions. Journal of Economic Dynamics and Control, 172, 104840. [Google Scholar] [CrossRef]
  6. Anufriev, M., Hommes, C. H., & Philipse, R. H. (2013). Evolutionary selection of expectations in positive and negative feedback markets. Journal of Evolutionary Economics, 23(3), 663–688. [Google Scholar] [CrossRef]
  7. Arifovic, J., & Duffy, J. (2018). Heterogeneous agent modeling: Experimental evidence. In Handbook of computational economics (Vol. 4, pp. 491–540). Elsevier. [Google Scholar]
  8. Arifovic, J., Jiang, J. H., & Xu, Y. (2013). Experimental evidence of bank runs as pure coordination failures. Journal of Economic Dynamics and Control, 37(12), 2446–2465. [Google Scholar] [CrossRef]
  9. Arifovic, J., & Ledyard, J. (2012). Individual evolutionary learning, other-regarding preferences, and the voluntary contributions mechanism. Journal of Public Economics, 96(9–10), 808–823. [Google Scholar] [CrossRef]
  10. Arthur, W. B. (1991). Designing economic agents that act like human agents: A behavioral approach to bounded rationality. American Economic Review Papers and Proceedings, 81, 353–359. [Google Scholar]
  11. Arthur, W. B. (1993). On designing economic agents that behave like human agents. Journal of Evolutionary Economics, 3(1), 1–22. [Google Scholar] [CrossRef]
  12. Bellemare, C. (2023). Estimation of structural models using experimental data from the lab and the field. Cambridge University Press. [Google Scholar]
  13. Brock, W. A., Hommes, C. H., & Wagener, F. O. (2005). Evolutionary dynamics in markets with many trader types. Journal of Mathematical Economics, 41(1–2), 7–42. [Google Scholar] [CrossRef]
  14. Carbone, E., & Hey, J. D. (2004). The effect of unemployment on consumption: An experimental analysis. The Economic Journal, 114(497), 660–683. [Google Scholar] [CrossRef]
  15. Casella, A. (2005). Storable votes. Games and Economic Behavior, 51(2), 391–419. [Google Scholar] [CrossRef]
  16. Cliff, D., & Bruten, J. (1997a). Minimal-intelligence agents for bargaining behaviors in market-based environments (Technical report HPL-97-91). Hewlett–Packard Research Labs. [Google Scholar]
  17. Cliff, D., & Bruten, J. (1997b). More than zero intelligence needed for continuous double-auction trading (Hewlett Packard Laboratories Paper HPL-97-157). Hewlett–Packard Research Labs. [Google Scholar]
  18. Conitzer, V., & Sandholm, T. (2008). New complexity results about Nash equilibria. Games and Economic Behavior, 63(2), 621–641. [Google Scholar] [CrossRef]
  19. Conte, A., & Moffatt, P. G. (2014). The econometric modelling of social preferences. Theory and Decision, 76, 119–145. [Google Scholar] [CrossRef]
  20. Czibor, E., Jimenez-Gomez, D., & List, J. A. (2019). The dozen things experimental economists should do (More of). Southern Economic Journal, 86(2), 371–432. [Google Scholar] [CrossRef]
  21. De La Fuente, N., & Casadellà, G. (2024). Game theory and multi-agent reinforcement learning: From Nash equilibria to evolutionary dynamics. arXiv, arXiv:2412.20523. [Google Scholar]
  22. Diamond, D. W., & Dybvig, P. H. (1983). Bank runs, deposit insurance, and liquidity. Journal of Political Economy, 91(3), 401–419. [Google Scholar] [CrossRef]
  23. Duffy, J. (2006). Chapter 19: Agent-based models and human subject experiments. In Handbook of computational economics. Elsevier. [Google Scholar]
  24. Duffy, J., & Li, Y. (2019). Lifecycle consumption under different income profiles: Evidence and theory. Journal of Economic Dynamics and Control, 104, 74–94. [Google Scholar] [CrossRef]
  25. Elokda, E., Bolognani, S., Censi, A., Dörfler, F., & Frazzoli, E. (2024). A self-contained karma economy for the dynamic allocation of common resources. Dynamic Games and Applications, 14(3), 578–610. [Google Scholar] [CrossRef]
  26. Evans, G. W., Hommes, C., McGough, B., & Salle, I. (2022). Are long-horizon expectations (de-) stabilizing? Theory and experiments. Journal of Monetary Economics, 132, 44–63. [Google Scholar] [CrossRef]
  27. Fernández-Villaverde, J., Nuño, G., & Perla, J. (2024). Taming the curse of dimensionality: Quantitative economics with deep learning (No. w33117). National Bureau of Economic Research. [Google Scholar]
  28. Fourcade, M., Ollion, E., & Algan, Y. (2015). The superiority of economists. Journal of Economic Perspectives, 29(1), 89–114. [Google Scholar] [CrossRef]
  29. Frey, B. S., & Gallus, J. (2014). Aggregate effects of behavioral anomalies: A new research area. Economics, 8(1). [Google Scholar] [CrossRef]
  30. Fréchette, G. R., & Schotter, A. (2015). The external validity of laboratory experiments: The misleading emphasis on quantitative effects. In Handbook of experimental economic methodology (pp. 391–406). Oxford University Press. [Google Scholar]
  31. Galla, T., & Farmer, J. D. (2013). Complex dynamics in learning complicated games. Proceedings of the National Academy of Sciences, 110(4), 1232–1236. [Google Scholar] [CrossRef] [PubMed]
  32. Gode, D. K., & Sunder, S. (1993). Allocative efficiency of markets with zero-intelligence traders: Market as a partial substitute for individual rationality. Journal of Political Economy, 101(1), 119–137. [Google Scholar] [CrossRef]
  33. Haupt, A., & Brynjolfsson, E. (2025, July 13–19). Position: AI should not be an imitation game: Centaur evaluations. Forty-Second International Conference on Machine Learning Position Paper Track, Vancouver, BC, Canada. [Google Scholar]
  34. Heckbert, S. (2009, July 13–17). Experimental economics and agent-based models. 18th World IMACS/MODSIM Congress, Cairns, Australia. [Google Scholar]
  35. Heemeijer, P., Hommes, C., Sonnemans, J., & Tuinstra, J. (2009). Price stability and volatility in markets with positive and negative expectations feedback: An experimental investigation. Journal of Economic Dynamics and Control, 33(5), 1052–1072. [Google Scholar] [CrossRef]
  36. Hommes, C. (2021). Behavioral and experimental macroeconomics and policy analysis: A complex systems approach. Journal of Economic Literature, 59(1), 149–219. [Google Scholar] [CrossRef]
  37. Isaac, R. M., & Walker, J. M. (1988). Group size effects in public goods provision: The voluntary contributions mechanism. The Quarterly Journal of Economics, 103(1), 179–199. [Google Scholar] [CrossRef]
  38. Isaac, R. M., Walker, J. M., & Williams, A. W. (1994). Group size and the voluntary provision of public goods: Experimental evidence utilizing large groups. Journal of Public Economics, 54(1), 1–36. [Google Scholar] [CrossRef]
  39. Keane, M. P. (2010). A structural perspective on the experimentalist school. Journal of Economic Perspectives, 24(2), 47–58. [Google Scholar] [CrossRef]
  40. Kremer, M. (2020). Experimentation, innovation, and economics. American Economic Review, 110(7), 1974–1994. [Google Scholar] [CrossRef]
  41. Lazear, E. P. (2000). Economic Imperialism. The Quarterly Journal of Economics, 115(1), 99–146. [Google Scholar] [CrossRef]
  42. Lengnick, M. (2013). Agent-based macroeconomics: A baseline model. Journal of Economic Behavior & Organization, 86, 102–120. [Google Scholar] [CrossRef]
  43. Macías, A. (2024). Storable votes with a “pay as you win” mechanism. Journal of Economic Interaction and Coordination, 19(1), 121–150. [Google Scholar] [CrossRef]
  44. McElreath, R. (2020). Statistical rethinking: A Bayesian course with examples in R and STAN (2nd ed.). Chapman & Hall/CRC Texts in Statistical Science. [Google Scholar]
  45. Mosavi, A., Faghan, Y., Ghamisi, P., Duan, P., Ardabili, S. F., Salwana, E., & Band, S. S. (2020). Comprehensive review of deep reinforcement learning methods and applications in economics. Mathematics, 8(10), 1640. [Google Scholar] [CrossRef]
  46. Myerson, R. (1999). Nash equilibrium and the history of economic theory. Journal of Economic Literature, 37(3), 1067–1082. [Google Scholar] [CrossRef]
  47. Reiley, D. (2015). The Lab and the Field: Empirical and Experimental Economics. In Handbook of experimental economic methodology (pp. 407–419). Oxford University Press. [Google Scholar]
  48. Ross, D. (2005). Economic theory and cognitive science: Microexplanation. MIT Press. [Google Scholar]
  49. Savani, R. L., & Turocy, T. L. (2025). Gambit: The package for computation in game theory, version 16.3.0. Available online: https://www.gambit-project.org (accessed on 2 September 2025).
  50. Schram, A. (2005). Artificiality: The tension between internal and external validity in economic experiments. Journal of Economic Methodology, 12(2), 225–237. [Google Scholar] [CrossRef]
  51. Smith, V. L. (1994). Economics in the laboratory. Journal of Economic Perspectives, 8(1), 113–131. [Google Scholar] [CrossRef]