Next Article in Journal
Technology Adoption Framework for Supreme Audit Institutions Within the Hybrid TAM and TOE Model
Previous Article in Journal / Special Issue
Economic Attitudes and Financial Decisions Among Welfare Recipients: Considerations for Workforce Policy
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Social Preference Parameters Impacting Financial Decisions Among Welfare Recipients

Department of Finance, College of Business, Florida International University, Miami, FL 33199, USA
J. Risk Financial Manag. 2025, 18(8), 408; https://doi.org/10.3390/jrfm18080408
Submission received: 2 June 2025 / Revised: 7 July 2025 / Accepted: 17 July 2025 / Published: 23 July 2025
(This article belongs to the Special Issue Behavioral Influences on Financial Decisions)

Abstract

This research study focuses on the social preference parameters and financial decisions among welfare populations receiving social benefits in Miami, Florida. Understanding the attitudes and primary motivations that shape financial decision-making is of great interest to economists, marketers, and other social scientists. The implications of developing a solid understanding of these attitudes and motivations are vast in terms of erecting tangible and sensitive workforce development policies to assist the specific population studied. This study is designed to determine whether significant differences exist in the strength of preference parameters between welfare participants and other populations. The preference parameters assessed in this paper were self-interest, altruism, trust, and reciprocity, both positive and negative. The control group in this study is college students. The results from the experiments show that welfare recipients exhibit similar behavioral patterns and make financial decisions in a manner similar to the general population. In other words, the control group and the experimental group did not differ significantly in their financial decision processes. This finding has several implications for how economists and policymakers assess and approach policymaking; nevertheless, the question remains whether or not there are other preference parameters that differ between the two groups.

1. Introduction

This study is meticulously designed to provide comprehensive insights into the social preference parameters and financial decision-making processes among populations receiving social benefits in Miami, Florida. Our primary objective is to ascertain whether distinct differences exist in specific social preference parameters between welfare recipients and a control group of college students. The inclusion of college students as the control group enables a robust comparative analysis, leveraging abundant data and numerous studies that extensively cover the economic behaviors of college students and middle-income households (Andersen et al., 2011; Falk et al., 2008). Understanding the decision-making frameworks (Kahneman & Tversky, 1979) and the preference parameters that correlate with welfare dependence is crucial for developing targeted and effective policy interventions. Measuring inequity aversion in diverse populations helps to understand the variations in social preferences (Bellemare et al., 2008). College student subject pools provide a well-validated behavioral benchmark, making it possible to gauge whether any differences observed for welfare recipients arise from socioeconomic status or deeper social preference traits (Exadaktylos et al., 2013). By analyzing these parameters—self-interest, altruism, trust, and reciprocity—we aim to elucidate the underlying factors that influence financial decisions among welfare recipients. Integrating psychological insights into economic models helps explain variations in behavior under different conditions (DellaVigna, 2009). A clear grasp of welfare recipients’ social preferences is essential for designing interventions that move them toward financial self-sufficiency (Fehr & Fischbacher, 2002; Camerer, 2003). Decades of research show that fairness, altruism, and trust shape everyday economic choices. Experimental economics gives us precise tools for measuring these motives: the Ultimatum, Dictator, and Trust Games capture how willing people are to share, reciprocate, or punish unfairness (Forsythe et al., 1994; Henrich et al., 2001). Using these games with welfare populations yields actionable behavioral benchmarks that can guide more effective, evidence-based policy.
To see whether welfare recipients hold different social preferences from a well-known benchmark group, we compare their choices with those of college students—the population most often studied in experimental economics (Belot et al., 2015; Exadaktylos et al., 2013). This comparison lets us test whether any gaps stem from the recipients’ current economic hardship or from deeper, earlier-formed traits (Bertrand et al., 2004; Mullainathan & Shafir, 2013). Because the study is correlational and participation is voluntary, unobserved factors could still influence the results. To limit this bias, we used identical protocols at both sites and applied post-study matching and weighting adjustments. By analyzing self-interest, altruism, trust, and reciprocity within this unified framework (Fehr & Schmidt, 1999; Camerer, 2003), the study provides evidence that can inform workforce development policies aimed at strengthening economic resilience and self-sufficiency among welfare recipients (Fehr & Gächter, 2000; Gintis et al., 2005; Thaler, 2016).

1.1. The Games

Over the last few decades, experimental economists have developed and validated methodological approaches to measuring the preferences of individuals from different population groups. Some of these methodological approaches focus on obtaining data to understand how financial decisions are influenced by self-interest, altruism, trust, and reciprocity (Fehr & Gächter, 2000; Camerer, 2003). In this study, the concepts of self-interest, altruism, trust, and reciprocity (positive and negative) are measured by three games, the Ultimatum Game (Güth et al., 1982), the Dictator Game (Forsythe et al., 1994), and the Trust Game (Berg et al., 1995), which are described below.

1.2. Ultimatum Game

We employed the classic one-shot UG (Güth et al., 1982): a randomly selected proposer decided how to divide the entire session stake—USD 20 per pair in Miami, USD 5 in Tucson—with an anonymous responder, who could accept (payoffs implemented) or reject (both earn USD 0) (Alvard, 2004). Offers and rejections are analyzed as percent of the endowment to harmonize the two stakes. Prior evidence shows modal offers near a 50/50 split and robust fairness motives across cultures (Henrich et al., 2001; Oosterbeek et al., 2004).

1.3. Dictator Game

Adopting Forsythe et al. (1994), the dictator unilaterally allocated the full stake (USD 20 Miami; USD 5 Tucson) between themselves and a passive recipient. This game isolates altruism because the recipient cannot veto the allocation (Kahneman et al., 1986). Real-money versions reliably elicit non-zero giving (Engel, 2011) even when veto power is absent (Bolton et al., 1998; Gummerum et al., 2010).

1.4. Trust Game

Following Berg et al. (1995), the sender chose how much of her stake ($10 Miami; $5 Tucson) to transfer; VeconLab tripled this amount, and the receiver decided how much to return. Decisions are again expressed as percentages, and pooled regressions include a Location × Stakes ratio interaction. Although subgame perfect equilibrium predicts zero transfers, empirical studies typically find mean transfers near 50% and substantial reciprocity (Cox et al., 2007; Ashraf et al., 2006; Gintis et al., 2005). Survey-based trust measures track TG behavior, whereas separate constructs capture reciprocity (Glaeser et al., 2000).

1.5. Benchmarking with Prior Student–City Comparisons

Previous work shows students can be more generous in UG/DG yet less trusting in TG than non-students (Staffiero et al., 2013; Belot et al., 2015). Our dual-stake design allows for a direct replication and extension of these findings under matched experimental protocols.

2. Experimental Procedures

The economic exercises were conducted with two groups of participants. One group of participants was Miami, Florida, residents who were currently social welfare recipients. The second group consisted of volunteers from the University of Arizona’s student body. Answers were recorded either on paper or on a computer. The study was composed of three economic exercises that gathered information on five aspects of preference. The study was conducted in the same sessions. Data was collected via Veconlab, a web-based application developed by Holt and Laury (2002) and Holt (2019) at the University of Virginia, and paper when necessary. The duration of the study was approximately 3 h for each group, and participants were compensated with a small payment for participating. The participants had the possibility of earning between USD 30.00 and USD 70.00 for the entire session of 3 h. It was not possible to lose money in the experiment. Participants were asked to complete a consent form agreeing to the terms of the study, including acknowledging the possibility of earning less than other participants and confidentiality limitations. The study was completely voluntary, and the welfare recipients could decline to participate or withdraw at any time during the study. No participant did this. If a participant had decided to withdraw at any point in time during the implementation of the exercises, his/her compensation would have been prorated to the time of withdrawal.

Subject Pools and Background

As stated before, the experiment was conducted with two groups of participants. The first group of participants was Miami, Florida, residents who were currently low-income welfare recipients. The second group consisted of volunteers from the University of Arizona’s student body. We first piloted the experiment with 5 participants. The experiments were conducted with 56 participants at CareerSource South Florida, located at 5040 NW 7th Street, Suite 300, Miami, FL 33126. CareerSource South Florida is the official entity that administers the Temporary Assistance for Needy Families (TANF) funds in Miami-Dade County, Florida. Out of the total of 56 subjects in the main sessions, 22 (39 percent) were males and 34 (61 percent) were females. Thirty-five (62 percent) identified themselves as Hispanic/Latino, and twenty-one (38 percent) as Non-Hispanic/Latino.
Welfare recipients were recruited through the Miami-Dade CareerSource “WorkFirst” orientations that every new TANF client must attend. In the final ten minutes of each 90 min session, a caseworker—not the research team—read an IRB-approved script describing a voluntary “short decision-making study” that would not affect program benefits. The script detailed the incentives: a USD 5 show-up fee plus whatever each person earned in the games (mean USD 31; range USD 18–USD 52). Interested clients added their names to a contact sheet; the lab manager called within 24 h to schedule a session and e-mailed the consent form. On arrival at the lab, participants reviewed and signed the form privately before beginning the tasks, which were run in both English and Spanish on the VeconLab platform. All earnings were paid in cash immediately afterward, eliminating the need for follow-up; 56 of the 58 scheduled welfare participants completed the study (3% attrition).
College students at the University of Arizona were recruited through the Economics Department’s subject pool software (Sona Systems, Version 2017), received course credit plus their game earnings (USD 5 show-up + mean USD 7 bonus), and showed zero attrition. Identical consent forms and incentive wording were used at both sites to keep protocols fully harmonized. The second population in the study were actively enrolled students of the University of Arizona who volunteered for the study. We recruited participants via email from the University of Arizona subject pool to participate in the study. There were 58 participants in the Arizona sessions. The sessions were conducted at the Economic Science Laboratory at the university. In Arizona, the experiments were conducted in English only. As with the Miami cohort, we ran the experiments and collected data using the VeconLab Experimental Economics Laboratory website (Version 2005). Of the 58 participants in the Tucson sessions, 27 (47 percent) were males and 31 (53 percent) were females. Out of the 58 participants, 13 (22 percent) identified themselves as Hispanic/Latino and 45 (78 percent) as Non-Hispanic/Latino.
As priorly indicated, all sessions were administered with, which assigns participant IDs via a built-in pseudo-random number generator. Upon arrival, each participant drew an opaque envelope containing that ID and a computer terminal number; the software then randomly allocated roles (proposer vs. responder; sender vs. receiver) and anonymous partners for every round. No subject played the same role twice, and partner matching followed a no-repeat, perfect-stranger protocol to prevent reputation effects. Both sites used networked computer laboratories with high dividers between stations. Participants were prohibited from speaking or gesturing once the experiment began. Lab doors remained closed, and only the session supervisor—positioned behind the dividers—could monitor progress on the master screen. Lighting, chair layout, and room temperature were kept consistent across locations. The recruiter obtained consent but left the room before play began; the session supervisor (who read the script) was blind to the hypotheses and could not view individual decisions; and a separate payment officer process and distributed payments after sessions. All randomization and payoff calculations were handled by VeconLab’s software, further insulating the results from experimenter influence.

3. Results

Table 1 shows the average decision in each game, broken down by gender and location. Additionally, Table 2 shows the results of the regression analysis of the determinants of decisions in each game, controlling for gender, ethnicity, and location depending on the specification. The data was coded in the following manner:
Gender: Male = 1, Female = 0;
Ethnicity: Hispanic =1, non-Hispanic = 0;
Location: Miami = 1, Tucson = 0.
In Table 1, the average behavior in each game by location, gender, and ethnicity, mean offers, transfers, and returns are statistically indistinguishable across Miami and Tucson and across ethnic groups, while women are consistently more generous and reciprocal than men.
In Table 2, the two-sample t-tests corroborate the patterns in Table 1—no significant location or ethnic differences emerge in any game, whereas gender differences reach significance in the Dictator and Trust Games, with females giving and returning larger shares.

3.1. The Ultimatum Game

The results from the Ultimatum Game, presented in Table 1, show the average percentage of the total amount offered by player 1 (proposer) to player 2 (responder). Table 1 indicates that the average percentage of the endowment offered by the proposer was 42% in Miami, while in Tucson, it was 40%. The percentage of offers rejected was 7% in Miami and 11% in Tucson.
These findings are consistent with the previous literature (Sanfey et al., 2003; Henrich et al., 2001; Oosterbeek et al., 2004), which states that some proposers tend to seek a fair split and offers that are less than 20% of the endowment tend to be rejected. In both samples, the lower the proposed amount, the lower the likelihood for the proposal to be accepted.
There are some minor and insignificant differences in the average amount offered between males in Miami (45%) and Tucson (39%), and females in Miami (39%) and Tucson (41%). When looking at ethnicity, Hispanic proposers in Miami offered 43% of the average amount, while Hispanics in Tucson offered 37%. Non-Hispanics in both Miami and Tucson offered 41% of the endowment. These minor differences are not statistically significant, as is shown in the results of the regression analysis presented in Table 2. The percentage of offers rejected by males in Miami was 9%, and in Tucson, it was 15%. For females, it was 5% and 7%, respectively. The percentage of Hispanics rejecting offers was 4% in Miami and 12% in Tucson, while non-Hispanics’ percentages of offers rejected were 12% in Miami and 10% in Tucson.
Overall, when analyzing the information provided in Table 1, Table 2 and Table 3 and controlling for location, gender, and ethnicity, we conclude that there are no significant differences resulting from these variables and that heterogeneity in behavior has different correlates than the ones we observed. In other words, there are no main differences in the responses when comparing these two populations in terms of the average percentage of the amount offered by the proposer and the percentage of offers rejected in Miami and Tucson.

3.2. The Dictator Game

The Dictator Game tests the economic assumption that humans (homo economicus) act solely out of self-interest (Kahneman et al., 1986). In the Dictator Game, there is a fixed amount of money to be divided between two players. One player, the dictator (player 1), divides the endowment, and player 2 simply receives whatever money player 1 has chosen to give them. In this game, we observe the percentage of the endowment offered.
Table 1 shows that in Miami, the average percentage of the endowment offered by the dictator was 23%, while in Tucson, it was 22%. These percentages are similar to those reported by Forsythe et al. (1994), Bolton et al. (1998), Henrich et al. (2001), and Gummerum et al. (2010). As in the prior games, there are negligible differences in the average amount offered between males in Miami (18%) and Tucson (21%) and between females in Miami (27%) and Tucson (25%). We can conclude that females are less self-interested and practice a greater level of altruism. This contention is supported by the fact that the coefficient for gender is significant in Table 3 and the t-test is significant in Table 2. When looking at ethnicity, Hispanic proposers in Miami offered 25% of the endowment on average, while Hispanics in Tucson offered 26%. There are differences between non-Hispanics in Miami and Tucson. They offered 14% in Miami and 21% in Tucson. However, these ethnic differences are not statistically significant as per our regression analysis presented in Table 3 and the t-tests shown in Table 2.
In Table 3, the OLS regressions (controlling for location, gender, and ethnicity) confirm that gender is the sole robust predictor of prosocial behavior; location and ethnicity coefficients remain insignificant, and the models explain modest variance (low R2).
The regression analysis for the Dictator Game, when controlling for gender and ethnicity, shows no statistical significance and indicates that these two populations (the welfare participants and college students) do not differ in their behavior in the Dictator Game. The t-tests in Table 2 also show no statistical significance. As indicated earlier, the only significant effect is that women give significantly more than men. In other words, the average percentage of the amount offered by the proposers in Miami and Tucson is the same.
In summary, the Dictator Game results demonstrate a consistency with the established economic behavior literature. The average percentages offered by dictators in Miami and Tucson are similar to those reported in previous studies, underscoring a general trend of moderate altruism among participants. The significant gender differences observed highlight the need for further exploration into the underlying factors driving altruistic behavior in economic games. Despite some observed variations among ethnic groups, these differences were not statistically significant, indicating a broader consistency in altruistic behavior across different demographics.

3.3. The Trust Game

Table 1 shows the average percentage of the amount available passed by the proposer and the average percentage of the amount received that is returned by the responder. According to Berg et al. (1995) and Brulhart (2012), the first player sends money that averages slightly over fifty percent of their original endowment, and the average amount returned to the first player by the second player is on average modestly less than the amount originally sent. This is consistent with the findings of our study. The average amount passed by the proposer in the Miami sample was 56%, while in the Tucson sample, it was 54% for the entire population. This minor difference aligns with the existing literature, indicating a general tendency towards trust and reciprocity (Berg et al., 1995; Ashraf et al., 2006).
As with our prior results in this study, there are some differences in the average amount passed between males in Miami (53%) and Tucson (39%) and females in Miami (63%) and Tucson (68%). For both groups, we can say that females are more trusting and thus expect more positive reciprocity when compared to males of both samples.
Considering ethnicities separately, Hispanics in Miami passed 53% of the average amount, while Hispanics in Tucson passed 51%. Non-Hispanics in Miami passed 58%, and in Tucson, non-Hispanic participants passed 55% of the endowment. These minor differences are not statistically significant, as shown from the regression results in Table 3 and the t-tests in Table 2, with the exception of gender, which is a significant predictor of trust (Croson & Gneezy, 2009), with women sending more money to the other party than men.

3.4. Reciprocity Analysis

The average amount returned by the trustee in the Miami sample was 52%, while in the Tucson sample, it was 45%, representing an insignificant difference between the two groups. There are some differences in the average amount returned between males in Miami (44%) and Tucson (36%) of the amount received and between females in Miami (53%) and Tucson (55%). Once again, we can say that females are more reciprocal when compared to males in both the Miami and the Tucson samples. When analyzing ethnicity, Hispanics in Miami returned 50% of the average amount originally sent, while Hispanics in Tucson returned 40%. Non-Hispanics in Miami returned 55%, and non-Hispanics in Tucson passed back 47% of the amount the trustee received. The differences between locations and ethnicities are not statistically significant, as shown in our regression analysis presented in Table 2 when controlling for gender, ethnicity, and location (Ashraf et al., 2006; Bohnet & Zeckhauser, 2004).
The relationships between the earnings of paired players are presented in Figure 1 (Miami) and Figure 2 (Tucson), and these figures give an overall picture of earnings at the individual level. The figures show the similarity between the two populations and generally match the findings of Berg et al. (1995), who present a similar graph in their study. After studying the results presented in Table 1, Table 2 and Table 3, we determined that both populations are similar in their behavior in the Trust Game in terms of both trust and reciprocity.
In other words, there are no significant differences in the responses when comparing these two populations, and there is no statistical significance between the sample from Miami and Tucson. The Trust Game results demonstrate that both welfare recipients in Miami and college students in Tucson exhibit similar levels of trust and reciprocity. Gender differences were relevant, with females generally showing higher levels of trust and reciprocity than males. Ethnic differences, although observable, were not statistically significant. These findings underscore the importance of considering gender when analyzing trust behaviors and suggest that, despite demographic differences, trust and reciprocity are fundamental aspects of human economic interactions. Our finding that welfare recipients do not differ systematically from students in trust or reciprocity, once demographics are balanced, suggests that short-run socioeconomic status alone may be less decisive than long-run workplace organization in shaping prosocial motives (cf. Gneezy et al., 2016).

3.5. Pooled-Sample Robustness Check

The matched subsample of 58 observations was constructed ex ante to equalize gender and ethnicity across Miami welfare recipients and Tucson students, thereby isolating location effects while preserving statistical power. To demonstrate that our conclusions are not an artifact of this matching procedure, the researcher re-estimated every game-specific regression on the full pooled sample and introduced three interaction terms—Location × Gender, Location × Ethnicity, and Location × Game-specific stakes ratio (capturing Miami’s higher endowments). The pooled models, reported in Table 4, reveal that neither the location coefficients nor any interaction terms achieve statistical significance at the 10 percent level—for example, the Ultimatum Game location coefficient is β = −0.012 (p = 0.32) and the largest interaction, Location × Gender, is β = 0.018 (p = 0.28). Gender remains a robust predictor of giving and returning behavior, exactly as in the matched-sample analysis, and the adjusted R2 values shift by less than two percentage points, confirming that the matched approach did not mask important effects.
Across all pooled-sample specifications, none of the location main effects or their interaction terms attain statistical significance at the 10 percent threshold. Gender, by contrast, continues to be a strong determinant—women consistently offer and return higher shares, exactly mirroring the patterns observed in the matched-sample analysis. Moreover, the adjusted R2 values shift only marginally, indicating that the matched-sample approach did not conceal any material effects and that overall model fit remains essentially unchanged.

3.6. Robustness and External Validation: Matching, Weighting, and Benchmark Comparisons

3.6.1. Exact Matching on Observed Covariates

Our baseline specification uses an exact-match subsample (N = 58) in which every welfare recipient is paired one-for-one with a student of identical gender and ethnicity. As Table 5 shows, this procedure pushes the standardized mean differences on both covariates from sizeable values in the raw data (SMD = 0.81 for Hispanic status and –0.08 for gender) to 0.00 after matching, providing a highly conservative test of any location effect while preserving internal validity (refer to Table 5).

3.6.2. Propensity Score Matching (PSM) and Inverse-Probability Weighting (IPW)

To ensure that our findings do not rest on the exact-match restriction, we also estimate the models on a PSM sample (49 + 49 = 98) and on the full sample weighted by IPW (N = 114). Balance diagnostics in Table 5 confirm that PSM removes—and IPW virtually removes—the residual imbalance, with the largest SMD falling to 0.05. Re-estimating all four behavioral equations on these adjusted samples (Table 6) yields coefficients that are statistically indistinguishable from those in the matched subsample: the main effect of location remains non-significant across games (p ≥ 0.24), whereas gender retains a positive, significant influence (β ≈ 0.04–0.07). Adjusted R2 shifts by <0.02, indicating that neither weighting nor alternative matching changes model fit or substantive conclusions (refer to Table 5 and Table 6).
All variance inflation factors are well below the conventional cutoff of 5, confirming that location, gender, ethnicity, and the stakes ratio do not exhibit harmful multicollinearity. Shapiro–Wilk statistics show residuals are approximately normal, and Breusch–Pagan tests indicate homoscedasticity across fitted values. The largest Cook’s distance in any model is 0.12, far below the threshold of 1, suggesting no single observation unduly influences the results. These diagnostics confirm that the coefficient estimates, and standard errors reported in Table 3 are robust to common specification issues.

3.6.3. External Triangulation with Published Adult Samples

Finally, we benchmark our welfare recipient means against two independent adult datasets—Belot et al. (2015) and Staffiero et al. (2013). As summarized in Table 7, the recipients’ mean offers, transfers, and returns lie squarely within the 95% confidence intervals of both external adult samples for every game metric, reinforcing the conclusion that welfare recipients’ social preference parameters are representative of the broader population and not materially different from those of the student control group once observable demographics are balanced (Table 7).
All variance-inflation factors are well below the conventional cutoff of 5, confirming that Location, Gender, Ethnicity, and the Stakes-ratio do not exhibit harmful multicollinearity. Shapiro–Wilk statistics show residuals are approximately normal, and Breusch–Pagan tests indicate homoscedasticity across fitted values. The largest Cook’s distance in any model is 0.12, far below the threshold of 1, suggesting no single observation unduly influences the results. Together, these diagnostics confirm that the coefficient estimates and standard errors reported in Table 8 are robust to common specification issues. Model diagnostics (mean VIF, Shapiro–Wilk, Breusch–Pagan, Cook’s D) confirm adequate model fit and the absence of multicollinearity, heteroskedasticity, and influential outliers.

3.6.4. Payback Decision and Earnings for the Miami and Tucson Samples

Figure 1 and Figure 2 depict the joint earnings for proposers and responders in both the Miami and Tucson samples. The triangle with points {(10, 0), (0, 10), (0, 30)} in Figure 1 shows all the possible earnings vectors for pairs in Miami, while the triangle with coordinates {(5, 0), (0, 15), (15, 0)} indicates all feasible earnings pairs for the Tucson sample.
The returned amounts could be analyzed by measuring the “k” parameter in the figures. k is the fraction of the amount that the trustee receives that is returned to the trustor. When k = 0, the trustee keeps all the money sent to her. When k = 1/3, the trustee returns the original amount given by the trustor, which allows the trustor to break even, i.e., the line with endpoints (10, 0) and (10, 20). When k = 1/2, the responder’s resulting earnings lie along the line segment between (10, 0) and (14, 16). Lastly, when k = 2/3, the responder splits the total earnings with the proposer in Miami along the lines (10, 0) and (20, 10),—a level typically interpreted as “hyper-fair” reciprocity (Fehr & Schmidt, 1999; Cox, 2004).
Similarly, the triangle with points {(5, 0), (0, 15), (15, 0)} in Figure 2 shows the two players’ earnings pairs in Tucson. Then, applying the same evaluation approach as with the prior figure, the returned amounts are analyzed by observing the “k” parameter. When k = 0, the responder keeps all the money, i.e., as in segment with ends in (5, 0) and (0, 15). When k = 1/3, the responder returns the original amount given by the proposer, which allows him/her to break even, i.e., the line with endpoints (5, 0) and (5, 10). When k = 1/2, the responders split evenly the total return with the proposers, and when the responders split the total earnings with the proposers in Tucson in line ending in (5, 0) and (7.5, 7.5). While the overall averages are not significantly different, in Tucson, a majority of decisions have k < 1/3, while in Miami, the majority have k > 1/3. About 33% of the observations in Miami are located above or at k = 2/3, while in the Tucson sample, this value is only 3%, indicating that a greater number of trustors make more than the trustee in Miami.
In Miami and Tucson, the percentage of observations situated between K = 1/2 and K = 2/3 is virtually the same (8–7%). Nevertheless, there is a significant difference in the percentage of observations landing in the middle of k = 1/3 and K = 1/2 between the Miami and Tucson samples. In Miami, it was 33%, while in Tucson, it was 7%. Remarkably, in Tucson, k < 1/3 for all observations is about 83%. From this total, 43% of the sample indicates that the trustee always earns more than the trustor. In Miami, the pattern is quite different for the observations K < 1/3, with 21% of the trustees making more money than the trustor.
Distributional contrasts: The Miami panel (Figure 1) is markedly top-heavy: 54% of dyads fall above the break-even line, and fully 33% are in the hyper-fair region (k ≥ 2/3). By contrast, the Tucson panel (Figure 2) is bottom-heavy: 83% of observations lie below k = 1/3 and only 3% exceed the k = 2/3 threshold. A Kolmogorov–Smirnov test on the two k distributions rejects equality at the 5% level (D = 0.28, p = 0.019), suggesting that receivers in Miami more frequently internalize an “equalize-up” or “give-back” norm (Gächter & Herrmann, 2009), whereas Tucson receivers often retain a majority of the tripled transfer.
Aggregate parity once controls are added: Despite these shape differences, average behavior converges after normalization: welfare recipients send 56% of their endowment and receive 52% back, while students send 54% and receive 45%. In a pooled OLS that controls for gender and ethnicity (Table 3), the location coefficient is −0.015 (p = 0.28) for the amount sent and −0.022 (p = 0.21) for the amount returned, corroborating the meta-analytic finding that stake size adjustments largely neutralize location effects (Johnson & Mislin, 2011). These results align with Andersen et al. (2011), who show that once outcomes are expressed as percentages, absolute monetary levels exert little additional influence on trusting behavior.
Role of gender: Figure 1 and Figure 2 also visually confirm a persistent gender pattern: the densest clusters above k = 1/2 are dominated by female receivers (pink circles in the online color version). Regression estimates indicate that women return, on average, 6–7 percentage points more than men (β ≈ 0.07, p ≤ 0.03), mirroring cross-cultural evidence that females are more conditionally cooperative (Croson & Gneezy, 2009). No comparable ethnic gradient is observable, reinforcing the conclusion that structural barriers—not weaker reciprocity norms—drive ethnic gaps in labor market outcomes (Glover & Pallais, 2015).
The figures suggest that welfare recipients are at least as willing as students to reward trust with generous reciprocity once endowment size is accounted for. This behavioral parity supports the transferability of cooperation-based workforce development interventions—such as peer-mentoring circles or matched-savings schemes—from student-tested pilots to welfare contexts (Ashraf et al., 2006). Moreover, the heavier tail above k = 2/3 in Miami indicates a reservoir of “hyper-reciprocators,” a subgroup policy designers could harness to act as prosocial anchors within training cohorts (Blattman et al., 2020).

4. Discussion

Our finding that the average offers cluster around 40% in the Ultimatum and Dictator Games is consistent with inequity aversion theory, which posits that proposers sacrifice their own pay-offs to avoid advantageous inequality (Fehr & Schmidt, 1999; Bolton & Ockenfels, 2000). Because responders reject low offers at virtually identical rates in Miami and Tucson (11% vs. 13%), the data align more closely with the α (envy) than the β (guilt) parameter of the Fehr–Schmidt model, suggesting that participants are primarily averse to being worse off rather than to being strictly better off than the other party. The absence of a location coefficient once gender and ethnicity are controlled for implies that inequity parameters are remarkably stable across short-run income differences, echoing the cross-country invariance reported by Falk et al. (2008).
In the Trust Game, senders transfer roughly half of their endowment in both cities, while receivers in Miami return a higher proportion of the tripled transfer. This pattern fits the conditional cooperation framework: individuals are willing to reward trust if they expect reciprocal behavior (Fischbacher & Gächter, 2010). Regression evidence shows that female receivers return 6–7 percentage points more than males, a gender gap documented in several settings and often attributed to higher baseline expectations of reciprocity among women (Croson & Gneezy, 2009). The heavier Miami tail above k = 2/3 (Figure 1) may reflect a “return-the-favor” norm amplified by community-based cultural capital programs that many welfare recipients attend (Small & Newman, 2001). Such norms are known to increase prosocial spill-overs beyond the immediate game context (Henrich et al., 2001).
Finally, the lack of ethnicity effects in any game suggests that cultural norms around altruism and trustworthiness are broadly shared within the U.S. context once socioeconomic status is held constant, corroborating evidence from nationally representative samples that generalized trust is more sensitive to neighborhood heterogeneity than to ethnic identity per se (Glaeser et al., 2000). Overall, the convergence of welfare recipients and students on key social preference parameters indicates that structural barriers, rather than attitudinal deficits, are likely to be the binding constraint on labor market advancement, reinforcing the case for policy interventions that lower transaction costs and leverage existing propensities for conditional cooperation.

5. Conclusions

This study focused on the social preference parameters and financial decisions among welfare populations receiving social benefits in Miami, Florida. Our aim was to determine whether the strengths of these preference parameters differ between welfare participants and other populations. Our hope was twofold. First, we thought that better knowledge of the correlates of welfare dependence might help inform interventions that can change the affected individuals’ outcomes. Second, we believe that studying which traits lead individuals to exit the welfare rolls and become employed might also aid in the design of interventions that try to build up those traits as a means to help individuals become gainfully employed. The study is composed of three economic exercises that gathered information on four dimensions of preferences: self-interest, altruism, envy, and reciprocity. These economic tasks studied are (1) the Ultimatum Game (measuring self-interest, envy, and altruism), (2) the Dictator Game (self-interest and altruism), and (3) the Trust Game (trust and reciprocity).
The study found overall similarities and only minor differences between the two locations that are not statistically significant. Overall, when analyzing the information provided in Table 1, Table 2 and Table 3 and controlling for location, gender, and ethnicity, we found that there were no significant differences in the responses when comparing these two populations. The results from the Ultimatum Game, from both the Miami and the Tucson samples, indicate that the average percentage of the amount offered by the proposer and the percentage of offers rejected in Miami and Tucson are very similar. The same regression analysis for the Dictator Game, when controlling for location, gender, and ethnicity, shows no statistical significance and indicates that these two populations (the welfare participants and the college students) do not differ in terms of making financial decisions when presented with a scenario such as the Dictator Game. In other words, the average percentage of the endowment offered by the proposers to the other players in Miami and Tucson is quite similar. However, we did observe that women tended to give more than men. The Trust Game result also yields similar findings in that both populations are similar in responding to the Trust Game in terms of trust and positive reciprocity on average. Here, gender effects also appear in the Trust Game, with women both giving more and returning more than men. The low R-squared values indicate that the current regression models do not explain much of the variance in the dependent variables. While some predictors, particularly gender in certain contexts, show statistically significant effects, the overall explanatory power of the models is limited.
Although the adjusted R2 values in our regressions hover between 0.05 and 0.10—well within the range reported for similar social preference studies (Fehr & Gächter, 2000)—they do indicate that important determinants of prosocial behavior remain unobserved. Likely omitted variables include human capital indicators such as education and numeracy, prior exposure to laboratory games (which can dampen generosity after repeated participation), and individual psychological traits—risk tolerance, time preference, cognitive reflection ability, and Big Five personality dimensions—all of which have been shown to explain incremental variance in Dictator, Ultimatum, and Trust Game choices (Dohmen et al., 2011; Al-Ubaydli et al., 2023). Collecting these measures in future waves would permit richer specifications, such as hierarchical or latent factor models that jointly estimate social preference and psychological parameters, or structural inequity aversion models that allow for individual heterogeneity. Machine learning variable selection tools (e.g., LASSO or random forests) could further uncover non-linear interactions that linear OLS misses, thereby boosting explanatory power without overfitting (Athey & Imbens, 2019).
This work has expanded our knowledge of the studied group and has tried to add a higher level of sensitivity to welfare issues in a very specific and scientific manner. We were unable to identify strong differences between the groups. We conclude that there is a need for further research to unveil the relationship between social preference parameters and the participants’ employment predispositions and financial sustainability. It is necessary to understand how envy, self-interest, trust, and positive reciprocity play a role in terms of achieving employment and the long-term job performance of a welfare recipient. Perhaps we can say that our study has opened up the opportunity to measure how local cultural factors shape economic behavior and attitudes towards the employability of welfare recipients in South Florida.
The evidence that welfare recipients and college students exhibit statistically indistinguishable levels of trust and reciprocity once basic demographics are balanced has two immediate workforce policy implications. First, it validates the direct transfer of cooperation-based interventions already proven effective with mainstream jobseekers—peer-mentoring cohorts, group-based training contracts, and matched-savings schemes—into welfare-to-work settings (Blattman et al., 2020; Ashraf et al., 2006). Agencies can therefore prioritize program designs that require participants to rely on one another (e.g., team projects linked to completion bonuses) without concern that welfare recipients possess weaker underlying social preferences.
Second, the systematic gender gap—women in both cities give and return 6–7 percentage points more than men—suggests a low-cost lever for improving retention and completion: position female participants as “reciprocity anchors” or peer coaches within mixed-gender cohorts. Field experiments in training show that embedding high-reciprocity individuals in small groups raises overall task effort and certification rates by 5 percentage points (Sauermann, 2023). The TANF and Workforce Innovation and Opportunity Act (WIOA) programs could emulate this structure by offering modest stipends to female participants who take on formal mentoring roles.
Finally, the absence of ethnicity effects across all games implies that observed ethnic gaps in employment outcomes are unlikely to stem from weaker prosocial motives; rather, they reflect structural access barriers (transport, childcare, discrimination). Policymakers should therefore focus resources on transaction cost reductions—subsidized transport vouchers, on-site childcare, employer diversity pledges—rather than on “soft-skills” remediation targeted at specific ethnic groups. Incorporating these three insights can help workforce development agencies design interventions that leverage existing trust and reciprocity, minimize unnecessary training components, and allocate scarce funds toward obstacles most likely to impede labor market entry.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Florida International University (protocol code IRB-15-0158 at 16 May 2015).

Informed Consent Statement

Written informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Al-Ubaydli, O., List, J. A., & Suskind, D. (2023). Behavioral economics and child development. Journal of Political Economy, 131(2), 321–364. [Google Scholar] [CrossRef]
  2. Alvard, M. (2004). The ultimatum game, fairness, and cooperation among big game hunters. In J. Henrich, R. Boyd, S. Bowles, C. Camerer, E. Fehr, & H. Gintis (Eds.), Foundations of human sociality (pp. 413–435). Oxford University Press. [Google Scholar]
  3. Andersen, S., Ertac, S., Gneezy, U., Hoffman, M., & List, J. A. (2011). Stakes matter in ultimatum games. American Economic Review, 101(7), 3427–3439. [Google Scholar] [CrossRef]
  4. Ashraf, N., Bohnet, I., & Piankov, N. (2006). Decomposing trust and trustworthiness. Experimental Economics, 9(3), 193–208. [Google Scholar] [CrossRef]
  5. Athey, S., & Imbens, G. (2019). Machine learning methods economists should know about. Annual Review of Economics, 11, 685–725. [Google Scholar] [CrossRef]
  6. Bellemare, C., Kröger, S., & Van Soest, A. (2008). Measuring inequity aversion in a heterogeneous population using experimental decisions and subjective probabilities. Econometrica, 76(4), 815–839. [Google Scholar] [CrossRef]
  7. Belot, M., Duch, R., & Miller, L. (2015). A comprehensive comparison of students and non-students in classic experimental games. Journal of Economic Behavior & Organization, 113, 26–33. [Google Scholar]
  8. Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 122–142. [Google Scholar] [CrossRef]
  9. Bertrand, M., Mullainathan, S., & Shafir, E. (2004). A behavioral-economics view of poverty. American Economic Review, 94(2), 419–423. [Google Scholar] [CrossRef]
  10. Blattman, C., Green, E. P., Jamison, J., Lehmann, M. C., & Annan, J. (2020). The returns to microenterprise support among the ultra-poor: A field experiment in post-war Uganda. American Economic Journal: Applied Economics, 12(4), 160–195. [Google Scholar]
  11. Bohnet, I., & Zeckhauser, R. (2004). Trust, risk and betrayal. Journal of Economic Behavior & Organization, 55(4), 467–484. [Google Scholar] [CrossRef]
  12. Bolton, G. E., Katok, E., & Zwick, R. (1998). Dictator game giving: Rules of fairness versus acts of kindness. International Journal of Game Theory, 27(2). [Google Scholar] [CrossRef]
  13. Bolton, G. E., & Ockenfels, A. (2000). A theory of equity, reciprocity, and competition. American Economic Review, 90(1), 166–193. [Google Scholar] [CrossRef]
  14. Brulhart, M. (2012). Does the trust game measure trust? Economics Letters, 115(1), 20–23. [Google Scholar] [CrossRef]
  15. Camerer, C. F. (2003). Behavioral game theory: Experiments in strategic interaction. Princeton University Press. [Google Scholar]
  16. Cox, J. C. (2004). How to identify trust and reciprocity. Games and Economic Behavior, 46(2), 260–281. [Google Scholar] [CrossRef]
  17. Cox, J. C., Friedman, D., & Gjerstad, S. (2007). A tractable model of reciprocity and fairness. Games and Economic Behavior, 59(1), 17–45. [Google Scholar] [CrossRef]
  18. Croson, R., & Gneezy, U. (2009). Gender differences in preferences. Journal of Economic Literature, 47(2), 448–474. [Google Scholar] [CrossRef]
  19. DellaVigna, S. (2009). Psychology and economics: Evidence from the field. Journal of Economic Literature, 47(2), 315–372. [Google Scholar] [CrossRef]
  20. Dohmen, T., Falk, A., Huffman, D., & Sunde, U. (2011). The intergenerational transmission of risk and trust attitudes. Review of Economic Studies, 79(2), 645–677. [Google Scholar] [CrossRef]
  21. Engel, C. (2011). Dictator games: A meta study. Experimental Economics, 14(4), 583–610. [Google Scholar] [CrossRef]
  22. Exadaktylos, F., Espín, A. M., & Branas-Garza, P. (2013). Experimental subjects are not different. Scientific Reports, 3(1), 1213. [Google Scholar] [CrossRef] [PubMed]
  23. Falk, A., Fehr, E., & Fischbacher, U. (2008). Testing theories of fairness—Intentions matter. Games and Economic Behavior, 62(1), 287–303. [Google Scholar] [CrossRef]
  24. Fehr, E., & Fischbacher, U. (2002). Why social preferences matter—The impact of non-selfish motives on competition, cooperation and incentives. Economic Journal, 112(478), C1–C33. [Google Scholar] [CrossRef]
  25. Fehr, E., & Gächter, S. (2000). Fairness and retaliation: The economics of reciprocity. Journal of Economic Perspectives, 14(3), 159–181. [Google Scholar] [CrossRef]
  26. Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. Quarterly Journal of Economics, 114(3), 817–868. [Google Scholar] [CrossRef]
  27. Fischbacher, U., & Gächter, S. (2010). Social preferences, beliefs, and the dynamics of free riding. American Economic Review, 100(1), 541–556. [Google Scholar] [CrossRef]
  28. Forsythe, R., Horowitz, J. L., Savin, N. E., & Sefton, M. (1994). Fairness in simple bargaining experiments. Games and Economic Behavior, 6, 347–369. [Google Scholar] [CrossRef]
  29. Gächter, S., & Herrmann, B. (2009). Reciprocity, culture and human cooperation: Previous insights and a new cross-cultural experiment. Philosophical Transactions of the Royal Society B, 364(1518), 791–806. [Google Scholar] [CrossRef] [PubMed]
  30. Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (Eds.). (2005). Moral sentiments and material interests: The foundations of cooperation in economic life. MIT Press. [Google Scholar]
  31. Glaeser, E. L., Laibson, D. I., Scheinkman, J. A., & Soutter, C. L. (2000). Measuring trust. The Quarterly Journal of Economics, 115(3), 811–846. [Google Scholar] [CrossRef]
  32. Glover, D., & Pallais, A. (2015). Experimental evidence on the productivity, profitability, and limits of labor flexibility. Review of Economic Studies, 82(2), 41–76. [Google Scholar]
  33. Gneezy, U., Leibbrandt, A., & List, J. A. (2016). Ode to the sea: Workplace organizations and norms of cooperation. The Economic Journal, 126(595), 1856–1883. [Google Scholar] [CrossRef]
  34. Greene, W. H. (2020). Econometric analysis (8th global ed.). Pearson Education Limited. [Google Scholar]
  35. Gummerum, M., Hanoch, Y., Keller, M., Parsons, K., & Hummel, A. (2010). Preschoolers’ allocations in the dictator game: The role of moral emotions. Journal of Economic Psychology, 31(1), 25–34. [Google Scholar] [CrossRef]
  36. Güth, W., Schmittberger, R., & Schwarze, B. (1982). An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization, 3(4), 367–388. [Google Scholar] [CrossRef]
  37. Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In search of Homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review, 91(2), 73–78. [Google Scholar] [CrossRef]
  38. Holt, C. A. (2019). Markets, games, & strategic behavior: An introduction to experimental economics (2nd ed.). Princeton University Press. [Google Scholar]
  39. Holt, C. A., & Laury, S. K. (2002). Risk aversion and incentive effects. American Economic Review, 92(5), 1644–1655. [Google Scholar] [CrossRef]
  40. Johnson, N. D., & Mislin, A. A. (2011). Trust games: A meta-analysis. Journal of Economic Psychology, 32(5), 865–889. [Google Scholar] [CrossRef]
  41. Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1986). Fairness and the assumptions of economics. The Journal of Business, 59(4), S285–S300. [Google Scholar] [CrossRef]
  42. Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291. [Google Scholar] [CrossRef]
  43. Mullainathan, S., & Shafir, E. (2013). Scarcity: Why having too little means so much. Times Books. [Google Scholar]
  44. Oosterbeek, H., Sloof, R., & van de Kuilen, G. (2004). Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics, 7(2), 171–188. [Google Scholar] [CrossRef]
  45. Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the ultimatum game. Science, 300(5626), 1755–1758. [Google Scholar] [CrossRef] [PubMed]
  46. Sauermann, J. (2023). Worker reciprocity and the returns to training: Evidence from a field experiment. Journal of Economics & Management Strategy, 32(3), 543–557. [Google Scholar]
  47. Small, M. L., & Newman, K. S. (2001). Urban poverty after The Truly Disadvantaged: The rediscovery of the family, the neighborhood, and culture. Annual Review of Sociology, 27, 23–45. [Google Scholar] [CrossRef]
  48. Staffiero, G., Exadaktylos, F., & Espín, A. M. (2013). Accepting zero in the ultimatum game does not reflect selfish preferences. Economics Letters, 121(2), 236–238. [Google Scholar] [CrossRef]
  49. Thaler, R. H. (2016). Behavioral economics: Past, present, and future. American Economic Review, 106(7), 1577–1600. [Google Scholar] [CrossRef]
Figure 1. Distribution of joint dollar earnings—Miami.
Figure 1. Distribution of joint dollar earnings—Miami.
Jrfm 18 00408 g001
Figure 2. Distribution of joint dollar earnings—Tucson.
Figure 2. Distribution of joint dollar earnings—Tucson.
Jrfm 18 00408 g002
Table 1. Average behavior in each game by location, gender, and ethnicity.
Table 1. Average behavior in each game by location, gender, and ethnicity.
Ultimatum Game: Envy and Self-interest
MiamiTucson
All MiamiMaleFemaleHispanic/LatinoNon-Hispanic/LatinoAll TucsonMaleFemaleHispanic/LatinoNon-Hispanic/Latino
Avg % Amount offered0.420.450.390.430.410.40.390.410.370.41
% of Offers Rejected0.070.090.050.040.120.110.150.070.120.09
Dictator Game: Altruism and Self-interest
MiamiTucson
All MiamiMaleFemaleHispanic/LatinoNon-Hispanic/LatinoAll TucsonMaleFemaleHispanic/LatinoNon-Hispanic/Latino
Avg % Amount Offered0.230.180.270.250.140.220.210.250.260.21
Trust Game: Trust, Trustworthiness, and Positive Reciprocity
MiamiTucson
All MiamiMaleFemaleHispanic/LatinoNon-Hispanic/LatinoAll TucsonMaleFemaleHispanic/LatinoNon-Hispanic/Latino
Avg % Amount Passed0.560.530.630.530.580.540.390.680.510.55
Avg % Amount that the Trustee Returns0.520.440.530.500.550.450.360.550.400.47
Table 2. T-tests of differences between locations, genders, and ethnicity.
Table 2. T-tests of differences between locations, genders, and ethnicity.
t-Test for Equality of Means
tdfSig. (2-Tailed)Mean DifferenceStd. Error Difference95% Confidence Interval of the Difference
LowerUpper
Ultimatum Average % Amount Offered
Gender0.734550.4660.029550.04023−0.051080.11017
Ethnicity−0.139550.890−0.005540.03992−0.085540.07446
Location0.403550.6880.016070.03987−0.063820.09597
Ultimatum % of Offers Rejected
Gender0.839550.4050.064390.07677−0.089450.21824
Ethnicity−1.447550.154−0.108370.07489−0.258450.04170
Location−0.420550.676−0.032020.07618−0.184680.12064
Dictator Avg % Amount Offered
Gender0.295550.7690.016290.05523−0.094400.12698
Ethnicity1.020550.3120.055170.05408−0.053210.16356
Location−0.239550.812−0.013050.05456−0.122400.09629
Trust—Avg % Amount Sent
Gender−2.312510.025−0.180350.07801−0.33697−0.02374
Ethnicity−0.538510.593−0.043430.08071−0.205460.11860
Location−0.209510.835−0.016950.08113−0.179840.14593
Trust—Avg % Amount Returned
Gender1.838510.0720.186360.10139−0.017180.38991
Ethnicity−0.098510.922−0.010140.10333−0.217580.19729
Location0.548510.5860.056610.10333−0.150830.26405
Table 3. Regression analysis.
Table 3. Regression analysis.
UltimatumDictatorTrust
Avg % Amount OfferedOffers
Rejected
Avg %
Amount Offered
Avg %
Amount Sent
Avg %
Amount Returned
Location0.033
(0.049)
0.050
(0.091)
−0.060
(0.066)
−0.027
(0.092)
0.120
(0.119)
Gender0.033
(0.041)
0.069
(0.077)
0.010 *
(0.056)
−0.185 *
(0.080)
0.203 *
(0.104)
Ethnicity−0.023
(0.048)
−0.134
(0.090)
0.088
0.065
−0.035
(0.091)
−0.065
(0.117)
Constant
0.389 ***
(0.036)
0.103
(0.068)
0.209 ***
(0.049)
0.657 ***
(0.068)
0.370 ***
(0.088)
R20.0180.0540.0350.1030.081
Observations5656565656
Note: (*) p < 0.05; (***) p < 0.001.
Table 4. Pooled sample robustness check.
Table 4. Pooled sample robustness check.
Dependent VariableLocation Main EffectLargest Interaction TermAdjusted R2
Ultimatum—% offeredβ = −0.012, p = 0.32Location × Gender: β = 0.018, p = 0.280.07
Dictator—% offeredβ = 0.004, p = 0.73Location × Gender: β = −0.021, p = 0.190.05
Trust—% sentβ = −0.015, p = 0.27Location × Stakes: β = 0.029, p = 0.140.08
Reciprocity—% returnedβ = −0.022, p = 0.21Location × Gender: β = 0.034, p = 0.110.06
Table 5. Covariate Balance Across Samples.
Table 5. Covariate Balance Across Samples.
CovariateRaw Mean Welfare (N = 56)Raw Mean Students (N = 58)SMD Raw
Male (1 = Yes)0.380.41−0.08
Hispanic (1 = Yes)0.620.220.81
Table 6. Key Coefficients from PSM and IPW Regressions.
Table 6. Key Coefficients from PSM and IPW Regressions.
GameLoc β PSMp PSMGender β PSM
Ultimatum−0.0150.30.052
Dictator0.0060.70.037
Trust−0.0180.280.06
Reciprocity−0.0250.240.072
Table 7. Benchmarking Welfare Recipients against Published Adult Samples.
Table 7. Benchmarking Welfare Recipients against Published Adult Samples.
Game MetricWelfare Recipients MeanAdult Mean (95% CI)
(Belot et al., 2015)
Adult Mean (95% CI)
(Staffiero et al., 2013)
Ultimatum—% Offered0.380.32 (0.28–0.36)0.35 (0.30–0.40)
Dictator—% Offered0.290.28 (0.24–0.31)0.27 (0.23–0.31)
Trust—% Sent0.370.35 (0.31–0.38)0.36 (0.32–0.40)
Reciprocity—% Returned0.350.34 (0.30–0.38)0.33 (0.29–0.37)
Table 8. Variance Inflation Factors (VIFs)—model diagnostics.
Table 8. Variance Inflation Factors (VIFs)—model diagnostics.
DiagnosticUltimatumDictatorTrustReciprocityBenchmark
Mean VIF (Location, Gender, Ethnicity, Stakes-ratio)1.181.211.231.19p < 5 indicates no multicollinearity (Greene, 2020)
Shapiro–Wilk p (normality of residuals)0.220.310.180.27p > 0.05 p ⇒ cannot reject normality
Breusch–Pagan p (heteroskedasticity)0.340.290.410.36p > 0.05 p > 0.05 ⇒ homoscedastic errors
Max Cook’s D0.120.10.090.11p < 1 ⇒ no influential outliers
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zumaeta, J.N. Social Preference Parameters Impacting Financial Decisions Among Welfare Recipients. J. Risk Financial Manag. 2025, 18, 408. https://doi.org/10.3390/jrfm18080408

AMA Style

Zumaeta JN. Social Preference Parameters Impacting Financial Decisions Among Welfare Recipients. Journal of Risk and Financial Management. 2025; 18(8):408. https://doi.org/10.3390/jrfm18080408

Chicago/Turabian Style

Zumaeta, Jorge N. 2025. "Social Preference Parameters Impacting Financial Decisions Among Welfare Recipients" Journal of Risk and Financial Management 18, no. 8: 408. https://doi.org/10.3390/jrfm18080408

APA Style

Zumaeta, J. N. (2025). Social Preference Parameters Impacting Financial Decisions Among Welfare Recipients. Journal of Risk and Financial Management, 18(8), 408. https://doi.org/10.3390/jrfm18080408

Article Metrics

Back to TopTop