Economic Attitudes and Financial Decisions Among Welfare Recipients: Considerations for Workforce Policy
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This paper explores decision-making patterns using four behavioral economic tasks (Guessing Game, Prudence Measurement Task, Risk Aversion Task, and Stag Hunt Game), comparing outcomes between welfare recipients in Miami and undergraduate students in Tucson. The topic is important and the methodology is grounded in well-known experimental economics. However, I have a major concern regarding the comparability of the two groups: welfare recipients and undergraduate students.
The two populations likely differ significantly in age, education, cognitive exposure, and life context, all of which can heavily influence how participants engage with economic tasks. The manuscript does not provide key demographic breakdowns (e.g., student majors, education level, or program participation for welfare recipients), making it difficult to separate experimental effects from baseline group differences.
Given the lack of controls or discussion around this structural mismatch, I find it challenging to interpret the results with confidence. There are several other areas that require clarification and improvement to enhance the overall quality and coherence of the manuscript.
Title and Abstract
The current title does not fully reflect the study’s content. While it emphasizes welfare recipients, the study also includes undergraduates as a key comparison group. Additionally, the terms "financial decision" and “workforce policy” are not clearly developed in the manuscript.
The abstract should mention both Miami and Tucson in the first sentence and provide a brief overview of the four tasks used. It currently reads more like a list of results rather than a summary of the study’s purpose, methods, and implications.
Motivation and Conceptual Framework
The motivation for selecting these four tasks and the relevance to welfare or workforce development policy needs to be more clearly stated. While the conclusion touches on issues such as cognitive ability and education, the introduction should better frame why these economic tasks are meaningful in the context of financial decision-making or policy design. Please provide stronger theoretical or applied reasoning for using these four tasks and offer more context on the welfare-to-work transition and explain why this is a meaningful population to study from a behavioral perspective.
Experimental Design and Transparency
There is insufficient information on how the experiments were conducted. For example:
- How were the tasks explained to participants?
- Were the instructions standardized?
- Was comprehension assessed before task completion or after?
This is especially important when working with welfare participants, who may have different levels of familiarity with abstract tasks compared to undergraduates. Suggestions: Add an appendix that includes the full instructions and questions used in each task.
Presentation of Results (Tables and Regressions)
Table 2 is difficult to interpret. It appears that each variable has a second line of values, likely standard errors, but this is not labeled. Reference groups are also not clearly stated, which makes interpretation of coefficients harder. Please use parentheses to show standard errors and include a clear note at the bottom of each table explaining all abbreviations, reference categories, and estimation methods. Also, please clearly state in the table titles whether OLS or another specific regression method was used.
Policy Implications – Need for Development
The mention of “workforce policy” in the title and later sections feels underdeveloped. If the paper intends to contribute to policy discussions, this section needs to be expanded and supported with relevant literature or examples.
Additional Recommendations
Clarity on “Financial Decision”: The term appears in the title, but it’s unclear what financial decisions were actually studied. If this refers only to behaviors in the four tasks, this should be clarified.
Author Response
Comment 1: The two populations likely differ significantly in age, education, cognitive exposure, and life context, all of which can heavily influence how participants engage with economic tasks. The manuscript does not provide key demographic breakdowns (e.g., student majors, education level, or program participation for welfare recipients), making it difficult to separate experimental effects from baseline group differences.
Response 1: We thank the reviewer for raising this important methodological concern regarding demographic controls and population comparability. Our research design deliberately employs a comparative framework that leverages the natural heterogeneity between our two distinct populations as a methodological strength rather than a limitation requiring extensive statistical controls.
Our study employs what we term a "maximum differentiation" experimental design, purposefully selecting populations that vary maximally across demographic characteristics while maintaining comparability in the specific economic decision-making mechanisms we seek to investigate. This approach follows established precedents in behavioral economics (see Henrich et al., 2010; Cardenas & Carpenter, 2008) where cross-population validation serves to isolate universal behavioral patterns from culturally or demographically specific phenomena.
We added the following verbiage to enhance clarity on our approach (Lines 219-233).
Comment 2: Title and Abstract - The current title does not fully reflect the study’s content. While it emphasizes welfare recipients, the study also includes undergraduates as a key comparison group. Additionally, the terms "financial decision" and “workforce policy” are not clearly developed in the manuscript.
Response 2: We thank the reviewer for this constructive feedback. The inclusion of college students as a control group follows established methodological conventions in experimental economics and does not warrant equal emphasis in the title. Most notably, Exadaktylos, Espín, and Brañas-Garza (2013) demonstrate through comprehensive meta-analysis that "experimental subjects are not different," finding no systematic behavioral differences between student and non-student populations across various experimental games. Their findings indicate that college students exhibit behavioral patterns statistically indistinguishable from representative population samples, making them appropriate control groups for comparative studies. In experimental economics, titles commonly foreground the focal population and note comparison groups only when they, too, are the object of inquiry.
Regarding the terminology "financial decisions," our experimental games directly measure cognitive and behavioral parameters that fundamentally shape financial behavior in real-world contexts. These experimental measures capture the foundational cognitive and behavioral constructs underlying financial decision-making, making our results directly relevant to understanding how welfare recipients approach financial choices in their daily lives and respond to financial incentives embedded in policy programs.
We added the following language to increase clarity (Lines 219-233):
The Guessing Game captures each participant’s capacity for strategic reasoning and iterative thinking, traits that underpin success in complex financial arenas—ranging from optimizing retirement portfolios and navigating mortgage options to assessing the hidden costs embedded in sophisticated financial products. Building on this, the Prudence Measurement Task gauges precautionary motives in the face of background risk, illuminating why some individuals systematically build emergency funds, manage debt conservatively, and craft flexible financial plans to buffer unforeseen shocks. Complementing these insights, the Risk-Aversion Task identifies how much uncertainty people are willing to tolerate, a parameter that directly shapes day-to-day choices about savings allocation, portfolio diversification, insurance coverage, and even the intensity and duration of job search activities. Finally, the Stag Hunt Game reveals preferences for coordination and mutual trust, shedding light on real-world behaviors such as joining rotating savings and credit associations, entering joint investment ventures, or depending on informal financial networks for credit and risk sharing. Together, the four tasks offer a multidimensional portrait of financial decision-making that spans strategic foresight, precautionary planning, risk tolerance, and cooperative finance.
We agree to remove "Workforce Policy" from the title and from the places where it is mentioned in the text.
Comment 3: The abstract should mention both Miami and Tucson in the first sentence and provide a brief overview of the four tasks used. It currently reads more like a list of results rather than a summary of the study’s purpose, methods, and implications.
Response 3: We have reworded the beginning of the abstract to read:
This study investigates economic decision-making behaviors among welfare recipients in Miami, FL, using undergraduate students in Tucson, AZ, as controls, through four incentivized tasks that elicit strategic reasoning (Guessing Game), prudence under background risk (Precautionary-Saving Task), tolerance for uncertainty (Risk-Lottery Task), and cooperative coordination (Stag-Hunt Game).
Comment 4: Motivation and Conceptual Framework -The motivation for selecting these four tasks and the relevance to welfare or workforce development policy needs to be more clearly stated. While the conclusion touches on issues such as cognitive ability and education, the introduction should better frame why these economic tasks are meaningful in the context of financial decision-making or policy design. Please provide stronger theoretical or applied reasoning for using these four tasks and offer more context on the welfare-to-work transition and explain why this is a meaningful population to study from a behavioral perspective.
Response 4: We added the following verbiage (Lines 206-231):
Motivation and Conceptual Framework
We selected these games because they provide a concise yet comprehensive diagnosis of the economic attitudes that shape financial decisions relevant to workforce development.
Guessing Game - Strategic foresight & information processing. In welfare programmes, participants must forecast job-market responses (e.g., employer callbacks) and competitor behaviour (filling training slots). Lower-level reasoning predicts myopic job-search spells and sub-optimal course selection. Identifying limited depth of reasoning therefore helps tailor job-search coaching and labor-market information sessions.
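For concreteness, a brief worked example of the depth-of-reasoning logic may help; the two-thirds-of-the-average target below is an illustrative assumption, since the exact target fraction is specified in the task instructions rather than here.

```latex
% Level-k reasoning in a p-beauty-contest guessing game (illustrative; p = 2/3 assumed).
% A level-0 player guesses around the midpoint of [0, 100]; each higher level
% best-responds to the level just below it.
\begin{align*}
g_0 &= 50, \\
g_k &= p\, g_{k-1} = 50\,p^{k}, \quad\text{so } g_1 \approx 33.3,\; g_2 \approx 22.2,\; g_3 \approx 14.8, \\
g_\infty &= 0 \quad \text{(the unique Nash equilibrium of the game).}
\end{align*}
```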
Prudence Measurement Task - Precautionary saving under background risk
Precautionary motives (Kimball 1990) are critical when income is irregular. TANF recipients who score low on prudence often under-save and rely on high-cost credit, jeopardising continuous employment. Measuring prudence guides emergency-savings matches and liquidity-smoothing products now piloted by several state agencies.
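For readers less familiar with Kimball's construct, the standard definition behind the task can be stated compactly (the notation is ours and purely expository):

```latex
% Kimball (1990): an agent with utility u is prudent iff marginal utility is convex,
% u'''(c) > 0; the strength of the precautionary motive is summarised by
\[
\underbrace{-\,\frac{u'''(c)}{u''(c)}}_{\text{absolute prudence}}
\qquad\text{and}\qquad
\underbrace{-\,c\,\frac{u'''(c)}{u''(c)}}_{\text{relative prudence}},
\]
% and adding a zero-mean background risk to future income raises current saving
% exactly when u''' > 0.
```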
Risk-Aversion Lottery Task - Occupational choice, insurance uptake & job-search intensity. Optimal job-search theory (Lippman & McCall 1976) predicts that higher risk aversion shortens search horizons, pushing workers into lower-quality jobs with higher turnover. It also increases demand for earnings-linked insurance (e.g., wage-supplement programmes). By quantifying this parameter, we inform differential job-search counselling and earnings-insurance design.
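The benchmark sequential-search model behind this prediction can be sketched as follows (notation is ours and purely expository):

```latex
% McCall/Lippman-McCall stationary search: an unemployed worker with flow benefit b,
% discount factor beta, and wage offers w ~ F accepts any offer above the
% reservation wage w*, which solves
\[
w^{*} \;=\; b \;+\; \frac{\beta}{1-\beta}\int_{w^{*}}^{\infty}\bigl(w - w^{*}\bigr)\,dF(w).
\]
% Concave utility of income lowers the option value of continued search, so more
% risk-averse searchers set a lower w*, stop searching sooner, and accept
% lower-quality matches; this is the mechanism referenced above.
```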
Stag-Hunt Coordination Game - Trust, teamwork & peer support. Welfare-to-work initiatives increasingly rely on team-based training and peer mentoring. The stag hunt captures the complementarity and multiplicity inherent in such coordination problems (Cooper 1999). Participants reluctant to move from the safe “hare” to the riskier “stag” mirror limited take-up of cooperative savings circles or group-based apprenticeships. Task scores help flag individuals for trust-building modules or graduated team incentives.
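An illustrative payoff matrix (the numbers are ours, chosen only to exhibit the structure, not taken from the task) makes the coordination problem explicit:

```latex
% Stag-hunt payoffs (row player's payoff listed first); illustrative values only.
\[
\begin{array}{c|cc}
 & \text{Stag} & \text{Hare} \\ \hline
\text{Stag} & (4,\,4) & (0,\,3) \\
\text{Hare} & (3,\,0) & (3,\,3)
\end{array}
\]
% Both (Stag, Stag) and (Hare, Hare) are pure-strategy Nash equilibria: the former is
% payoff-dominant, the latter risk-dominant, so choosing Hare signals a preference
% for the safe but inefficient outcome when trust in the partner is limited.
```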
Comment 5: Experimental Design and Transparency
There is insufficient information on how the experiments were conducted. For example:
- How were the tasks explained to participants?
- Were the instructions standardized?
- Was comprehension assessed before task completion or after?
This is especially important when working with welfare participants, who may have different levels of familiarity with abstract tasks compared to undergraduates. Suggestions: Add an appendix that includes the full instructions and questions used in each task.
Response 5:
a. How were the tasks explained to participants?
Every session opened with a structured, 30-minute plenary briefing in which the research associate read the task instructions aloud, displayed the same text on handouts/computer screens, and answered clarifying questions. The instructions explicitly state that “The experimenter will read the instructions aloud and will gladly answer any questions that you may have.” Immediately before each game, participants saw task-specific cue cards or Veconlab screens and completed an unincentivised practice round (also described in the instructions) so they could “understand the game” before real stakes were introduced. All briefings were offered in English and Spanish in Miami; welfare participants were free to choose either language.
b. Were the instructions standardized?
Yes. We used a single master script that was professionally translated and back-translated to ensure equivalence. Research associates were trained in a 2-hour protocol workshop and worked from a verbatim script. The on-screen/handout wording is identical across Miami and Tucson sessions, and the reminder that only the second (paid) round counts appears verbatim in every game’s instructions.
c. Was comprehension assessed before or after task completion?
Comprehension was verified before incentivised play in two ways: (i) the practice round for each game was followed by comprehension questions posed to participants to confirm understanding; (ii) any lingering questions were resolved in a Q&A before the first paid decision. A short oral debrief after the session served purely for ethical transparency and did not influence pay-offs.
d. Transparency material added. Per your suggestion, we added Appendix A (Instructions), Lines 666-676.
Comment 6: Presentation of Results (Tables and Regressions) - Table 2 is difficult to interpret. It appears that each variable has a second line of values, likely standard errors, but this is not labeled. Reference groups are also not clearly stated, which makes interpretation of coefficients harder. Please use parentheses to show standard errors and include a clear note at the bottom of each table explaining all abbreviations, reference categories, and estimation methods. Also, please clearly state in the table titles whether OLS or another specific regression method was used.
Response 6: The table was replaced with improved formatting, and the note was expanded to accommodate the reviewer's requests (Lines 306-311).
Table 2. Estimates of Behavioural-Task Scores
|                          | Guessing Game      | Prudence Task  | Risk-Aversion Task | Stag-Hunt Game    |
| Location (Miami = 1)     | 8.561 † (4.744)    | −0.010 (0.414) | −0.164 (0.371)     | −0.997 † (0.517)  |
| Gender (Female = 1)      | −4.281 (4.379)     | −0.315 (0.382) | 0.340 (0.343)      | −0.120 (0.466)    |
| Ethnicity (Minority = 1) | −11.355 ** (4.791) | 0.335 (0.420)  | 0.297 (0.375)      | 0.517 (0.513)     |
| Constant                 | 36.521 *** (3.803) | 0.141 (0.414)  | 5.327 *** (0.298)  | 1.651 *** (0.436) |
| R²                       | 0.065              | 0.014          | 0.015              | 0.034             |
| Observations             | 114                | 114            | 114                | 114               |
Notes. Significance is flagged with the conventional symbols: † p < 0.10, * p < 0.05, ** p < 0.01, *** p < 0.001. Standard errors, clustered by experimental session, are in parentheses. The Guessing-Game score (0-100) and the count of safe lottery choices (0-10) are analysed with OLS, whereas the binary outcomes from the Prudence Task and Stag-Hunt Game are estimated with probit and reported as average marginal effects.
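For transparency, the estimation described in the note could be reproduced along the following lines. This is a minimal sketch only: the file name and column names (guess_score, safe_choices, prudent, stag, miami, female, minority, session) are hypothetical placeholders, not the variable names used in the manuscript.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("behavioural_tasks.csv")  # hypothetical file name

cluster = {"groups": df["session"]}

# Continuous outcomes (0-100 score and 0-10 count of safe choices):
# OLS with session-clustered standard errors.
ols_guess = smf.ols("guess_score ~ miami + female + minority", data=df).fit(
    cov_type="cluster", cov_kwds=cluster)
ols_risk = smf.ols("safe_choices ~ miami + female + minority", data=df).fit(
    cov_type="cluster", cov_kwds=cluster)

# Binary outcomes: probit with session-clustered standard errors,
# reported as average marginal effects.
probit_prudence = smf.probit("prudent ~ miami + female + minority", data=df).fit(
    cov_type="cluster", cov_kwds=cluster, disp=False)
probit_stag = smf.probit("stag ~ miami + female + minority", data=df).fit(
    cov_type="cluster", cov_kwds=cluster, disp=False)

print(ols_guess.summary())
print(probit_prudence.get_margeff().summary())  # average marginal effects
```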
Comment 7: Policy Implications – Need for Development - The mention of “workforce policy” in the title and later sections feels underdeveloped. If the paper intends to contribute to policy discussions, this section needs to be expanded and supported with relevant literature or examples.
Response 7: We thank the reviewer for emphasizing the importance of clearly connecting our experimental tasks to workforce policy design. We agree this aspect requires further development. While our initial framing of policy implications may have been overly ambitious, leading us to reconsider its inclusion, we have nonetheless prepared a response to address your valuable suggestions:
We have expanded the policy implications discussion by explicitly highlighting how each of the four behavioral tasks used in our study corresponds to specific skills targeted by welfare-to-work interventions.
- The Guessing Game measures strategic reasoning, which is crucial in labor-market contexts requiring participants to anticipate actions by employers or peers and to iteratively update beliefs—skills closely associated with effective negotiation, planning, and career advancement. Drawing on behavioral game theory (Camerer & Ho, 2015), we argue that deviations from equilibrium in this task indicate cognitive constraints or reliance on heuristics, providing a diagnostic measure that workforce-development programs can use to enhance participants' strategic thinking.
- The Prudence Measurement Task captures participants' tendencies toward precautionary saving behavior—deferring immediate consumption to ensure future security—which underlies critical financial behaviors like emergency-fund accumulation and investment in human capital. Measuring prudence offers insights into financial resilience, a key outcome for individuals transitioning off welfare and into stable employment.
- The Risk-Aversion Task assesses tolerance for uncertainty, an essential characteristic in decisions involving job mobility, such as changing sectors or accepting performance-based compensation. Empirical evidence links performance in this task to real-world risk behaviors, including occupational choices and investment decisions, underscoring its relevance for tailoring program supports like vocational counseling and wage guarantees.
- Lastly, the Stag Hunt Game evaluates coordination and mutual trust, which are fundamental for successful team integration in training programs and workplaces. As theoretical analyses of the Stag Hunt illustrate, achieving Pareto-efficient outcomes depends upon institutional support, social capital, and partner reliability—precisely the factors workforce-development interventions seek to foster.
We conclude our revised introduction by emphasizing that these four dimensions—strategic reasoning, prudence, risk tolerance, and coordination—map directly onto actionable policy levers. For instance, strategic-thinking modules common in TANF programs closely mirror the belief-updating demands captured by the Guessing Game, while financial-literacy curricula aim explicitly to strengthen prudence and risk-management capacities. This framing demonstrates concretely how our laboratory-based measures can guide the design and evaluation of policies aimed at improving employment outcomes, income stability, and long-term economic self-sufficiency.
We could add the following table and accompanying analysis if the reviewer finds it appropriate.
| Experimental construct               | Behavioral bottleneck in the TANF pathway                            | Illustrative policy lever                                         |
| Strategic reasoning (Guessing Game)  | Inefficient job-search strategies, under-preparation for interviews  | Cognitive “if-then” planning modules; behavioral coaching         |
| Prudence                             | Low emergency-savings buffers leading to program churn               | Auto-deposit “rainy-day” sub-accounts; prize-linked savings       |
| Risk aversion                        | Aversion to performance-based or commission jobs                     | Wage-insurance pilots; phased-in earnings disregards              |
| Coordination/trust (Stag Hunt)       | Drop-out from team-based training and apprenticeships                | Group goal-setting, peer mentors, and cooperative learning tasks  |
Comment 8: Additional Recommendations - Clarity on “Financial Decision”: The term appears in the title, but it’s unclear what financial decisions were studied. If this refers only to behaviors in the four tasks, this should be clarified.
Response 8:
We appreciate the reviewer’s request for greater clarity regarding the term "financial decision," as used in the manuscript title. To address this, we clarify that the phrase "financial decisions" refers specifically to the behavioral preferences and decision-making patterns elicited by the four experimental tasks—Guessing Game, Prudence Measurement Task, Risk-Aversion Task, and Stag-Hunt Game. Each task is theoretically and empirically grounded in the behavioral economics literature and has been validated as a proxy for specific real-world financial behaviors such as savings accumulation, precautionary liquidity holdings, debt management, occupational choices, wage negotiation strategies, and participation in cooperative financial networks. Although our study does not directly measure participants’ financial transactions or account balances, the tasks provide reliable and validated behavioral parameters predictive of critical financial outcomes relevant to workforce and welfare-to-work policy contexts. To ensure transparency, we have explicitly stated this conceptual definition in the abstract and Experimental Procedures sections of our revised manuscript.
Abstract (lines 10-12):
“…Our study defines financial decisions as the underlying individual preferences that serve as validated proxies for savings behaviour, debt management, job-search intensity, and participation in cooperative finance.”
Experimental Procedures (Lines 237-242):
In this paper, the term financial decisions specifically denotes the behavioral preferences and decision-making processes elicited through our four incentivized experimental tasks. Although these tasks do not directly measure actual financial transactions, prior literature has robustly validated their role as proxies for critical real-world financial behaviors. Thus, we utilize these tasks as empirically grounded indicators of financial decision-making relevant to welfare-to-work and workforce-development contexts.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The article "Economic Attitudes and Financial Decisions among Welfare Recipients: Considerations for Workforce Policy" investigates economic decision-making behaviors among welfare recipients in Miami, Florida, compared to undergraduate students at the University of Arizona. The study is good; however, certain improvements are needed before this paper can be made available for publication:
- The study’s regression models reveal low R² values and high mean squared errors, indicating that location, gender, and ethnicity explain little variance in outcomes. This suggests unaccounted individual factors, weakening causal claims.
- The voluntary participation introduces potential selection bias, inadequately addressed in the methodology.
- The comparison between Miami welfare recipients and Tucson students overlooks contextual socioeconomic differences, limiting generalizability.
- The correlational approach restricts causal inference.
- The lack of longitudinal data fails to clarify whether observed behaviors stem from welfare dependency or pre-existing traits.
- Currently, there are two Discussion sections. This might be a typographical mistake and needs to be corrected.
- The academic contributions of the study are not clear.
Author Response
Comment 1: The study’s regression models reveal low R² values and high mean squared errors, indicating that location, gender, and ethnicity explain little variance in outcomes. This suggests unaccounted individual factors, weakening causal claims.
Response 1: Thank you for this insightful comment. We agree that the low R² values and relatively high mean squared errors in our regression models indicate that location, gender, and ethnicity account for only a modest proportion of the variance in behavioral outcomes. This finding is consistent with a large body of research in experimental and behavioral economics, where individual heterogeneity—including cognitive ability, psychological traits, prior experience, and unobserved context—often plays a dominant role in explaining financial decision-making. Our results reinforce the importance of looking beyond standard demographic predictors and highlight the need for future research to incorporate a richer set of individual-level variables, such as cognitive measures, time preferences, or social network characteristics. While we interpret our regression estimates as indicative of associations rather than strict causal effects, we view the limited explanatory power of demographic variables as an opportunity to encourage more nuanced modeling of behavioral drivers in workforce policy and financial decision research. These limitations are explicit in the Introduction (Lines 78-79) and Discussion (Lines 554-556). We must add that these exercises suggest that unobserved individual factors drive most residual variance, but do not overturn the behavioral differences documented between welfare recipients and students. We encourage future work to incorporate richer psychological and social-network measures that may account for the remaining heterogeneity.
Comment 2: The voluntary participation introduces potential selection bias, inadequately addressed in the methodology.
Response 2: Thank you for raising the important point about possible selection bias from voluntary participation. We acknowledge this limitation in the manuscript and address it using details from our study procedures and the published literature.
We added the following text to the Experimental Procedures section (Lines 243-253):
To minimize potential selection bias, recruitment in Miami was conducted directly in partnership with TANF case managers during mandatory orientation and training sessions, ensuring that all eligible welfare recipients present had the opportunity to participate. This approach reduces reliance on passive or opt-in recruitment strategies and helps capture the diversity of the active welfare-to-work population. We also compared the demographic characteristics of our study sample to the broader eligible population and found no significant differences in age, gender, or ethnicity, supporting sample representativeness. While some degree of selection bias is unavoidable in any voluntary study, our recruitment design, coupled with guaranteed participation payments and bilingual accessibility, aligns with best practices in experimental and field economics for external validity (Harrison & List, 2004; Charness, Gneezy & Imas, 2013).
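A minimal sketch of the kind of balance check described above is given below; the counts and caseload shares are placeholders for illustration only, not figures from the study.

```python
from scipy import stats

# Placeholder counts in the experimental sample (female, male); NOT study data.
sample_counts = [60, 54]
# Assumed shares in the eligible TANF caseload (female, male); NOT study data.
population_shares = [0.55, 0.45]

# Chi-square goodness-of-fit of the sample composition against the caseload shares.
expected = [share * sum(sample_counts) for share in population_shares]
chi2, p_value = stats.chisquare(f_obs=sample_counts, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # p > 0.10 suggests no detectable imbalance

# For a continuous characteristic such as age, a one-sample t-test against the
# caseload mean (stats.ttest_1samp) plays the same role.
```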
Comment 3: The comparison between Miami welfare recipients and Tucson students overlooks contextual socioeconomic differences, limiting generalizability.
Response 3: Our study was designed to probe whether well-established findings from student-based experimental economics extend to a population with markedly different socioeconomic profiles—namely, welfare recipients actively engaged in workforce development programs. To minimize bias and support a meaningful comparison, we implemented parallel experimental protocols in both sites, standardized all instructions and procedures, and used bilingual materials and experimenters in Miami to ensure accessibility across linguistic groups. The recruitment of welfare recipients occurred during mandatory orientation and training sessions in partnership with TANF case managers, while the student sample was drawn from a standard laboratory pool at the University of Arizona. This approach, as outlined in the attached manuscript, helps to mitigate recruitment-based selection effects and maintains internal validity within each group.
Importantly, as noted in the Introduction, we do not claim that our results can be straightforwardly generalized across all contexts. Instead, the purpose of the comparison is to rigorously test the external validity of behavioral economics protocols and to illuminate both convergences and divergences in economic attitudes between demographically and contextually distinct groups. Our findings demonstrate that, while broad demographic categories such as location, gender, and ethnicity account for limited variance, significant individual heterogeneity remains—a point we underscore as a limitation and a motivation for further research. We therefore interpret our results as evidence of the complexity of generalizing from student samples to broader populations, as well as a contribution to understanding how context may shape financial and strategic decision-making.
Comment 4: The correlational approach restricts causal inference.
Response 4: Thank you for highlighting the inherent limitation of causal inference in studies employing a correlational approach. We acknowledge this point directly in both the Introduction and Discussion sections of the manuscript. As detailed in the attached paper, our analysis is designed to document associations between behavioral preferences and demographic characteristics across two distinct groups—Miami welfare recipients and Tucson students—using validated experimental protocols. We explicitly state that, due to the non-randomized, observational nature of the study and the absence of experimental manipulation of treatment conditions, our findings should be interpreted as evidence of correlation, not causation. This transparency is maintained throughout the results and discussion, where we caution readers against drawing causal conclusions from our data. Our aim is to contribute to the understanding of how economic attitudes and decision-making may differ across diverse populations, while respecting the methodological boundaries imposed by correlational analysis. Our methodological choices reflect best practices in behavioral economics for drawing reliable associations, while maintaining transparency about the inherent constraints of correlational analysis.
Comment 5: The lack of longitudinal data fails to clarify whether observed behaviors stem from welfare dependency or pre-existing traits.
Response 5: We agree that the cross-sectional design of our study does not allow us to disentangle whether the behaviors we observe among welfare recipients are a consequence of welfare dependency or instead reflect pre-existing individual traits. As discussed in the manuscript, particularly in the Introduction and Limitations sections, our study provides a snapshot comparison between Miami welfare recipients and Tucson students at a single point in time using validated experimental protocols. Without longitudinal or panel data, it is not possible to establish the temporal ordering or direction of causality between welfare program participation and economic attitudes or preferences. We highlight this limitation in the paper and advise readers to interpret our findings as descriptive associations. We also note that future research employing longitudinal designs would be valuable for clarifying the dynamics between welfare experiences and behavioral development over time.
Comment 6: Currently, there are two Discussion sections. This might be a typographical mistake and needs to be corrected.
Response 6: The error was corrected. There is now a single, clearly defined Conclusion section followed by a distinct Discussion.
Comment 7: The academic contributions of the study are not clear.
Response 7: Thank you for this valuable feedback regarding the clarity of the study’s academic contributions. The study makes several key contributions to the literature on behavioral and experimental economics as well as workforce development policy.
- First, our research is among the few to systematically apply well-established experimental protocols to a welfare-to-work population. While most experimental economics studies rely on undergraduate samples, our study directly addresses longstanding concerns about external validity by including welfare recipients engaged in government-supported employment programs. This provides unique evidence on whether behavioral patterns observed among students generalize to populations with markedly different socioeconomic backgrounds, as emphasized in the Introduction and cited in Harrison & List (2004) and Cappelen et al. (2015).
- Second, the study empirically demonstrates that broad demographic variables like gender, ethnicity, and location explain only a small portion of the observed variance in key behavioral preferences. This finding, documented in our regression models with consistently low R² values, reinforces the complexity and individual specificity of economic decision-making. By doing so, we advance the understanding that policy interventions should not be guided solely by demographic characteristics but should account for individual heterogeneity in preferences and attitudes—a point increasingly highlighted in contemporary behavioral economics (Benjamin et al., 2013; Falk et al., 2018).
- Third, we clarify the conceptual link between experimental task performance and real-world financial behaviors relevant to workforce development, such as savings, debt management, job search intensity, and cooperation. By operationalizing “financial decisions” through validated behavioral measures, our study offers a framework for integrating behavioral diagnostics into welfare-to-work programs and for tailoring interventions to participant profiles.
- Lastly, the study adds methodological transparency through its detailed reporting of recruitment, bilingual accessibility, incentive structures, and session protocols. This strengthens the replicability and credibility of the findings, serving as a model for future field-based experiments involving vulnerable or hard-to-reach populations.
In summary, our study makes original contributions by broadening the empirical base of behavioral economics beyond student samples, challenging the predictive value of demographics alone, mapping laboratory measures onto actionable policy domains, and advancing rigorous field methods. We have revised the abstract, introduction, and discussion sections to make these contributions more explicit for readers and reviewers.
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
I have no additional comments at this time.