Article

Diagnosing Structural Change in Digital Interventions: A Configurational Evaluation Framework

1 Banyan Academy of Leadership in Mental Health, Chennai 600037, India
2 Sattva Consulting, Bengaluru 560071, India
3 AdagioVR, Gurgaon 122003, India
* Author to whom correspondence should be addressed.
Information 2025, 16(9), 714; https://doi.org/10.3390/info16090714
Submission received: 9 July 2025 / Revised: 17 August 2025 / Accepted: 20 August 2025 / Published: 22 August 2025

Abstract

Digital interventions are widely promoted as levers of institutional change, yet their effects often prove fragile. We examine why some interventions persist while others fade. Using crisp-set Qualitative Comparative Analysis (csQCA) on 13 large-scale cases from India and abroad, we identify the configurations of conditions under which digital systems become self-sustaining. We conceptualise persistence as a shift in the Nash equilibrium: when incentives realign, the new behaviour maintains itself without continuing external push. The analysis shows that software openness is neither necessary nor sufficient for durable change. Instead, six non-technological conditions—regulatory enablement, a credible revenue model, substantial scale, a clearly targeted systemic barrier, presence of enabling prerequisites, and sufficient time—are each necessary and, in combination, sufficient for an equilibrium shift; no single condition is enough on its own. Successful cases (e.g., Aadhaar, UPI, Chalo, Swiggy) meet these conditions in combination, whereas others (e.g., ONDC, DIKSHA, ICDS-CAS) illustrate how missing elements limit institutional embedding. The paper contributes a theory-informed diagnostic that links game-theoretic stability to configurational evaluation and provides practical “if–then” decision rules for appraisal. We argue that policy and investment decisions should prioritise incentive-compatible ecosystems over software attributes, and judge success by whether interventions reconfigure the rules of the game rather than by short-term uptake. This perspective clarifies when digital systems can contribute to sustainable, inclusive institutional transformation.

1. Introduction

Low- and middle-income countries (LMICs) in Asia and Africa are grappling with how to stimulate their economies and provide essential services to their citizens, while addressing the challenges of inequality. In this context, the significant increase in mobile phone ownership and usage across much of Asia and Africa, supported by dense networks of telecommunication towers and optical fibre, has been viewed with great optimism. It offers the possibility that technology, if utilised effectively, could provide a potential solution to the economic and service challenges faced by these regions. One way to achieve this would be to utilise these technologies to develop lower-cost and more accessible solutions. However, an even more powerful and complementary approach might be to harness these technologies to transform the very nature of business and service provision by reducing fundamental barriers, such as high transaction costs, mistrust, and information gaps. We refer to this approach as shifting the Nash. A Nash equilibrium [1] is one in which no actor has any incentive to change their strategy given what the other actors are doing. One can shift the Nash by altering the structure of incentives so that, while each actor continues to act in their own best interest, the resulting equilibrium is superior to the previous one.
While Nash [1] originally formalised equilibrium in the context of static, one-shot games, subsequent work has extended the concept to evolutionary and institutional settings. Smith and Price [2] introduced the idea of an evolutionarily stable strategy (ESS), while Taylor and Jonker [3] modelled dynamic adjustment processes such as replicator dynamics. More recently, applications of evolutionary game theory in domains of public governance, including e-commerce regulation [4], decision-making in mega projects [5], and elderly healthcare partnerships [6], demonstrate that Nash-derived stability concepts can illuminate how interventions reshape incentives in complex social systems. Our use of the phrase, shifting the Nash, follows this tradition, treating it as a heuristic for durable institutional stability rather than a purely static abstraction.
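The notion of shifting the Nash can be illustrated with a stylised two-player coordination game. The sketch below is ours, not drawn from any case in the paper; the payoffs are purely illustrative, with action 0 as the legacy behaviour and action 1 as adoption of the new system:

```python
from itertools import product

def pure_nash(payoffs):
    """Return pure-strategy Nash equilibria of a 2x2 game.

    payoffs[(a, b)] = (payoff to player 1, payoff to player 2)
    for actions a, b in {0, 1} (0 = legacy behaviour, 1 = new behaviour).
    """
    equilibria = []
    for a, b in product((0, 1), repeat=2):
        p1, p2 = payoffs[(a, b)]
        # (a, b) is an equilibrium if neither player gains by deviating unilaterally
        best_a = p1 >= payoffs[(1 - a, b)][0]
        best_b = p2 >= payoffs[(a, 1 - b)][1]
        if best_a and best_b:
            equilibria.append((a, b))
    return equilibria

# Before the intervention: adopting alone is costly, so the legacy
# outcome (0, 0) is the only stable configuration.
before = {
    (0, 0): (2, 2), (0, 1): (2, 0),
    (1, 0): (0, 2), (1, 1): (1, 1),
}

# After the intervention: transaction costs fall, so even unilateral
# adoption pays (3 vs 2) and mutual adoption pays most of all. The
# equilibrium shifts, and reverting to legacy behaviour is irrational.
after = {
    (0, 0): (2, 2), (0, 1): (2, 3),
    (1, 0): (3, 2), (1, 1): (4, 4),
}

print(pure_nash(before))  # [(0, 0)]
print(pure_nash(after))   # [(1, 1)]
```

In the second game no actor can improve their payoff by reverting, which is the self-reinforcing stability the paper refers to as a shifted Nash.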
The construction of physical public infrastructures (PPIs) reduces transaction costs and facilitates market entry and cooperation, as evident in rural road networks. It expands production possibilities, as electricity grids have done, and facilitates the building of trust and collaboration by allowing for repeated interactions, as demonstrated in public transportation systems. It can thus be seen that well-designed and well-placed PPIs can shift the Nash. The availability of these new technologies presents the possibility that large-scale digital interventions, which can also shift the Nash [7], could be established at a significantly lower cost and more quickly than PPIs. Driven by this belief, many Asian and African nations have embarked on a journey to invest in designing and implementing digital interventions to expand access to services, improve transparency, and catalyse institutional change. However, their contributions to development outcomes, whether social, economic, or institutional, are uneven and often short-lived. While existing studies have explored digital inclusion, adoption rates, and user experience, there is a limited understanding of the specific institutional configurations under which digital tools generate structural transformation.
This paper addresses this gap by introducing a game-theoretic framework rooted in Nash equilibrium logic to examine when digital interventions shift the underlying behaviour of actors in a system. Rather than evaluating isolated outputs (e.g., number of users or transactions), we ask: Under what institutional and contextual conditions do digital interventions in LMICs lead to durable systemic change that contributes to development outcomes? Where much of the Information Technology for Development (ITD) literature evaluates process outcomes or user interfaces, we shift the evaluative focus to structural incentives and their self-reinforcing stability.
We draw on the ITD literature to position digital interventions not just as technologies, but as policy instruments capable of altering incentives, reducing coordination failures, and unlocking inclusive development. The field of ITD has evolved from a narrow focus on technical deployment to a more nuanced understanding of institutional transformation, sociotechnical systems, and long-term sustainability. Foundational studies have shaped the theoretical landscape, emphasising the importance of development discourses [8], sociotechnical integration [9], and the amplifying (not substituting) role of technology [10]. Walsham [11] offers a reflective overview of how ITD scholarship has evolved, highlighting ongoing challenges related to equity, context sensitivity, and structural change.
Building on these foundations, more recent scholarship in ITD has shifted its focus toward governance architectures, sustainability, and the political economy of platform implementation. Some interrogate the inclusion-related challenges associated with digital identity systems, such as Aadhaar [12,13], while Veeraraghavan [14] introduces the concept of “governance by patching” (of software) used by senior bureaucrats to control the behaviour of front-line government functionaries. Several authors provide empirical evidence on the social and contextual barriers to digital inclusion through e-government and suggest ways to sustainably address them [15,16,17]. Zelenkov and Lashkevish [18] empirically demonstrate how ITD affects human development outcomes, reinforcing the importance of structural conditions in digital transformation.
Our study builds on that body of work by introducing a game-theoretic perspective—specifically, the concept of a Nash equilibrium—to evaluate whether a digital intervention creates a new, self-reinforcing configuration of incentives and roles. Using crisp-set Qualitative Comparative Analysis (csQCA) [19,20,21,22,23], we identify the specific combinations of conditions under which such structural transformations occur. Our work responds to the rich literature discussed above, which finds enormous heterogeneity and significant gaps in the development outcomes associated with large-scale digital interventions, but does not provide a systematic analysis of the mechanisms that may be driving these differences. While sociotechnical systems theory emphasises mediation between users and systems, our use of Nash equilibrium foregrounds the stability of changed incentives across all actors.
While much of the ITD literature has focused on adoption, usability, or access, this study reframes large-scale digital interventions as mechanisms of equilibrium change. Drawing from game theory, it complements sociotechnical analyses [9] by emphasising how incentive structures, not just tools, shape lasting change. This approach resonates with but also extends critiques in the data justice literature [13] and infrastructure governance [14], offering a systematic framework for anticipating when digital interventions will be self-reinforcing rather than ephemeral. Our approach provides practical utility for implementers and investors seeking to design digital systems that are not only accessible but also institutionally sustainable.
Beyond their immediate institutional impact, digital public interventions should also be understood within the broader trajectory of sustainable development. As argued by Klarin [24], the concept of sustainability has evolved from its early environmental focus to encompass institutional, economic, and social dimensions. Manioudis and Meramveliotakis [25] further emphasise that sustainable development requires a political economy foundation, where public policies create enabling conditions for long-term systemic change. Framing digital interventions as potential shifts in the Nash equilibrium underscores their contribution to sustainability, as they alter incentive structures in ways that can make new, more inclusive practices self-reinforcing.

2. Methods

QCA is an approach based on Boolean algebra that allows researchers to examine a relatively small number of cases (or complex interventions) to ascertain the combinations of factors that lead to success or failure [22]. QCA is particularly suited to small-N studies seeking to identify necessary and sufficient conditions for complex outcomes [20,22]. Rather than estimating average effects, it focuses on the combinations of factors (configurations) that produce an outcome. Hanckel and colleagues [26] in their systematic review point out that QCA methods, which have been developed for use with small and medium sample sizes (typically ranging from 10 to 50 cases), have been “widely used in public health research” and have “advantages over probabilistic statistical techniques for examining causation where systems and/or interventions are complex”.
In operationalising QCA for this study, a key step involved calibrating qualitative information into binary conditions. As Ragin [23] highlights, this act of “quantifying the qualitative” is not an optional add-on but rather central to the method’s comparative logic. Our calibration process combined structured coding rules with team consensus, ensuring consistency across cases. This form of consensus-based calibration has been widely used in organisational and public policy applications [19,26,27]. While simplification is unavoidable, such transparency and comparability are precisely what allow QCA to uncover causal configurations rather than isolated correlations.
In this study, csQCA allows us to map digital interventions into binary-coded attribute sets and identify patterns that lead to self-sustaining systemic change. While subjectivity in binary coding is acknowledged, we mitigate this by triangulating expert judgement, public documentation, and supplemental case material for each intervention. Our coding logic is transparent, replicable, and theoretically grounded. See Supplementary Materials for details of each case and for case-level calibration details.
In this paper, we follow the six-step approach towards QCA outlined by Greckhamer [22] in the SAGE Handbook of Survey Development and Application.
  • Articulate research question(s).
  • Construct a purposeful sample of cases and collect data regarding outcome and key case attributes expected to be causally linked to outcome.
  • Transform the dataset into crisp or fuzzy sets by defining sets and calibrating case membership in sets.
  • Construct a Truth Table.
  • Analyse the Truth Table to identify set relations between attributes and outcome(s).
  • Evaluate, interpret, and represent findings as well as their robustness.
The first two steps are included here in the methods section, the next two in the results section, and the final two in the discussion section.

2.1. Research Question

The core research question of this study is to determine the set of factors associated with digital interventions that have successfully shifted the Nash in the regions where they were implemented. Shifting the Nash in this study is interpreted to mean a sustained and permanent improvement in the well-being of a population directly attributable to the digital intervention.

2.2. Purposive Sampling of Cases with Outcomes and Key Attributes

For our analysis, we identified multiple large-scale digital interventions, regardless of their ownership or funding patterns. These were selected from various sources, including the registry maintained by the Digital Public Goods Alliance [28], e-commerce platforms, and specialised platforms that operate in sectors such as health, education, transportation, and finance. The availability of data on these digital interventions was a crucial selection factor. The authors collectively chose a mix of digital interventions developed and implemented internationally, as well as those created in India, featuring sectoral diversity, varying scales of implementation, and, importantly, the availability of information for analysis. Table 1 lists the thirteen digital interventions that emerged from this selection process.
Data for each intervention were gathered from multiple sources, including publicly available reports, data shared by the intervention teams, discussions with experts, and the authors’ own knowledge regarding each intervention. Wherever formal data sources were consulted, they have been referenced in the description of the intervention. Reliance on other informal sources limits the full replicability of this study; however, this is not a limitation that can be easily overcome, given the scarcity of data available in the public domain. Another significant limitation is the survivorship bias, i.e., except for the case of ICDS-CAS, all the other cases continue to exist for one reason or another. They could, therefore, automatically satisfy one or more of the attributes. Examining a much larger list of digital initiatives, including those that have since ceased to exist, would be a significantly more extensive exercise—beyond the scope of this paper.
Where available, we also drew on basic quantitative indicators such as adoption, usage, or coverage rates (for example, UPI transaction volumes, DIKSHA enrolment numbers, and DHIS2 country coverage). These figures provided useful context but were not decisive in the crisp-set QCA calibration, which followed the logic of necessary and sufficient conditions. Although the depth of available evidence varied across cases, each was coded consistently against the same set of conditions, ensuring comparability of results.
Each digital intervention identified in the paper is discussed in detail in the Supplementary Materials, and each one’s ability to (a) remove systemic and structural barriers, (b) ensure sustained well-being for all, (c) add value in a specific domain, and (d) shift the Nash is explored under the following headings:
  • Description: A concise overview of the digital intervention, outlining its purpose, institutional ownership (public, private, or donor-backed), technical architecture (e.g., open source), geographic scope, and primary functionalities. This section typically answers the following questions: What is the intervention? Who created it? What is its intended purpose?
  • Removal of Systemic and Structural Barriers: An assessment of how effectively the intervention addresses deep-rooted inefficiencies, exclusions, or frictions in existing systems. These barriers might include fragmented data systems, a lack of transparency or accountability, geographical, linguistic, economic, or technological access constraints, and poor information flows. This section also critically evaluates whether the right barrier was identified and targeted.
  • Sustained Well-being for All: A measure of whether the intervention leads to long-term, inclusive, and equitable improvements in well-being. This includes evidence of large-scale and enduring benefits, inclusivity across gender, geography, class, and ability, as well as integration into public systems for continuity. It differentiates between the scale of use and the depth or durability of impact.
  • Value Added: The distinct contributions made by the intervention that would not have existed otherwise. This can encompass improvements in service delivery efficiency or reach; the generation of actionable data or insights; enhanced decision-making or cost savings; and tangible outcomes in the target domains (e.g., health, education, governance).
  • Ability to Shift the Nash Equilibrium: The intervention’s potential to create a self-sustaining, system-wide behavioural change where all actors (e.g., users, providers, regulators) have rational incentives to adopt and continue using it. A shifted Nash implies that legacy systems are retired or replaced; behavioural norms and institutional practices have permanently altered; and there is clear local ownership and operational continuity. An intervention that merely coexists with existing practices or is dependent on external funding fails this test.
  • Conclusion: A summative evaluation that synthesises findings from all prior sections and assesses the intervention’s long-term viability; ownership, financing, and policy alignment; whether it meets the necessary conditions to shift the Nash; and whether the intervention adds systemic value and is likely to endure once external support is withdrawn.
The conclusion for each intervention examines how many of the following seven attributes were present:
  • Open-Source Code (Yes = 1; No = 0): Whether the digital intervention’s codebase is publicly accessible and can be freely used, modified, and distributed. This attribute indicates whether the technology is open source (Yes = 1) or proprietary (No = 0), based on statements regarding licensing, public access, or open infrastructure.
  • Regulatory Enablement (Yes = 1; No = 0): Whether a supportive policy or regulatory framework enables the success or operation of the intervention. This includes mandates that enforce adoption, legal provisions that create use-cases (e.g., e-signatures, health records), and government endorsement or integration. A value of 1 implies clear regulatory backing, while a value of 0 indicates the absence of such support.
  • Revenue Model in Place (Yes = 1; No = 0): Whether the intervention has an identifiable and sustainable financial model. This includes public sector budgetary allocations, private sector monetisation strategies, and donor or philanthropic support with plans for long-term funding. A value of 1 indicates that a credible model is described; 0 indicates that funding is ad hoc or uncertain.
  • Substantial Scale Achieved (Yes = 1; No = 0): Whether the intervention has reached a large number of users or geographic spread in a sustained way. Scale may be indicated by nationwide rollout, millions of users, or integration into routine government operations. A value of 1 indicates that the scale has been clearly demonstrated; 0 implies pilots, small rollouts, or stagnant reach.
  • Identifiable Systemic Barrier it Seeks to Eliminate (Yes = 1; No = 0): Whether the intervention targets a clearly articulated and systemic structural barrier, such as data fragmentation, lack of portability, absence of real-time information, or institutional inefficiency. A value of 1 implies that the intervention is specifically designed to eliminate this barrier.
  • Presence of Pre-requisites (Yes = 1; No = 0): Whether the successful implementation of the intervention depends on external pre-conditions already being in place. These could include smartphone penetration, the availability of a trained workforce, digital literacy, and internet connectivity. A value of 1 indicates that such enabling factors are present; a value of 0 indicates that critical dependencies are absent.
  • Sufficient Time for Implementation (Yes = 1; No = 0): Whether the intervention has been in operation long enough to allow for full deployment, feedback integration, and measurable impact. A value of 1 implies multiple years of implementation (typically five or more years); a value of 0 means the intervention is too new or still evolving.
The outcome measure was whether the intervention had shifted the Nash equilibrium (Yes = 1; No = 0). This parameter captured whether the intervention had successfully shifted the underlying Nash equilibrium of the system, i.e., whether all actors (users, providers, institutions) now have a stable, self-reinforcing incentive to use the new system, making reversion to the previous state irrational. This outcome measures systemic transformation, not just uptake or effectiveness—it captures whether the digital intervention has changed the game for good.
A value of 1 (Yes) implies that the intervention has replaced or superseded legacy systems or informal workarounds, the new system is institutionally embedded and locally owned, stakeholders (including frontline workers, administrators, and end-users) are now behaving differently due to the incentives and norms created by the system, and the change is sustainable without external push (e.g., donor support or government mandates alone). A value of 0 (No) indicates that the system coexists with legacy arrangements or remains underutilised. In this case, stakeholders revert to old behaviours when incentives lapse, the intervention lacks ownership, stable financing, or regulatory integration, and the intervention is perceived as a pilot, donor-driven, or non-essential.
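As a concrete illustration of the crisp-set coding scheme described above, the sketch below packages a single hypothetical case into a binary row. The attribute names and the example values are ours for illustration only and do not reproduce the paper's Table 2:

```python
# The seven binary attributes described in the text, plus the outcome.
ATTRIBUTES = [
    "open_source",        # Open-Source Code
    "regulatory",         # Regulatory Enablement
    "revenue_model",      # Revenue Model in Place
    "scale",              # Substantial Scale Achieved
    "systemic_barrier",   # Identifiable Systemic Barrier it Seeks to Eliminate
    "prerequisites",      # Presence of Pre-requisites
    "sufficient_time",    # Sufficient Time for Implementation
]

def code_case(name, values, outcome):
    """Validate and package one case as a crisp-set row (all values 0 or 1)."""
    if set(values) != set(ATTRIBUTES):
        raise ValueError(f"{name}: expected exactly the seven attributes")
    if any(v not in (0, 1) for v in values.values()) or outcome not in (0, 1):
        raise ValueError(f"{name}: crisp-set values must be 0 or 1")
    return {"case": name, **values, "outcome": outcome}

# Hypothetical row: a successful proprietary intervention.
row = code_case(
    "ExampleCase",
    {
        "open_source": 0, "regulatory": 1, "revenue_model": 1,
        "scale": 1, "systemic_barrier": 1, "prerequisites": 1,
        "sufficient_time": 1,
    },
    outcome=1,
)
print(row["outcome"])  # 1
```

Forcing every entry to be 0 or 1 is what makes the sets "crisp"; the judgement lies in the coding rules behind each value, as documented in the Supplementary Materials.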
In this study, we distinguish between three related but distinct concepts:
  • Well-being refers to improvements in individual or collective outcomes (e.g., health, income, education). These are the goals of development, but they may arise from short-term programs or temporary boosts in service access.
  • Structural change refers to the reconfiguration of formal and informal rules, roles, or institutional capacities that shape how services are delivered or regulated. Structural changes may be imposed top-down or may emerge gradually and may not always persist.
  • Shifting the Nash equilibrium refers to a specific subset of structural change—a self-reinforcing reconfiguration of actor incentives. In game-theoretic terms, a new equilibrium exists when no actor (e.g., user, provider, regulator) has an incentive to revert to prior behaviours, given what others are doing. This implies local ownership, institutional embedding, and ongoing functionality without external enforcement.
Thus, an intervention may improve well-being or enact structural change without shifting the equilibrium if it relies on coercion, subsidy, or novelty. By contrast, an equilibrium shift marks a durable transformation in how a system behaves.
The descriptions for each intervention and the entries in the attribute table were developed collaboratively by all three authors, who arrived at a shared consensus (see Supplementary Materials). The authors’ use of judgement instead of formal quantitative parameters to code the attribute table may raise concerns regarding generalisability. While judgement has indeed been applied, it is essential to note that the rationale for our coding is thoroughly documented in the Supplementary Materials. The expectation in QCA is for transparent and theory-guided calibration, rather than an exclusive reliance on quantification.
A glossary of acronyms and key terms is provided in the Supplementary Materials.

3. Results

3.1. Classify the Data into Crisp Sets

While a dataset of 13 interventions limits statistical generalisation, as discussed above, QCA is designed precisely for such small-N conditions. The objective is not predictive inference, but the identification of causal patterns through logical sufficiency and necessity. In QCA, a crisp set refers to a type of set membership in which each case is assigned a binary value—either fully in (1) or fully out (0) of a set. Crisp set QCA (csQCA) is the original form of QCA introduced by Charles Ragin and is often used when the number of cases is low, as in this study, and it is possible to dichotomise attributes [23]. Table 2 presents the crisp sets into which all the cases in this study were classified.

3.2. Construct a Truth Table

In QCA, a Truth Table is a structured tabular representation that lists all logically possible combinations of attributes and indicates the outcome with which each configuration is associated [23]. Given that we have seven attributes, each with two possible values (zero or one), the total number of possible configurations is 2⁷ = 128. However, it is evident from Table 2 that only seven combinations have associated cases. The seven combinations and their associated outcomes are presented in Table 3.
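The construction of a Truth Table from coded cases can be sketched in a few lines of Python. The two coded cases below are hypothetical and are not taken from Table 2:

```python
from itertools import product

ATTRIBUTES = ["open_source", "regulatory", "revenue_model", "scale",
              "systemic_barrier", "prerequisites", "sufficient_time"]

def truth_table(cases):
    """Group coded cases by attribute configuration.

    Returns {configuration tuple: set of observed outcomes}. Only
    configurations with at least one observed case appear in the table.
    """
    table = {}
    for case in cases:
        config = tuple(case[a] for a in ATTRIBUTES)
        table.setdefault(config, set()).add(case["outcome"])
    return table

# Two hypothetical coded cases (illustrative values only).
cases = [
    {"open_source": 1, "regulatory": 1, "revenue_model": 1, "scale": 1,
     "systemic_barrier": 1, "prerequisites": 1, "sufficient_time": 1,
     "outcome": 1},
    {"open_source": 0, "regulatory": 1, "revenue_model": 1, "scale": 1,
     "systemic_barrier": 1, "prerequisites": 1, "sufficient_time": 1,
     "outcome": 1},
]

table = truth_table(cases)
print(len(list(product((0, 1), repeat=len(ATTRIBUTES)))))  # 128 possible rows
print(len(table))  # 2 observed configurations
```

In the study itself, only 7 of the 128 possible configurations are populated, which is typical of small-N QCA: the analysis reasons over the observed rows rather than the full logical space.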

4. Discussion

4.1. Analyse the Truth Table

The most important finding from the Truth Table (Table 3) is that whether the code used in the digital intervention is open-source or not appears to be irrelevant—the use of open-source code is neither necessary nor sufficient for the success of an intervention. All the other attributes, i.e., regulatory enablement, presence of a revenue model, substantial scale, identification of a relevant systemic barrier, presence of prerequisites, and sufficient time, are all necessary and collectively sufficient, but by themselves are not enough to ensure the shift in the Nash that we seek to achieve through the intervention.
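The necessity and sufficiency tests underlying this finding can be sketched as simple set-theoretic checks. The two cases below are hypothetical, chosen only to illustrate the logic (a success without open-source code, and a failure that has a revenue model but lacks scale):

```python
SIX_CONDITIONS = ["regulatory", "revenue_model", "scale",
                  "systemic_barrier", "prerequisites", "sufficient_time"]

def is_necessary(condition, cases):
    """Necessary: every case exhibiting the outcome also exhibits the condition."""
    successes = [c for c in cases if c["outcome"] == 1]
    return bool(successes) and all(c[condition] == 1 for c in successes)

def is_sufficient(conditions, cases):
    """Jointly sufficient: every case exhibiting all the conditions has the outcome."""
    matching = [c for c in cases if all(c[k] == 1 for k in conditions)]
    return bool(matching) and all(c["outcome"] == 1 for c in matching)

cases = [
    # Hypothetical success without open-source code.
    {"open_source": 0, "regulatory": 1, "revenue_model": 1, "scale": 1,
     "systemic_barrier": 1, "prerequisites": 1, "sufficient_time": 1,
     "outcome": 1},
    # Hypothetical failure: revenue model present, scale absent.
    {"open_source": 1, "regulatory": 1, "revenue_model": 1, "scale": 0,
     "systemic_barrier": 1, "prerequisites": 1, "sufficient_time": 1,
     "outcome": 0},
]

print(is_necessary("open_source", cases))       # False: a success lacks it
print(is_sufficient(SIX_CONDITIONS, cases))     # True: the conjunction suffices
print(is_sufficient(["revenue_model"], cases))  # False: one condition alone fails
```

With the paper's actual Table 2 codings in place of these toy rows, the same checks reproduce the headline result: each of the six conditions passes the necessity test, their conjunction passes the sufficiency test, and no single condition does on its own.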
There are some interesting additional findings from the Truth Table (Table 3). It can be seen from the table that all failed models had either a revenue model or scale, but not both. Additionally, ICDS-CAS (and its surviving successor, the POSHAN Tracker), despite all the support they had and continue to have, failed to shift the Nash, which in this case would mean improved nutrition outcomes for children and a reduced workload for nutrition workers. The reason is that they failed to identify the relevant systemic barrier(s), which could include a lack of trust in the nutrition worker and a lack of understanding of why she has been unable to deliver the desired nutrition outcomes.

4.2. Interpret the Findings from the Truth Table

The finding that the use of open-source software (OSS) is not central to the success of digital initiatives that shift the Nash may appear surprising, given that it is considered to be a definitional part of a digital public infrastructure and a digital public good [28]. However, several risks are associated with the use of OSS, which, in specific circumstances, may make using proprietary software the superior choice. These include:
  • Technical risks such as limited infrastructure, lack of technical expertise, customisation challenges, and data security concerns [29,30,31].
  • Costs associated with initial implementation, ongoing maintenance, opportunity costs, and costs of customisation [31,32,33,34].
  • Legal and compliance issues associated with data privacy, intellectual property, regulatory frameworks, liability, and accountability [35,36,37,38].
Those interested in building large-scale digital interventions that can shift the Nash equilibrium may find it helpful to consult the decision tree presented in Figure 1. It suggests that where there is a gap in local technical capacity and a need for rapid deployment, proprietary software may be the most effective approach. Where a sufficiently large local open-source community exists, good local data protection laws are in place, and interoperable systems are required, open-source becomes the obvious choice. In all other cases, while open-source may still be a viable long-term option, it will be necessary to invest in developing local open-source communities.
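The branching logic just described can be written as a small set of if–then rules. This is a hypothetical encoding based on the text alone; Figure 1 is not reproduced here, and the parameter names are our own paraphrase:

```python
def software_choice(local_capacity_gap, rapid_deployment_needed,
                    local_oss_community, data_protection_laws,
                    interoperability_needed):
    """Hypothetical if-then encoding of the guidance around Figure 1.

    All parameters are booleans; the branch order follows the order in
    which the conditions are discussed in the text.
    """
    # Capacity gap plus urgency: proprietary software may be most effective.
    if local_capacity_gap and rapid_deployment_needed:
        return "proprietary"
    # Strong OSS community, data protection laws, and interoperability needs:
    # open-source is the obvious choice.
    if local_oss_community and data_protection_laws and interoperability_needed:
        return "open-source"
    # Otherwise open-source remains viable, but only alongside investment
    # in building a local open-source community.
    return "open-source (after investing in a local OSS community)"

print(software_choice(local_capacity_gap=True, rapid_deployment_needed=True,
                      local_oss_community=False, data_protection_laws=False,
                      interoperability_needed=False))  # proprietary
```

Encoding appraisal guidance this way makes the decision rules explicit and testable, in the same spirit as the "if–then" rules the paper proposes for appraisal.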
It is also important to note from the results in the Truth Table (Table 3) that while the use of open-source software may not be an essential component for a digital intervention to achieve systemic change, all six of the other attributes must be present. If even one of them is missing, the digital intervention is likely to fail to achieve its objectives. For policymakers, this reframing offers a practical diagnostic tool: rather than prioritising software attributes such as open-source code, the focus should be on building incentive-compatible ecosystems in which digital systems embed into governance practices and create self-reinforcing behaviour.
While our focus has been on identifying conditions under which digital interventions can shift equilibria, it is equally important to recognise the structural constraints that shape outcomes in low- and middle-income countries. Even where technical capacity exists, limitations in state capability, regulatory reach, or institutional trust can slow or reverse progress. Several of our cases demonstrate that high uptake or strong design features are not always sufficient for durable equilibrium change if complementary factors, such as stable governance, local adaptation, and adequate infrastructure, are missing. Acknowledging these constraints does not undermine the contribution of digital interventions; rather, it highlights that their success depends on alignment with the broader institutional and developmental context.
While using QCA as a method, it is important to note that QCA does not establish causality in the same way as large-N statistical studies; rather, it identifies causal configurations, i.e., sets of conditions that together are sufficient or necessary for an outcome [23,27]. This approach foregrounds mechanisms of action rather than treating interventions as “black boxes.” By focusing on conjunctural causation, QCA helps explain why the same intervention can succeed in one context but fail in another, thereby offering insights into how digital interventions may shift equilibria under specific institutional conditions. Our framework complements established evaluation approaches, including Theory of Change and Realist Evaluation, by focusing on whether incentives settle into a stable, self-reinforcing configuration.

5. Limitations

This study is subject to several limitations. We note five that bound the scope of our conclusions:
  • First, the selection of digital interventions was purposive and not representative; cases were chosen based on their visibility and data availability, which may have biased the results toward better-documented or more prominent initiatives. Additionally, the study examines only 13 cases, which limits the generalisability of the findings. While QCA is well-suited for small-N analysis, broader claims would benefit from replication in other sectors or geographical contexts.
  • Second, while crisp-set Qualitative Comparative Analysis (csQCA) enables the identification of necessary and sufficient conditions, it does not offer probabilistic inference or generalisability beyond the sample.
  • Third, outcome coding relies on interpretive assessments of whether a “Nash shift” occurred. Although triangulated with expert opinion and documentation, these judgments are inherently subjective.
  • Fourth, the dataset reflects a survivorship bias: most included interventions are still operational. Failed or abandoned interventions, which might offer valuable counterfactual insights, are underrepresented.
  • Lastly, while this study focuses on structural success, it does not assess unintended consequences or distributional harms, which remain critical in evaluating real-world impact.

6. Use of AI

Artificial Intelligence tools, such as ChatGPT (4.0) and SciSpace (1.5), were used to support the literature review, synthesis, consistency checks across drafts, and refinement of section headings and summaries. No AI-generated content was used in coding the dataset, interpreting empirical results, or drawing analytical conclusions. The authors developed all qualitative interpretations and theoretical framing. The use of AI was restricted to editorial assistance, including copyediting suggestions, formatting consistency, and verifying citation metadata.

7. Conclusions and Policy Implications

This study contributes to the theory of institutional transformation by reframing digital interventions as mechanisms for shifting equilibria. Drawing from game theory, particularly Nash equilibrium, we conceptualise systemic change not as a function of scale or adoption alone, but as the creation of a new stable state in which all actors—users, providers, regulators—have self-reinforcing incentives to remain in the new configuration. By showing that open-source code is neither necessary nor sufficient, we also contribute to debates in digital governance. Technological openness may facilitate reuse or localisation, but unless it is accompanied by capacity, ownership, and incentive-compatible ecosystems, it cannot deliver systemic transformation.
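The equilibrium-shift idea can be illustrated with a toy 2×2 coordination game. The payoffs below are hypothetical and purely expository: before the intervention, the legacy convention is the only self-enforcing outcome; after the intervention raises the return to digital interaction, the legacy convention unravels and digital adoption becomes the unique stable state.

```python
from itertools import product

def pure_nash(payoffs):
    """Pure-strategy Nash equilibria of a two-player game.
    payoffs[(a, b)] = (payoff to player 1, payoff to player 2)."""
    acts = [0, 1]  # 0 = legacy behaviour, 1 = digital behaviour
    equilibria = []
    for a, b in product(acts, acts):
        p1, p2 = payoffs[(a, b)]
        best1 = all(p1 >= payoffs[(alt, b)][0] for alt in acts)
        best2 = all(p2 >= payoffs[(a, alt)][1] for alt in acts)
        if best1 and best2:  # neither player gains by deviating alone
            equilibria.append((a, b))
    return equilibria

# Hypothetical payoffs before: coordinated digital use pays less than legacy,
# and each player would defect back to legacy if both tried to switch.
before = {(0, 0): (3, 3), (0, 1): (2, 0), (1, 0): (0, 2), (1, 1): (1, 1)}
# After the intervention: digital pays even for a lone adopter, and mutual
# digital use dominates the old convention.
after  = {(0, 0): (3, 3), (0, 1): (2, 4), (1, 0): (4, 2), (1, 1): (5, 5)}

print(pure_nash(before))  # [(0, 0)] -> legacy is the only stable state
print(pure_nash(after))   # [(1, 1)] -> digital is now self-enforcing
```

In this stylised sense, a successful intervention does not merely attract users; it changes the payoff structure so that remaining in the new configuration is every actor's best response.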
This paper also contributes to the digital governance literature by introducing a conceptual and methodological lens—rooted in game theory and configurational causality—to evaluate whether a digital intervention reconfigures institutional incentives in a stable, enduring way. The Nash equilibrium framework provides a formal structure for thinking about “durable change” and complements existing sociotechnical models that focus on integration and inclusion.
For policymakers, this implies a strategic reorientation. Rather than prematurely focusing on software choices or digital “solutions,” the priority should be identifying system-wide frictions, designing incentive-compatible architectures, and ensuring long-term institutional commitment. Regulatory clarity and local ownership are more important than technological elegance. For funders and implementers, the findings suggest that interventions with partial configurations—e.g., scale without incentives, or funding without alignment—are unlikely to persist. Support should be contingent not just on design quality but on whether the ecosystem as a whole is ready to shift. Ultimately, the promise of digital transformation lies not in digital tools themselves but in their capacity to reshape institutional behaviour. When they do so successfully, they not only improve service delivery—they redefine what is rational, sustainable, and just within the systems they touch. Our findings offer several implications for information technology for development (ITD) theory and practice.
First, they confirm that digital interventions can contribute to development not through technological features alone, but by realigning institutional incentives and actor behaviour. Successful interventions, such as Aadhaar, UPI, Chalo, and Swiggy, demonstrate how IT can shift equilibria by reducing transaction costs, improving coordination, and formalising trust relationships. These shifts, in turn, lead to development outcomes, including expanded financial access, improved public health data, and enhanced service delivery.
Second, we show that open-source status, often assumed to promote transparency or replicability, is neither necessary nor sufficient for system-wide change. Instead, regulatory enablement, revenue stability, and a clearly defined systemic barrier are among the key attributes that consistently appear when digital interventions deliver meaningful development impacts.
Third, our analysis illustrates the value of configurational thinking in ITD. Rather than seeking universal best practices, policymakers should assess whether the enabling ecosystem around a digital intervention is sufficient to produce durable results. This is particularly relevant for funders and multilaterals investing in platform-scale reforms.
This study contributes to ITD by offering a framework to evaluate digital interventions not only by their scale or technical design, but by their ability to shift institutional equilibria in a way that supports inclusive development. By using csQCA to identify the necessary conditions for such transformation, we provide both researchers and practitioners with a tool for diagnostic design and strategic evaluation.
Our research suggests that LMIC policymakers should avoid focusing narrowly on open-source tools or user numbers. Instead, they should ask: Does this intervention change the rules of the game? If the answer is yes, and if the enabling conditions are present, digital tools can lead to sustainable and equitable development outcomes.
Our findings can be summarised as a set of five practical rules that can guide policymakers and funders at the appraisal stage:
  • If regulatory backing and a sustainable revenue model are absent, interventions are unlikely to persist, regardless of technical design.
  • If scale is achieved without incentives aligned across actors, interventions may reach large numbers temporarily, but will not shift the equilibrium.
  • If a systemic barrier is not clearly identified and targeted, interventions risk coexisting with legacy systems rather than replacing them.
  • If key prerequisites for the specific intervention (infrastructure, capacity) are missing, early adoption is unlikely to translate into durable transformation.
  • If sufficient time is not allowed, evaluations risk premature conclusions before incentives have stabilised.
These rules, derived from our QCA findings, provide policymakers with a practical checklist for early-stage project assessment. They help distinguish between interventions that are likely to generate lasting systemic change and those that may demonstrate temporary uptake without durable impact.
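As a sketch, the five rules can be expressed as a simple screening function. This is illustrative shorthand only; the field names and flag messages are our own, not a validated appraisal instrument:

```python
def appraise(intervention: dict) -> list[str]:
    """Flag configuration gaps according to the five appraisal rules.
    Keys are boolean fields (our own shorthand for the QCA conditions)."""
    flags = []
    if not (intervention["regulatory_backing"] and intervention["revenue_model"]):
        flags.append("unlikely to persist: regulatory backing + revenue model incomplete")
    if intervention["scale"] and not intervention["aligned_incentives"]:
        flags.append("scale without aligned incentives: no equilibrium shift expected")
    if not intervention["targets_systemic_barrier"]:
        flags.append("no clearly targeted systemic barrier: risk of coexisting with legacy")
    if not intervention["prerequisites_present"]:
        flags.append("missing prerequisites: early adoption unlikely to become durable")
    if not intervention["sufficient_time"]:
        flags.append("insufficient time allowed: evaluation would be premature")
    return flags

# A hypothetical candidate that has scaled but lacks a revenue model and time.
candidate = dict(regulatory_backing=True, revenue_model=False, scale=True,
                 aligned_incentives=True, targets_systemic_barrier=True,
                 prerequisites_present=True, sufficient_time=False)
for flag in appraise(candidate):
    print("FLAG:", flag)
```

An empty flag list corresponds to the full configuration that our analysis finds jointly sufficient for an equilibrium shift; any non-empty list points to the specific rule an appraiser should probe further.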

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/info16090714/s1, Figure S1: The design of a concept-based multiple-choice question; Figure S2: Benefits of a Commodity Futures Exchange; Figure S3: ONDC platform structure (source: ChatGPT Image); Figure S4: Risks and Benefits Associated with Different Approaches towards E-Commerce (Source: ChatGPT Image).

Author Contributions

Conceptualisation, N.M.; methodology, N.M.; software, N.M.; validation, N.M., R.R. and D.S.; formal analysis, N.M., R.R. and D.S.; investigation, N.M., R.R. and D.S.; resources, N.M., R.R. and D.S.; data curation, N.M., R.R. and D.S.; writing—original draft preparation, N.M.; writing—review and editing, N.M., R.R. and D.S.; visualisation, N.M.; supervision, N.M.; project administration, N.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article/Supplementary Materials. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank Naaz Narang for her support in data research.

Conflicts of Interest

Author Ritika Ramasuri is employed by Sattva Consulting. Author Divya Saraf is employed by AdagioVR. All authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Nash, J.F. Equilibrium points in n-person games. Proc. Natl. Acad. Sci. USA 1950, 36, 48–49. [Google Scholar] [CrossRef] [PubMed]
  2. Smith, J.M.; Price, G.R. The Logic of Animal Conflict. Nature 1973, 246, 15–18. [Google Scholar] [CrossRef]
  3. Taylor, P.D.; Jonker, L.B. Evolutionary stable strategies and game dynamics. Math. Biosci. 1978, 40, 145–156. [Google Scholar] [CrossRef]
  4. Li, J.; Xu, C.; Huang, L. Evolutionary Game Analysis of the Social Co-governance of E-Commerce Intellectual Property Protection. Front. Psychol. 2022, 13, 832743. [Google Scholar] [CrossRef]
  5. Chen, D.; Chen, B. Evolutionary game analysis on decision-making behaviors of participants in mega projects. Humanit. Soc. Sci. Commun. 2023, 10, 921. [Google Scholar] [CrossRef]
  6. Yue, X.; Durrani, S.K.; Li, R.; Liu, W.; Manzoor, S.; Anser, M.K. Evolutionary game model for the behavior of private sectors in elderly healthcare public–private partnership under the condition of information asymmetry. BMC Health Serv. Res. 2025, 25, 181. [Google Scholar] [CrossRef]
  7. Yao, B.; Shanoyan, A.; Schwab, B.; Amanor-Boadu, V. The role of mobile money in household resilience: Evidence from Kenya. World Dev. 2023, 165, 106198. [Google Scholar] [CrossRef]
  8. Avgerou, C. Discourses on ICT and development. Inf. Technol. Int. Dev. 2010, 6, 1–18. [Google Scholar]
  9. Madon, S. e-Governance for Development: A Focus on Rural India; Palgrave Macmillan: London, UK, 2009. [Google Scholar] [CrossRef]
  10. Toyama, K. Technology as amplifier in international development. In Proceedings of the 2011 IConference, Seattle, WA, USA, 8–11 February 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 75–82. [Google Scholar] [CrossRef]
  11. Walsham, G. ICT4D research: Reflections on history and future agenda. Inf. Technol. Dev. 2017, 23, 18–41. [Google Scholar] [CrossRef]
  12. Masiero, S.; Arvidsson, V. Degenerative outcomes of digital identity platforms for development. Inf. Syst. J. 2021, 31, 903–928. [Google Scholar] [CrossRef]
  13. Masiero, S.; Bailur, S. Digital identity for development: The quest for justice and a research agenda. Inf. Technol. Dev. 2021, 27, 1–12. [Google Scholar] [CrossRef]
  14. Veeraraghavan, R. Cat and Mouse Game: Patching Bureaucratic Work Relations by Patching Technologies. Proc. ACM Hum. Comput. Interact. 2021, 5, 21. [Google Scholar] [CrossRef]
  15. Samuel, M.; Doctor, G.; Christian, P.; Baradi, M. Drivers and barriers to e-government adoption in Indian cities. J. Urban. Manag. 2020, 9, 408–417. [Google Scholar] [CrossRef]
  16. Allmann, K.; Radu, R. Digital footprints as barriers to accessing e-government services. Glob. Policy 2023, 14, 84–94. [Google Scholar] [CrossRef]
  17. Djatmiko, G.H.; Sinaga, O.; Pawirosumarto, S. Digital Transformation and Social Inclusion in Public Services: A Qualitative Analysis of E-Government Adoption for Marginalized Communities in Sustainable Governance. Sustainability 2025, 17, 2908. [Google Scholar] [CrossRef]
  18. Zelenkov, Y.; Lashkevich, E. Does information and communication technology really affect human development? An empirical analysis. Inf. Technol. Dev. 2023, 29, 329–347. [Google Scholar] [CrossRef]
  19. Thomas, J.; O’Mara-Eves, A.; Brunton, G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: A worked example. Syst. Rev. 2014, 3, 67. [Google Scholar] [CrossRef]
  20. Ragin, C.C. The Comparative Method; University of California Press: Oakland, CA, USA, 1987. [Google Scholar]
  21. Ragin, C.C. What Is Qualitative Comparative Analysis (QCA)? National Centre for Research Methods (NCRM): Tucson, AZ, USA, 2006. [Google Scholar]
  22. Greckhamer, T. Qualitative Comparative Analysis in Survey Research. In The SAGE Handbook of Survey Development and Application; Ford, L., Terri, A., Eds.; SAGE Publications: Thousand Oaks, CA, USA, 2023; pp. 368–379. [Google Scholar] [CrossRef]
  23. Ragin, C.C. Redesigning Social Inquiry Fuzzy Sets and Beyond; University of Chicago Press: Chicago, IL, USA, 2008. [Google Scholar]
  24. Klarin, T. The Concept of Sustainable Development: From its Beginning to the Contemporary Issues. Zagreb Int. Rev. Econ. Bus. 2018, 21, 67–94. [Google Scholar] [CrossRef]
  25. Manioudis, M.; Meramveliotakis, G. Broad strokes towards a grand theory in the analysis of sustainable development: A return to the classical political economy. New Polit. Econ. 2022, 27, 866–878. [Google Scholar] [CrossRef]
  26. Hanckel, B.; Petticrew, M.; Thomas, J.; Green, J. The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: A systematic review of research on public health interventions. BMC Public Health 2021, 21, 877. [Google Scholar] [CrossRef]
  27. Greckhamer, T.; Furnari, S.; Fiss, P.C.; Aguilera, R.V. Studying configurations with qualitative comparative analysis: Best practices in strategy and organization research. Strategy Organ. 2018, 16, 482–495. [Google Scholar] [CrossRef]
  28. DPGA Digital Public Goods Alliance. Digital Public Goods Alliance. 2023. Available online: https://digitalpublicgoods.net/ (accessed on 19 March 2023).
  29. Bostan, S.; Johnson, O.A.; Jaspersen, L.J.; Randell, R. Contextual Barriers to Implementing Open-Source Electronic Health Record Systems for Low- and Lower-Middle-Income Countries: Scoping Review. J. Med. Internet Res. JMIR 2024, 26, e45242. [Google Scholar] [CrossRef] [PubMed]
  30. Archer, N.; Lokker, C.; Ghasemaghaei, M.; DiLiberto, D. eHealth Implementation Issues in Low-Resource Countries: Model, Survey, and Analysis of User Experience. J. Med. Internet Res. JMIR 2021, 23, e23715. [Google Scholar] [CrossRef] [PubMed]
  31. Syzdykova, A.; Malta, A.; Zolfo, M.; Diro, E.; Oliveira, J.L. Open-Source Electronic Health Record Systems for Low-Resource Settings: Systematic Review. JMIR Med. Inform. 2017, 5, e44. [Google Scholar] [CrossRef] [PubMed]
  32. Jayatissa, P.; Hewapathirana, R. Enhancing Interoperability Among Health Information Systems in Low and Middle-Income Countries: A Review of Challenges and Strategies. Eur. Mod. Stud. J. 2023, 7, 334–340. [Google Scholar] [CrossRef]
  33. Yılmaz, E. The Importance and Economic Advantages of Using National Open Source Software in Public Institutions. Int. Sci. Vocat. Stud. J. 2024, 8, 202–210. [Google Scholar] [CrossRef]
  34. Silva, D.G.; Coutinho, C.; Costa, C.J. Factors influencing free and open-source software adoption in developing countries—An empirical study. J. Open Innov. Technol. Mark. Complex. 2023, 9, 100002. [Google Scholar] [CrossRef]
  35. Dunbar, E.; Elizabeth Olsen, H.; Salomon, E.; Bhatt, S.; Mutuku, R.; Wasunna, B.; Edwards, J.; Kolko, B.; Holeman, I. Towards Responsible Data Practices in Digital Health: A Case Study of an Open Source Community’s Journey. In Proceedings of the Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  36. Yeng, P.K.; Diekuu, J.-B.; Abomhara, M.; Elhadj, B.; Yakubu, M.A.; Oppong, I.N.; Odebade, A.; Fauzi, M.A.; Yang, B.; El-Gassar, R. HEALER2: A Framework for Secure Data Lake Towards Healthcare Digital Transformation Efforts in Low and Middle-Income Countries. In Proceedings of the 2023 International Conference on Emerging Trends in Networks and Computer Communications (ETNCC), Windhoek, Namibia, 16–18 August 2023; pp. 1–9. [Google Scholar] [CrossRef]
  37. Gathecha, G.; Ombiro, O.; Shelden, K.; Stake, A.; Murugami, M.; Mungai, E.; Odhiambo, G.; Maree, E.; Muthusamy, R.; Marimuthu, M.; et al. Integrating digital solutions into national health data systems through public–private collaboration: An early experience of the SPICE platform in Kenya. Digit. Health 2023, 9, 20552076231203936. [Google Scholar] [CrossRef] [PubMed]
  38. Kozlakidis, Z.; Kealy, J.; Henderson, M.K. Digitization of Healthcare in LMICs: Challenges and Opportunities in Data Governance and Data Infrastructure. In Digitalization of Medicine in Low- and Middle-Income Countries: Paradigm Changes in Healthcare and Biomedical Research; Kozlakidis, Z., Muradyan, A., Sargsyan, K., Eds.; Springer International Publishing: Cham, Switzerland, 2024; pp. 83–90. [Google Scholar] [CrossRef]
Figure 1. Open-Source versus proprietary decision tree.
Table 1. Selected digital interventions.
# | Acronym | Full Form | Brief Description
1 | AADHAAR | Aadhaar Digital Identity Platform | A biometric-based digital identity system that assigns a unique 12-digit ID to every Indian resident. It serves as a foundational identity layer and is used for authentication in banking, welfare, and other services.
2 | CHALO | Chalo Smart Bus Platform | A technology platform aimed at digitising and improving the efficiency of bus transport systems in Indian cities. It provides real-time bus tracking and digital ticketing.
3 | DHIS2 | District Health Information Software 2 | An open-source health management information system used globally to collect, manage, and analyse health data, often integrated into national health reporting systems.
4 | DIGIT | Digital Infrastructure for Governance, Impact & Transformation | An open-source platform developed by eGov Foundation to support digital service delivery for urban governance in India, offering modules for property tax, water, and sanitation.
5 | DIKSHA | Digital Infrastructure for Knowledge Sharing | An open-source platform developed by India’s Ministry of Education to support school education through digital content, teacher training, and assessments.
6 | Ei ASSET | Educational Initiatives’ Adaptive Learning Platform | An adaptive learning tool that personalises learning paths for students based on their performance, aiming to improve foundational learning outcomes.
7 | ICDS-CAS | Integrated Child Development Services-Common Application Software | A digital tool designed to support frontline Anganwadi workers by digitising record-keeping, monitoring, and service delivery for child nutrition and early development.
8 | NCDEX | National Commodity & Derivatives Exchange | A digital agricultural commodities exchange enabling price discovery and risk management for farmers and traders.
9 | ONDC | Open Network for Digital Commerce | An initiative by the Government of India to create an open, interoperable digital commerce network to break platform monopolies and empower small retailers.
10 | SORMAS | Surveillance Outbreak Response Management and Analysis System | An open-source digital tool developed in Germany for real-time epidemic and outbreak management, adopted by several LMICs.
11 | Swiggy | Swiggy | A private-sector food delivery platform that has evolved into a hyperlocal logistics provider for urban consumers.
12 | UPI | Unified Payments Interface | A real-time payment system developed by the National Payments Corporation of India, enabling instant bank-to-bank transactions via mobile devices.
13 | X-Road | X-Road | An open-source data exchange layer developed in Estonia that enables secure and standardised communication between public and private information systems.
Table 2. Classifying the cases into crisp sets.
# | Digital Interventions (Cases) | Open-Source Code | Regulatory Enablement | Revenue Model in Place | Substantial Scale Achieved | Identifiable Systemic Barrier It Seeks to Eliminate? | Presence of Prerequisites | Sufficient Time for Implementation | Shift in NE (Outcome)
1 | AADHAAR | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1
2 | CHALO | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1
3 | DHIS2 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 0
4 | DIGIT | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
5 | DIKSHA | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0
6 | Ei ASSET | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0
7 | ICDS-CAS | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0
8 | NCDEX | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1
9 | ONDC | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
10 | SORMAS | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 0
11 | Swiggy | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1
12 | UPI | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1
13 | X-ROAD | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Table 3. The Truth Table.
# | Open-Source Code | Regulatory Enablement | Revenue Model in Place | Substantial Scale Achieved | Identifiable Systemic Barrier It Seeks to Eliminate? | Presence of Prerequisites | Sufficient Time for Implementation | Number of Cases | Outcome
1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0
2 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 5 | 1
3 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0
4 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0
5 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 2 | 0
6 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0
7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 1
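As a cross-check, the crisp-set codings in Table 2 can be re-analysed programmatically. The sketch below rebuilds the truth-table configurations and confirms that each of the six non-technological conditions is present in every positive-outcome case, while open-source code is not, and that the six-condition conjunction is sufficient. This is a verification aid, not the analysis software used in the study:

```python
from collections import Counter

# Columns: OS, REG, REV, SCALE, BARRIER, PREREQ, TIME, OUTCOME (from Table 2).
cases = {
    "AADHAAR": "01111111", "CHALO": "01111111", "DHIS2": "11011010",
    "DIGIT": "11111111", "DIKSHA": "11100010", "Ei ASSET": "00100010",
    "ICDS-CAS": "10010100", "NCDEX": "01111111", "ONDC": "11000000",
    "SORMAS": "11011010", "Swiggy": "01111111", "UPI": "01111111",
    "X-ROAD": "11111111",
}
names = ["OS", "REG", "REV", "SCALE", "BARRIER", "PREREQ", "TIME"]
rows = {k: ([int(c) for c in v[:7]], int(v[7])) for k, v in cases.items()}

# Truth table: distinct condition profiles with their case counts.
profiles = Counter(tuple(conds) for conds, _ in rows.values())
print(len(profiles))  # 7 distinct configurations, matching Table 3

# Necessity: a condition is necessary if present in every positive case.
positives = [conds for conds, out in rows.values() if out == 1]
necessary = [n for i, n in enumerate(names) if all(p[i] == 1 for p in positives)]
print(necessary)  # ['REG', 'REV', 'SCALE', 'BARRIER', 'PREREQ', 'TIME'] -- OS drops out

# Sufficiency: every case exhibiting all six non-OS conditions has outcome 1.
assert all(out == 1 for conds, out in rows.values() if all(conds[1:]))
```

The same checks can be rerun directly if any coding in Table 2 is revised, making the diagnostic reproducible from the published data alone.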
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Mor, N.; Ramasuri, R.; Saraf, D. Diagnosing Structural Change in Digital Interventions: A Configurational Evaluation Framework. Information 2025, 16, 714. https://doi.org/10.3390/info16090714


