Systematic Review

Social Impact Measurement: A Systematic Literature Review and Future Research Directions

School of Environment, Enterprise & Development, University of Waterloo, Waterloo, ON N2L 3G1, Canada
* Author to whom correspondence should be addressed.
World 2023, 4(4), 816-837; https://doi.org/10.3390/world4040051
Submission received: 11 September 2023 / Revised: 21 November 2023 / Accepted: 22 November 2023 / Published: 1 December 2023

Abstract

This paper explores the current state of the social impact measurement (SIM) field to better understand common practices in measuring the post-intervention social impact of a program or project and to identify strategies to improve measurement in practice. This study employed a systematic literature review. Articles were manually coded deductively and inductively in NVivo to complete a descriptive and thematic analysis of the literature. The thematic analysis provided an in-depth understanding of the SIM field. We found that similarities existed across the definitions of social impact (e.g., environmental impact is part of social impact). Additionally, social return on investment (SROI) is the most common measurement model and theory of change was identified as a core concept across SIM literature. Strategies are presented for practitioners to consider when measuring social impact, including: (i) engage stakeholders throughout the process, (ii) mobilize existing operational data, (iii) increase measurement capacity, and (iv) use both qualitative and quantitative data. This study reveals the nuances of SIM based on academic literature published across the globe over the span of a decade. It places emphasis on the post-intervention stage and identifies strategies to improve the application of measurement models in practice. Lastly, it outlines future research directions.

1. Introduction

With growing interest in the measurement of social impact, organizations are seeking methods that suit their needs and address pressures from internal and external stakeholders [1,2,3]. On a broader scale, organizations measure social impact to demonstrate their contribution to the achievement of the Sustainable Development Goals (SDGs) [4]. This is especially important as we have reached the halfway mark of the SDG 2015–2030 timeline; according to Sachs et al. [5], global SDG performance has stagnated since 2019. At the organization level, social impact is measured to evaluate internal progress on goals and to inform strategic decision making [4,6]. Additionally, funders and investors may require organizations to increase the transparency, accountability, and legitimacy of the impact of their programs, projects, and activities through measurement and reporting practices [7,8,9]. Post-intervention measurement is necessary to evaluate the actual versus proposed impact. Furthermore, it is important to investigate the measurement of social impact at the project and program levels because this is where implementation often occurs [7]. This is particularly relevant when the interventions are material to an organization’s performance and the achievement of common goals, such as the SDGs [10].
Given the maturity of financial accounting, there are well-established national and international rules that organizations must follow when producing financial statements for external users [11]. Over the years, financial reporting has evolved into corporate reporting, which includes financial and non-financial information [12]. When it comes to the multiple dimensions of sustainability (ecological, social, and economic factors), reporting systems and rules are not as mature, nor is there international consensus, partly due to the varying needs of stakeholder groups and the complexity of disclosures [13,14]. Although there are calls for and progress towards harmonizing sustainability reporting standards, the benefit of comparability should not come at the cost of completeness [15]. Measuring and reporting on social impact at the organization level remains largely voluntary [1]. Much like the variety of sustainability reporting standards and frameworks that exist, there are a multitude of models for measuring social impact with a lack of standardization in practice [16]. With the emergence of environmental, social, and governance (ESG) disclosures, there appears to be greater emphasis on how sustainability issues impact the organization rather than on how the organization might impact progress towards sustainability [17].
In this field of research, there are varying definitions and interchangeable uses of key terminology [7,18,19]. Given our focus on post-intervention measurement to understand the impact that programs and projects have on society and the environment, this study defines social impact as the result of a deliberate intervention by an individual, community, or organization with the objective of causing a positive effect on social, environmental, and economic well-being [3,20,21]. Although “evaluation”, “assessment”, and “measurement” are used interchangeably in social impact literature [3], we chose the term social impact measurement (SIM), which is based on the work by McLoughlin et al. [22] and is defined as “the process of defining, monitoring and employing measures to demonstrate benefits created for the target beneficiaries and communities through evidence of social outcomes and/or impacts” [8] (p. 225). This distinction attempts to differentiate social impact assessment (SIA), a pre-intervention planning tool designed to proactively assess the impact a project could have on the community and environment prior to its implementation, from post-intervention measurement tools [23,24]. As our study focused on post-intervention impact, which occurs once implementation is underway, we used SIM to differentiate and to add some clarity around how the practice of measuring social impact at different stages could be defined. Reporting, a final stage of the performance measurement process [25], is used to provide descriptive information about a project, program, or an organization [26] and relies on measurement processes that are verifiable through independent evaluation [14]. Therefore, advancing our understanding of SIM could lead to improved reporting and disclosures of the social impact and benefits created through deliberate interventions.
While there are similarities between evaluation and SIM, the two fields diverge in key areas such as evaluative approaches and analytic methods [27]. Furthermore, it is recognized that SIM emerged from the field of evaluation, leading to two separate areas of research [3]. SIM research provides valuable insights, including the analysis of measurement models [24,28], case studies based on the application of models [29,30], and recommendations for future research [16]. Despite a growing body of literature, there is no consensus on which measurement model should be applied when measuring social impact. With almost 100 identified measurement models [28], a lack of standardization [31], and evaluation complexities [32], implementing SIM practices can be challenging for organizations in all sectors.
The purpose of this study is to capture the current state of the SIM field and to answer the following research question: What are the most discussed practices for measuring the post-intervention social impact of a program or a project? Specifically, we analyzed the following: (i) definitions of impact, (ii) measurement models and tools, (iii) indicators, (iv) measurement drivers, (v) challenges to measurement, and (vi) strategies to improve measurement practices. We employed a systematic literature review (SLR) approach to identify and analyze common themes organized by subtopics. Despite similarities in the topic, our study differs from other reviews (e.g., Corvo et al. [28]) in terms of focus. Beyond identifying measurement models, we sought to better understand what drives post-intervention measurement practices at the project and program level, the challenges assessors face, and strategies to help overcome those challenges.
This study begins by describing the methods used to complete the SLR, followed by the literature analysis and results which are divided into two parts: descriptive analysis and thematic analysis. The paper concludes with an overview of the implications for practitioners, identified gaps in the literature, and future research directions.

2. Materials and Methods

An SLR was selected as an appropriate method to answer the research question [33]. This method allowed us to synthesize a broad range of literature that was identified through a pre-defined search and screening criteria [34,35]. This study followed a narrative synthesis approach by applying a thematic analysis technique to organize main ideas and themes found in SIM literature [36]. To achieve the objectives of this study, the following methodological steps were performed.

2.1. Document Sources

The literature search was conducted across two sources, ProQuest and Scopus, with the aim of capturing a diverse range of articles across multiple disciplines. Both sources provide access to literature from around the world and have advanced search functions to narrow searches to specific publication dates, source types, document types, and languages [37,38]. ProQuest is a multidisciplinary platform that searches up to 46 databases (e.g., ABI/INFORM), producing results from various subject areas [38]. Scopus searches over 80 million documents across 240 disciplines ensuring a breadth of content coverage and offers a comparable source to Web of Science (WoS) with respect to subjects covered and size [39,40]. Therefore, by using both ProQuest and Scopus, a wide variety of literature from multiple disciplines and geographic locations was sourced for screening.

2.2. Search Criteria

To capture a wide selection of SIM literature, inclusion and exclusion criteria were applied [35]. The initial advanced search on ProQuest limited the results to scholarly journals whereas the Scopus search was refined to only include documents labelled as articles and reviews. The document search was limited to English sources published during a 10-year period (January 2012–March 2022), as done by others [41], to identify recent literature since our focus is on the current state of SIM research. No geographic or discipline exclusions were applied to the search criteria.

2.3. Keyword Search and Screening Results

A series of keywords were tested based on the scope of the study and the research question. After several test searches, the following string was selected for the document search in ProQuest and Scopus on 29 March 2022:
Title only (“environment* impact” OR “soci* impact” OR “communit* impact” OR “sustainab* impact”) AND title-abstract-keywords (assessment OR measurement OR evaluation OR “monetary measure”) AND title-abstract-keywords (project OR program OR placement OR organization OR “civil society” OR initiative).
The search in ProQuest and Scopus produced 812 and 1125 document results, respectively. The 1937 documents were subsequently uploaded to Covidence, a systematic review management platform designed to support the scanning process [42]. Upon upload, duplicates were removed, leaving a total of 1407 documents. At this stage, the documents were consolidated and there was no differentiation between the document results from ProQuest and Scopus.
The selection process was initiated by a title scan of the 1407 documents that resulted in the exclusion of 1018 based on fit. The second stage of the selection process was an abstract scan performed by the first reviewer which excluded an additional 273 articles. The remaining 116 articles were either labelled as “yes” or “maybe” by the first reviewer. An additional abstract scan by the second reviewer resulted in the exclusion of an additional 36 articles. The final stage of the selection process was a full paper scan of the 80 remaining articles resulting in 12 more exclusions, leaving 68 articles for deductive and inductive coding.
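To make the selection funnel easier to trace, the short sketch below reproduces the counts reported in this subsection as simple arithmetic; the stage labels are ours and the figures are taken directly from the text.

```python
# Selection funnel from Section 2.3; counts are those reported in the text.
total_retrieved = 812 + 1125          # ProQuest + Scopus results -> 1937 documents
after_deduplication = 1407            # after duplicate removal in Covidence

exclusions = [
    ("Title scan", 1018),
    ("Abstract scan, first reviewer", 273),
    ("Abstract scan, second reviewer", 36),
    ("Full-paper scan", 12),
]

remaining = after_deduplication
for stage, excluded in exclusions:
    remaining -= excluded
    print(f"{stage}: excluded {excluded}, {remaining} remaining")

assert remaining == 68  # articles carried forward for coding
```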
Articles were excluded based on fit using screening guidance criteria. Specifically, articles were excluded for the following reasons: (i) a focus on pre-intervention impact assessment, life cycle assessment, or social life cycle assessment, (ii) the wrong unit of analysis or subject focus (e.g., individual health or research, and not environment or society), (iii) the environmental or social impact was not the result of a deliberate intervention but an activity/process of industrial operations or the result of the environmental impact of a disaster, and (iv) there was no specific reference to the use or analysis of measurement models or tools, including indicators. Figure 1 outlines the search and screening results of the article selection process using the PRISMA model [43].

2.4. Data Analysis

The 68 articles selected through the process outlined in Figure 1 were manually coded deductively and inductively by a single reviewer using NVivo 20 software.
Deductive coding was conducted using a predetermined list of descriptive variables to gain a general understanding of the current state of research on the topic of SIM. The variables include reference data (i.e., year of publication and journal), methodology data (i.e., research approach, location of study, and theory), and contextual data (i.e., sector focus) [33].
The inductive approach was applied to identify common themes that emerged across the literature during the full-text review process. These findings were subsequently synthesized to form a narrative of the SIM research landscape and to answer the research question [36]. The inductive coding was guided by the following sub-questions developed based on the research question and the authors’ knowledge of SIM:
  • What definitions are provided for social, environmental, sustainable, and community impact?
  • What common measurement methods are discussed and applied?
  • What are the indicators used for measurement?
  • What are the measurement drivers?
  • What are the challenges to measurement?
  • What are the strategies for improved measurement?
The thematic analysis was guided by the research question and sub-questions developed for the inductive coding process [36,44]. Using the sub-questions, we created the following top-level codes: definitions of impact, measurement models (the term models refers to both frameworks and methods [24]), indicators, drivers for measurement, challenges to measurement, and strategies to improve measurement practices. The content from each full paper scan was organized by top-level codes and the child codes that were identified inductively. Subsequently, the content captured within each code was synthesized based on similarities and differences, ultimately describing key themes across the literature [36,45]. Unique to Section 3.2.2 on measurement models, this study conducted a query search for each child code captured in all of the selected articles. This additional step was completed to confirm the number of articles in which each measurement model was discussed or applied. All of the measurement models discussed or applied (beyond mentioning the name of the model in a table or appendix in the paper being reviewed) in five or more papers were analyzed to help answer our research question.
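The final step described above, confirming how many articles discussed or applied each measurement model and retaining only those at or above the five-article threshold, was performed as query searches on child codes in NVivo. The sketch below is a hypothetical code analogue of that tally, with made-up article IDs and entries, included only to illustrate the threshold logic.

```python
from collections import Counter

# Hypothetical coding output: article ID -> measurement models discussed or applied
# in that article (entries are illustrative, not the study's actual data).
article_models = {
    "A01": {"SROI", "Logic model"},
    "A02": {"SROI", "GRI"},
    "A03": {"IRIS+", "SROI"},
    "A04": {"SAA"},
}

def models_at_threshold(article_models, threshold=5):
    """Count the number of articles each model appears in and keep those meeting the threshold."""
    counts = Counter(model for models in article_models.values() for model in models)
    return {model: n for model, n in counts.items() if n >= threshold}

# With the toy data above, a threshold of 2 keeps only SROI (3 articles).
print(models_at_threshold(article_models, threshold=2))
```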

3. Results

3.1. Descriptive Results

The string search employed for this SLR produced 68 usable articles: 47 empirical studies and 21 conceptual studies. The articles examined were from multiple disciplines, as indicated by the journals of publication. While most journals had only one corresponding article, five articles were published in the journal Sustainability, two in the journal Environmental Impact Assessment Review, and two in the Journal of Social Entrepreneurship. Table A1 shows the range of 62 journals and the frequency of publication for each.
Research in the field of SIM has grown over the period examined in this study, with a high of 18 articles published in 2021. As shown in Figure 2, growth has been steady since 2018, with 70% of the articles in this study published since then. The results for 2022 were based on papers published between January and March due to the timeframe used during the initial search, as noted in Section 2.2.
The 68 articles examined in this study applied a range of research methods, with qualitative (26 articles) as the most common, followed by mixed methods (18 articles) and literature reviews (18 articles). Quantitative methods were applied in three articles. The remaining articles included two perspectives and a debate.
Focusing only on the empirical studies (n = 47), the countries of research (i.e., study areas) spanned five continents, with Europe having the greatest representation at 18 articles, double the number for North America, which was second with a total of nine articles. Approximately 10% of the studies applied a global scope by capturing multiple countries from different continents in their research (see Figure 3). Despite no geographic locations being excluded from the search criteria, not all continents were represented in the results. For example, the search did not yield any studies from South America.
Contextual data from the articles were analyzed to gain an understanding of the sector focus for the different studies. Common categories presented themselves, often falling under the umbrella of the public sector and civil society. For example, social enterprise appeared across several different articles as the focal organization type [46,47,48]. Similarly, many articles focused on impact investing [49,50,51], sometimes in relation to the topic of social impact bonds [52,53]. Other notable sectors or sub-sectors included research/education [54,55], food/agriculture [6,56], cities [57,58], healthcare [59,60], and sport [30,61].
Although theoretical perspectives were not evident in all of the articles reviewed, some studies discussed or applied different theories. Several theories were identified across the literature with multiple-constituency theory mentioned in three articles, whereas institutional theory and legitimacy theory were mentioned in two articles each. An additional 14 theories were mentioned in the literature, including agency theory and social impact accounting theory.
For example, Costa and Pesci [20] adopted the term multi-constituency theory, which they recognized as an alternative name to multi-stakeholder theory from Kanter and Brinkerhoff [62]. The authors used this theory to bolster the idea that idiosyncratic measurements, rather than a standardized approach to SIM, may be more suitable to capture different stakeholders’ perspectives. A study by Liston-Heyes and Liu [48] used institutional theory to test hypotheses, and found that social enterprises (SEs) located in London, UK, were more likely to adopt SIM due to normative isomorphic pressures. This study also used legitimacy theory to establish that SEs were more likely to measure social impact when stakeholders were involved in decision making [48].

3.2. Thematic Analysis

The full review of 68 articles produced several themes guided by the six sub-questions presented in Section 2.4, and included (i) definitions of impact, (ii) measurement models and tools, (iii) indicators, (iv) measurement drivers, (v) challenges to measurement, and (vi) strategies to improve measurement practices.

3.2.1. Definitions of Impact

Over 30% of the papers reviewed (21 of 68) referred to how social impact is defined. As noted by other authors, there is no set definition for social impact and its interpretation varies across the literature [7,18]. Among the articles reviewed, three commonly referenced definitions of social impact stood out:
  • “Social impacts include all social and cultural consequences to human populations of any public or private actions that alter the ways in which people live, work, play, relate to one another, organize to meet their needs, and generally cope as members of society” [63] (p. 59) as cited in [21,64,65].
  • “The portion of the total outcome that happened as a result of the activity of an organization, above and beyond what would have happened anyway” [66] (p. 7) as cited in [3,28,67].
  • “A logic chain of results in which organizational inputs and activities lead to a series of outputs, outcomes, and ultimately to a set of societal impacts” [68] (p. 3) as cited in [3,20,21,55].
When examining these definitions chronologically, Burdge and Vanclay [63] (p. 59) did not specify organizations impacting society, instead speaking more broadly to the “consequences to human populations of any public or private actions”. Further, the authors provided a stronger alignment with the definition of sustainable development, which is defined in the Brundtland Report as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs” [69] (p. 41). The latter definitions focus on organizational activities leading to outcomes that ultimately impact society [66,68]. Unique to Clark et al. [66] was the recognition of deadweight (the impact that would have occurred without any intervention), an important variable when determining the causal effects of inputs and activities on outputs and outcomes [70]. Similar to the definition by Burdge and Vanclay [63], Jones and Valero-Silva [71] (p. 364) drew a connection to sustainable development by suggesting that “there are two core elements to social impact: firstly, a focus on a wider set of objectives commonly referred to as the ‘triple bottom line’ [13] of social, economic and environmental impacts; secondly, the concept of well-being [72]”.
Drawing from the papers reviewed in this study, we defined social impact as the result of a deliberate intervention by an individual, community, or organization with the objective of causing a positive effect on social, environmental, and economic well-being. As such, environmental impact is explicitly included within our definition of social impact.
Given the reference to multiple forms of impact (e.g., environmental or economic) within the social impact definitions noted across the literature [20,71], definitions for other types of impact were less common. For example, the few papers that placed emphasis on community or collective impact did not provide explicit definitions [73,74]. Conversely, the study by Bottero et al. [57], which experimented with a community impact evaluation method, defined financial, fiscal, economic, social, cultural, and environmental impact as part of their urban regeneration assessment matrix.

3.2.2. Measurement Models and Tools

Much like the semantics around measurement terminology, landing on a term that clearly describes the process used to measure impact is also challenging. Grieco et al. [24] (p. 1180) used the word “model” to “indicate those instruments entrepreneurs can rely on in measuring their social impact”. Similarly, Corvo et al. [28] (p. 6) used “model” to “mean a named, documented process that is used to assess either the actual social and/or environmental impact”. Therefore, this study followed suit and used the term “model”.
Studies by Corvo et al. [28] and Grieco et al. [24] analyzed measurement models found in the literature and organized them into clusters based on common characteristics to categorize the different ways social impact can be measured. Through this process, Grieco et al. [24] identified four clusters: (1) simple social quantitative—quantitative data, focus on social impact, and a retrospective timeframe; (2) holistic complex—quali-quantitative data, holistic impact, and an ongoing/retrospective timeframe; (3) qualitative screening—qualitative data, holistic impact, and a retrospective timeframe; and (4) management—quali-quantitative data, holistic/triple bottom line impact, and an ongoing timeframe.
The study by Corvo et al. [28] aimed to capture recent work in their systematic approach to identify measurement models that have emerged since the time of the study by Grieco et al. [24]. In doing so, Corvo et al. [28] analyzed six studies that mapped SIA models and identified the three most common cluster categories: (1) performance/management models relate to the evaluation of performance goals using logic frameworks, project management, and managerial certifications; (2) auditing/quality models support stakeholder accountability by assessing the process an organization follows to carry out activities and the results of those activities; and (3) monetization models aim to quantify impact and convert results into monetary terms. As a second part of their study, Corvo et al. [28] identified 22 new models that were not captured in the previous mapping study by Grieco et al. [24] (p. 11), leading the authors to suggest that a fourth cluster could exist related to sustainability “determined by the emergence of new systems in finance requiring specific metrics and in relation to the global agenda towards sustainable development”.
Building on these works, our study identified several similar models relevant to post-intervention measurement and organized them in two different ways. For SIM models that were discussed or applied in five or more papers, we provided a description in the following section and a summary of the characteristics (as shown in Table 1), including the categorization of models based on the descriptions presented by Corvo et al. [28] and Grieco et al. [24]. For SIM models that appeared in fewer than five papers, a list of 68 models is presented in Table A2. That list also contains the four models discussed below.
1. Social Return on Investment (SROI)
SROI was the most common measurement model discussed in the literature and is described as an evaluation technique used to measure the value of social impact achieved through an intervention [67]. This measurement model can also be used as a forecast tool to predict future social value through impact [30]. SROI is referred to as a monetization model that employs a six-stage process to calculate the social return on financial investment, drawing from concepts that are familiar within the social impact tradition and the field of finance, including cost−benefit analysis [18,28,30,78]. SROI is calculated by computing the value of impact of each intervention outcome using a series of four estimated variables: (1) deadweight, the outcome that would have occurred without the intervention; (2) displacement, the negative effect caused by the intervention; (3) attribution, the contribution from other organizations to the impact; and (4) drop-off, the depletion of impact over time. Impact is then computed by calculating the net present value (NPV) of each outcome using a formula that includes the aforementioned estimated variables. Once the NPV of outcomes is calculated, it is divided by the NPV of the investment to provide the SROI value [30].
A participatory approach is applied with SROI and the model requires both technical expertise and access to large volumes of data [20,28]. Despite a well-established methodological approach to impact measurement, SROI is not fully standardized for comparability (i.e., indicators are unique and developed by the practitioner with support from stakeholders) [79]. Further, limitations exist due to the level of personal judgement applied during the measurement process [30]. Lastly, practitioners applying the SROI model should have a general understanding of theory of change (ToC) and logic model development as these tools are applied in the early stages of the SROI process [30].
Although Grieco et al. [24] categorized SROI in the “simple social quantitative” cluster, a study by Jackson and McManus [29] noted the complexity, time commitment, and level of expertise required to carry out the measurement process. This was, in part, due to the level of qualitative and quantitative analysis required [29]. Additionally, Jackson and McManus [29] stated that while the stakeholder engagement process was time consuming, it ultimately contributed to the quality of the assessment. While Dufour [64] also recognized the complexity and time intensiveness of SROI, the author acknowledged that the creators of the model made efforts to provide resource support, including a comprehensive guide [80].
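To make the calculation described above concrete, the sketch below computes an SROI ratio for a single outcome under simplifying assumptions of our own: deadweight, displacement, and attribution are treated as percentage deductions from the outcome value, drop-off depletes the value in each year after the first, and the adjusted values are discounted before dividing by the investment. The figures and function are illustrative only; the full six-stage process, indicator development, and stakeholder engagement steps are documented in the SROI guide [80].

```python
def sroi_ratio(outcome_value_per_year, years, deadweight, displacement, attribution,
               drop_off, discount_rate, investment):
    """Illustrative SROI ratio: present value of adjusted outcomes / value of the investment.

    deadweight, displacement, attribution, drop_off, and discount_rate are fractions (0-1).
    The investment is assumed to be paid up front, so its present value equals its face value.
    """
    npv_outcomes = 0.0
    for t in range(1, years + 1):
        # Deduct the estimated variables from the gross outcome value for the year.
        adjusted = outcome_value_per_year * (1 - deadweight) * (1 - displacement) * (1 - attribution)
        # Drop-off depletes the impact in each year after the first.
        adjusted *= (1 - drop_off) ** (t - 1)
        # Discount to present value.
        npv_outcomes += adjusted / (1 + discount_rate) ** t
    return npv_outcomes / investment

# Hypothetical example: a $100,000 investment expected to generate $60,000 of outcome
# value per year for three years (all figures and percentages are illustrative).
ratio = sroi_ratio(outcome_value_per_year=60_000, years=3, deadweight=0.20,
                   displacement=0.05, attribution=0.10, drop_off=0.10,
                   discount_rate=0.035, investment=100_000)
print(f"SROI ratio: {ratio:.2f} : 1")
```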
2. Impact Reporting and Investment Standards (IRIS+)
IRIS+ provides a standardized set of open-access metrics that can be used by an organization or its stakeholders to measure impact [67,79]. Although IRIS+ provides users with a standardized approach that allows for comparability between organizations, limitations exist pertaining to the applicability of the metrics [79]. For example, the case study by Mulloth and Rumi [79] (p. 17) found that “the current range of metrics are limited in scope, and [the organization’s founder] did not feel that the 40 applicable metrics captured the social impact of [their organization] in a comprehensive way, particularly because 32 of the 40 metrics were descriptive metrics of the organization’s characteristics, rather than its impact”. Further, according to Phillips and Johnson [50], the adoption of IRIS+ as an impact measurement tool has not received much uptake in some countries. For example, the authors noted that based on their study, the IRIS+ metrics were not used in the Canadian social impact investing community.
3. Global Reporting Initiative (GRI)
GRI is a complex reporting framework that provides disclosure guidelines across four categories (environmental, social, economic, and governance) to support the measurement process [24]. In an analysis of measurement models relevant to their case study, Mulloth and Rumi [79] (p. 5) noted the customizable nature of GRI standards, as organizations “were given free range to determine which topics to apply to their organizations”. However, GRI standards did not score well on the relevance criteria for a micro-enterprise development organization seeking to measure their social impact [79]. This could be due to the generalizability of the GRI standard, which is applicable across sectors, organization sizes, and business models [9,79]. Although some authors identified GRI as a measurement model because it provides measurement guidelines [24,28,79], this classification is debatable given GRI’s emphasis on reporting rather than on the measurement aspect of non-financial performance [67].
4. Social Accounting and Audit (SAA)
SAA follows a holistic approach to produce a report that demonstrates the social, environmental, and economic impacts of an organization [18,24]. The SAA process requires an organization to define its impact mission, engage stakeholders, and collect qualitative and quantitative data for reporting purposes [18]. Although regulations do not exist, reports can be audited [18] and SAA is a useful model for funding agencies seeking accounts of impact [48].
Table 1 provides the descriptive characteristics of each measurement model discussed, including the type of measurement tool employed, applicable sectors, scale of application, and the categorization of models, based on the descriptions presented by Grieco et al. [24] and Corvo et al. [28].
ToC is recognized as an analytic framework designed to document the change process of an intervention; it was commonly discussed across the literature reviewed in this study [3,20,30,59,67,71,79]. As ToC is a participatory framework that documents the impact pathway and captures the causal relationships between inputs and outcomes, stakeholder involvement is integral to the process [30,54,67]. Although ToC is closely tied to logic model development, it is not recognized as a measurement model [24,28] but rather as a tool to help establish the impact (or change) that is to be measured [3,30]. The logic model framework, by contrast, demonstrates the different stages of the change process by identifying the inputs, activities, outputs, and outcomes that lead to impact. It can serve as a foundation for measurement model development, or it can be used as a customizable measurement model, provided it includes indicators and a monitoring plan [2,3,24,28,47,51,64].
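To illustrate the distinction drawn above, the sketch below represents a logic model as a simple data structure: a change pathway from inputs to intended impact, with indicators and a monitoring plan attached so that it could serve as a customizable measurement model. The class, field names, and example program are our own illustration, not a framework taken from the literature.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal logic model: inputs -> activities -> outputs -> outcomes -> intended impact."""
    inputs: list
    activities: list
    outputs: list
    outcomes: list
    intended_impact: str
    # To function as a customizable measurement model, indicators and a monitoring
    # plan are attached to the change pathway.
    output_indicators: dict = field(default_factory=dict)
    outcome_indicators: dict = field(default_factory=dict)
    monitoring_plan: str = ""

training_program = LogicModel(
    inputs=["funding", "trainers"],
    activities=["deliver employment-skills workshops"],
    outputs=["participants trained"],
    outcomes=["participants gain stable employment"],
    intended_impact="reduced unemployment in the target community",
    output_indicators={"participants trained": "number of participants completing the program"},
    outcome_indicators={"participants gain stable employment": "share employed 12 months later"},
    monitoring_plan="collect participant and employer data quarterly with stakeholder input",
)
print(training_program.intended_impact)
```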

3.2.3. Indicators

The terms metrics and indicators are used interchangeably in the literature and tend to carry similar meanings. As noted in Table 1, some measurement models used the term metrics while others used indicators. For this study, we used the term indicator, which is defined as a means to measure the level of change resulting from a planned intervention designed to create a social impact. Some models provide a suite of indicators for practitioners to choose from (e.g., IRIS+), while others require the indicators to be developed by the practitioner, often with input from stakeholders (e.g., SROI). Regardless of the model, selecting the right indicators is an important part of the SIM process and is dependent on the availability of data [28,57].
The literature highlighted two different approaches when measuring impact: standardized and customized. Armstrong et al. [73] (p. 9) stated that a “universal metric is a method of measuring impact that is applicable and comparable in all situations”, offering reduced cost and complexity for the assessor. Despite these benefits, Lazzarini [81] (p. 135) pointed out that “standardized tools do not address the issue of causality, that is, whether changes in the target populations of these firms (if any) were caused by the actions that the firms themselves conceived and implemented”. The choice, then, is between measuring social impact through a standardized approach that promotes comparability of indicators and developing customized indicators based on stakeholder needs [20].
Ruff and Olsen [32] (p. 404) presented a slightly different perspective by proposing that SIM models could apply a “bounded flexibility” approach to indicator selection, which “creates comparability by focusing on the commonality of the construct itself rather than on differences in the indicators used to define and measure it”. This approach can be applied by selecting a common element of the construct being measured (e.g., “new jobs”), where the “organizations choose the definitions, counts, and measures that are most relevant to them from a prescribed (bounded) set of options” [32] (p. 404). The UN SDGs, for example, provide a good selection of common indicators that practitioners can use as a starting point when developing their own customized SIM indicators [32]. This provides the opportunity for comparability, the appeal of a standardized approach, while allowing practitioners and stakeholders to measure in a way that suits their unique needs. In other words, there is room for both the standardization and customization of indicators when measuring social impact.
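The sketch below illustrates the bounded-flexibility idea under assumptions of our own: a shared construct (“new jobs”, the example cited above) is paired with a prescribed set of acceptable measures, and each organization selects the option most relevant to it, preserving comparability at the construct level. The option names are hypothetical and not drawn from Ruff and Olsen [32].

```python
# A bounded set of acceptable ways to count the shared construct "new jobs"
# (option names are illustrative only).
BOUNDED_OPTIONS = {
    "new jobs": {
        "full_time_equivalents_created",
        "people_placed_in_paid_employment",
        "jobs_sustained_for_12_months",
    }
}

def select_measure(construct, chosen_option):
    """Accept an organization's chosen measure only if it falls within the bounded set."""
    if chosen_option not in BOUNDED_OPTIONS.get(construct, set()):
        raise ValueError(f"'{chosen_option}' is outside the bounded set for '{construct}'")
    return construct, chosen_option

print(select_measure("new jobs", "jobs_sustained_for_12_months"))
```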
Throughout the literature reviewed, SDG indicators were the most discussed. For example, Lazzarini [81] noted that SDG indicators have recently been used by entrepreneurs and impact investors seeking to measure impact. Additionally, Aiello et al. [7] stated that SDGs are a main reference point when measuring the social impact of research at the international level. Similarly, Alomoto et al. [4] mentioned SDGs as an international reference point used to guide the development of measurement models. These ideas were supported by Kah and Akenroye [21] (p. 384), who shed light on the common reference to SDGs, stating that “[f]rom a global perspective, there is a renewed opportunity for organisations to capture their contributions to the United Nations Sustainable Development Goals (UN SDGs) agenda”.
The literature did not provide extensive insights into the types of indicators used; however, two notable themes emerged. First, there was the concept of performance indicators, which were used in measurement models such as IRIS+ [79]. According to Tse and Warner [52], performance indicators are essential when measuring social impact bonds as a financing mechanism for social service delivery. Additionally, Calvo-Mora et al. [82] (p. 1270) explained that “performance indicators are internal measures that the organisation uses to supervise, understand, predict, and improve the performance of the management of its impact in society and to predict its perception”.
Second, there was the discussion around the use of output indicators and outcome indicators, which is related to logic model development [3,30,71]. Williams [53] (p. 8) stated that “‘outputs’ are transactional and retrospective in nature, ‘outcomes’ are transformational and future-oriented”. Specifically, outputs are quantifiable results of a program’s performance (e.g., number of participants enrolled in a program), whereas the term outcome is used to describe a change that occurred as a result of the output (e.g., the program participant benefited from their enrollment) [53]. Costa and Andreaus [47] provided a similar framing, suggesting that output indicators captured an immediate or short-term result of an input, whereas outcomes related to the medium or long-term results of the intervention. Additionally, the authors mentioned that output indicators could be directly accessed by the organization, making output data easier to gather, whereas outcome indicators required data from outside the organization (e.g., from a beneficiary) [47].
A study by Jackson and McManus [29] outlined the SROI process and provided examples from an organization that applied the model to measure the social impact of its program. The authors reinforced the importance of outcome indicators being defined by stakeholders. Additionally, they provided example indicators for both outputs and outcomes, demonstrating that outcome indicators were based on numerous output indicators combined [29]. This suggests that a single output does not necessarily lead to an outcome. For example, the number of participants receiving training (output) does not necessarily lead to the participant improving their sense of purpose (outcome). Therefore, when seeking to measure impact, although outcome indicators are more complex, they will provide more meaningful information on how the outputs have led to change.

3.2.4. Measurement Drivers

Although the reasons for measurement differed, the common themes that emerged across the literature were organized into two different categories. First, there were internal drivers, including the pursuit of continuous improvement for organizational performance. Second, there were external pressures, which included demands from funders, investors, and other stakeholders. Table 2 is organized by internal and external drivers, with details about how SIM supports practitioners. The external drivers are separated based on stakeholder group type. Stakeholder group one included measurement demand from funders and investors, whereas the second group highlighted pressure from other stakeholders.

3.2.5. Challenges to Measurement

Despite the different drivers and many benefits of SIM, challenges persist. Several articles reviewed in this study mentioned the challenges that evaluators and organizations faced when seeking to measure social impact. The most common challenges identified across the literature, with 15 articles each, were related to the complexity of measuring social impact and the absence of standardized measurement models and tools. Additional challenges discussed included the lack of data or data systems (11 articles) and insufficient financial, human, and time resources (7 articles).

3.2.6. Strategies to Improve Measurement Practices

Even though the strategies identified in the literature did not mirror all the challenges mentioned in Section 3.2.5, the themes that emerged could serve as useful insights to those seeking to measure or support the measurement of social impact.
The first theme was stakeholders, one that was commonly referenced in the literature, with Barraket and Yousefpour [46] emphasizing the importance of understanding stakeholder needs when measuring social impact. Costa and Pesci [20] suggested that this could be achieved by engaging with stakeholders and including their input when selecting indicators. Additionally, Dufour [64] identified stakeholder involvement as a key aspect of SIM, recognizing their role in informing the evaluation process through active participation. Lastly, a study by Mackenzie and Davies [87] followed a co-design approach to develop an indicator framework with stakeholders to measure the impact of food sharing initiatives.
The second theme was connected to the lack of data and data systems challenge. To address this, Ruff and Olsen [32] proposed the use of operational data when measuring social impact. However, Ruff and Olsen [32] (p. 404) stated that the usefulness of operational data when measuring social impact is dependent on “causal mechanisms, theory, and alternative explanations”, all of which are part of the SROI model [67]. This means that while operational data can be useful for measuring social impact, not all operational data will be applicable. As Ruff and Olsen [32] suggested, the selection of operational data used in the measurement process must be well thought out, so that it captures causality and deadweight (i.e., outcome that would have occurred without the intervention) [30].
The third theme, related to the lack of resources challenge, was organizations facing a lack of capacity to measure social impact effectively [2]. Pawluczuk et al. [88] recognized that additional time is required for the evaluation process to be conducted effectively. It is suggested that specialized external experts can help organizations with capacity challenges, although this will require additional financial resources. Ruff and Olsen [32] (p. 403) made the case for customized measurement through the engagement of “a cadre of impact analysts capable of interpreting reports”. Additionally, Polonsky et al. [89] (p. 94) suggested that “to maximise transparency and impartiality, one potential solution is that social impact could be assessed by independent third parties, allowing these evaluation organisations to develop specialised skills in this area and spread this expertise across NPOs and funding bodies”. Whether an organization develops the internal capacity to measure social impact or brings in external parties to support its efforts, increasing the capacity to collect, analyze, and interpret data will improve SIM.
While important, access to data and the ability to manage them are only part of the data challenge. To compute indicators, practitioners need access to the right kind of data. The fourth theme that emerged was the quality of data and its usefulness. The value of storytelling to enhance the measurement of social impact was referenced across the literature [50,89]. However, qualitative data alone are not enough. Polonsky et al. [89] identified storytelling as a qualitative measure to supplement quantitative data when measuring the social value created by an initiative. Ferraro and Hanauer [70], although hesitant, weighed in on the quantitative versus qualitative debate as it relates to causality. The authors stated that “it would be hard (but not impossible) to establish a credible claim of causality using only qualitative data, whereas we believe one can establish a credible claim with only quantitative data” [70] (p. 505). In other words, practitioners should use mixed methods rather than relying solely on qualitative data.
The final theme was SROI, the measurement model most commonly cited in SIM literature. As such, it is a model that can be used as a starting place for practitioners looking to implement a model for SIM. While SROI has its benefits (i.e., promotes engagement with stakeholders, includes quantitative and qualitative analysis, useful organizational learning tool), it also has some drawbacks [29,60]. Most notably, it is complex, time consuming, resource intensive, and it does not guarantee comparability. Despite these drawbacks, SROI remains a viable measurement model for those seeking a solution that offers uniformity to its SIM approach, with the flexibility to customize indicators based on idiosyncratic needs [30,79]. It is important to note that the SROI model does not offer indicator uniformity. Instead, it is the responsibility of the practitioner to develop or select indicators that align with the social impact they are seeking to measure.

4. Discussion and Conclusions

This study conducted an SLR to identify and analyze relevant research published in the last 10 years to answer the following research question: What are the most discussed practices for measuring post-intervention social impact of a program or a project? Specifically, this research focused on post-intervention impact measurement at the project and program level. We found that existing literature spanned across multiple disciplines, and empirical research has been conducted across the globe with greater focus on European and North American countries. Our thematic analysis provided an understanding of key definitions, commonly used measurement models, different types of indicators, measurement drivers, challenges to impact measurement, and strategies to improve measurement practices.
By defining social impact, we were able to identify key concepts that related to the topic. Primarily, these included deliberate intervention, causality, and positive outcomes related to social, environmental, and economic well-being. Further, we narrowed down terminology that referred to the process of measuring social impact by using the term SIM, which differentiates the practice in question from SIA, a planning tool designed to proactively assess impact. By applying a thematic analysis to the selected literature, this study identified strategies that practitioners can follow to enhance their evaluation efforts and areas for future research to further advance the SIM field.

4.1. Implications for Practitioners

The results of this research are relevant for practitioners who wish to measure the post-intervention social impact of planned projects and programs. To improve SIM practices, our study suggests that practitioners should consider the following:
  • Engaging stakeholders throughout every stage of the SIM process, most importantly in the planning stages, in particular when it comes to identifying outcome indicators.
  • Gaining an understanding of data that are available and working to develop a plan on how to use this data to measure post-intervention social impact, as well as identifying any data gaps that exist.
  • Working to increase organizational capacity in the competencies necessary to gather, analyze, and manage useful data, or engaging with third-party experts.
  • Collecting both quantitative and qualitative data, given the relevance of both types when measuring social impact.
  • Exploring SROI for use, or adapting elements of it to use, as an SIM model. A comprehensive guide has been published for practitioners seeking to utilize the SROI measurement model [80]. As there may not be a “one-size-fits-all” measurement model, practitioners may choose to adapt the SROI model to suit their needs or seek out an alternative model to measure social impact.
  • Using the SDG indicators as they stand or as a starting point for developing their own customized indicators. While comparability and the ability to aggregate data are important, practitioners and stakeholders may require flexibility to customize indicators that represent unique contexts when measuring social impact.
  • When selecting or developing indicators, practitioners should ensure their indicators include both output and outcome indicators.
  • The presence of ToC across the literature suggests its relevance in the SIM field. ToC is especially important to practitioners applying the SROI model as it is applied in the early stages of the measurement process.
  • Funders and policymakers often have a significant impact on whether organizations have the capacity to be able to spend the time and resources needed to effectively design and implement their chosen SIM model. As such, it is essential, should they hope to see social impact measured effectively, that funders and policymakers invest in the capacity of organizations to implement the above recommendations.

4.2. Limitations and Future Research

This study revealed gaps in SIM literature. Additionally, the limitations of this study left some topics in the SIM field unexplored. For example, by limiting the literature search to documents labelled as articles and reviews, book chapters and grey literature were not captured. Further, by restricting the search to English-language sources, publications in other languages were not included in the article selection process. The exclusion criteria for article selection were established to examine the current state of the field of SIM. This scoping naturally limited the breadth and depth of articles included in this review. The study was also limited by the deductive coding categories and the scoping of the inductive coding using sub-questions.
As such, the following suggestions are presented for consideration as areas for future research:
  • The search criteria used in this SLR resulted in the absence of study areas from certain countries and continents. For example, while the articles used in this SLR included study areas from across the globe, countries in Central America and South America were not represented, despite no geographic exclusions being applied to the search criteria. Future research can help to address this gap by including non-English sources in the original article search or by including specific geographic regions as keywords in the string search.
  • To source a wide variety of literature from multiple disciplines and geographic locations, the articles included in the screening process for this SLR were sourced from ProQuest and Scopus. While the databases used in this study are comparable to WoS with respect to the number of journals indexed and categories of research covered, future SLRs on SIM should consider increasing the number of document sources to include additional databases like WoS. By including multiple databases in the initial search, future research may capture additional research disciplines or countries, adding to the diversity of the study.
  • The keywords searched when sourcing documents and the use of exclusion criteria during the article selection process posed limitations to the breadth of the literature reviewed in this study. For example, this SLR focused on the broader impact of a project or program intervention on society and the environment. Thus, specific areas such as health and well-being impact, which are relevant to social impact, were not included within the keyword search of this study. Impact measurement plays an important role in determining the success of healthcare projects, programs, and organizations [90]. As such, studies in the field of health offer valuable contributions to the academic discussion on SIM and merit being explored through future research.
  • By using sub-questions for our thematic analysis, we did not capture certain topics. For example, we did not analyze content related to empirical evidence on the shortcomings of the measurement models used in practice. Further, the thematic analysis of measurement models was based on the models that appeared with the greatest frequency, leading to the less common measurement models not being explored in this study. Future research could address these gaps.
  • Despite a large variety of measurement models identified in this study, the literature reviewed did not present a wide selection of indicators commonly used in practice to measure social impact. Which indicators are most applied in practice? Are outcome indicators adequately included in practice? Do organizations select from existing open-source indicators or are indicators developed internally? Are stakeholders engaged in the indicator selection process? To address the gap related to indicators and support those seeking standardization, future research can be carried out to inventory the indicators used in practice.
  • While this study identified many different measurement models across the literature, empirical evidence on the level of social impact achieved through different projects and programs was lacking. Given that SROI is the most common model used to measure social impact, is there evidence that it effectively measures the level of progress and achievement of the intervention that is designed to lead to social impact? How have deliberate interventions created social impact? Does social impact at the individual, organization, or community level lead to systemic change [91]? Roy et al. [92] raised similar points when calling for an increase in empirical research to examine the causal relationship between social enterprise activities and improved health and well-being.
  • Although theory was not thematically analyzed as part of this study, the descriptive analysis provided some insights into the theoretical framings used across the literature reviewed. Organizational theories were most common, often used to examine the adoption of SIM practices by organization types. Theories that appeared with less frequency warrant further investigation. Specifically, the general theory of social impact accounting that was proposed by Nicholls [93] and cited in Kim Man Lee et al. [55], as it incorporates important themes to SIM such as stakeholders, types of data, and materiality. Which theories can be used to better understand SIM? How does SIM differ from financial measurement and organization-centric non-financial measurement?
  • The harmonization of sustainability reporting standards is a contemporary topic that can be informed by a maturing SIM field. While this study established the importance of engaging stakeholders throughout the measurement process, the development of future sustainability reporting standards appears to be focused more narrowly on investors’ needs [17]. The emergence of ESG disclosures is an example of investor-focused reporting that may not take a double materiality approach to capture both the risk of sustainability issues on the organization and the organization’s impact on the economy, society, and environment [17,94]. How can the strategies to improve measurement practices identified in this study contribute to the academic discourse on sustainability reporting standards? Are sustainability reporting standards capturing the risk and impact of projects and programs that are material to the organization and stakeholders? How can theory of change and logic models, concepts inherent to SIM, contribute to the integration of double materiality in sustainability reporting standards [15]?
  • The terms “evaluation”, “assessment”, and “measurement” are used interchangeably in social impact literature; however, they do not hold equal meaning [3]. Naturally, the scoping of this SLR may have led to the omission of articles on complementary topics from multiple disciplines. How does semantics impact our understanding of a field of research? How can the use of certain terminology hinder a multidisciplinary approach to research? How can studies on carbon accounting [95] and accounting for sustainable development [96] help advance the quality of SIM in practice?
  • Given the sheer volume of measurement models that exist and the debate on the possibility of an ideal way to measure social impact [21,97], research gaps remain in the SIM field. While there may not be a “one-size-fits-all” approach to SIM, standardization of measurement models and indicators could provide the ability to calculate collective impact towards transformational goals like SDGs. Is it possible to develop a deeper understanding of which measurement model is most suitable for an industry, organization, or program by examining the characteristics of organizations that use a specific measurement model? Answering this question will require empirical studies across sectors and organization types, indicating an important approach for future SIM research.

4.3. Contributions and Concluding Thoughts

This study contributes to the growing field of SIM. Specifically, it complements the work by Corvo et al. [28] and Grieco et al. [24], who analyzed and categorized existing measurement models. While similarities exist in these studies, our research provides additional insights on post-intervention measurement, which aims to support practitioners in better understanding the impact that programs and projects have on society and the environment. Additionally, our descriptive analysis shows that the literature in this field is growing and comes from a wide range of disciplines. Lastly, our descriptive analysis identified a variety of theories applied across the studies, with organizational theories being the most common.
Our thematic analysis attempts to demonstrate the nuances of SIM and to identify a pathway forward for practitioners as they work to measure the social impact of projects and programs. Through this approach, we placed emphasis on the practitioner by identifying the challenges they face, namely the complexity of measuring social impact; the absence of standardized measurement models and tools; the lack of data or data systems; and insufficient financial, human, and time resources. We then identified strategies to improve SIM in practice, specifically: (i) engage stakeholders throughout the process, (ii) mobilize existing operational data, (iii) increase measurement capacity, and (iv) use both qualitative and quantitative data. In addition to presenting implications for practitioners, we provide direction for future research in this field. As we approach the 2030 deadline for the SDGs, an accurate understanding of our progress depends on the availability of data and the quality of measurement practices.

Author Contributions

Study conceptualization, I.D. and A.C.; methodology, L.F., A.C. and I.D.; data collection, L.F.; formal analysis, L.F.; writing—original draft preparation, L.F. and I.D.; writing—review and editing, L.F., A.C. and I.D.; supervision, A.C. and I.D.; project administration, I.D.; funding acquisition, I.D. and A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Employment and Social Development Canada (ESDC), Contract No. 4500009111. We would like to thank ESDC and the University of Waterloo Youth and Innovation Project team for their support.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Published academic articles were analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Journals of publications for the articles included in the SLR.
Journal Name (# of Articles) | Journal Name (# of Articles)
American Journal of Evaluation (1) | Journal of Rural Studies (1)
Annals for Istrian and Mediterranean Studies (1) | Journal of Small Business and Enterprise Development (1)
Annals of Global Health (1) | Journal of Social Entrepreneurship (2)
Annual Review of Environment and Resources (1) | Journal of Sport Management (1)
Australian Journal of Public Administration (1) | Journal of the International Society for Southeast Asian Agricultural Sciences (1)
Canadian Journal of Administrative Sciences (1) | Journal of Urban Affairs (1)
Canadian Journal of Nonprofit and Social Economy Research (1) | Land (1)
Cities (1) | Management and Marketing (1)
Community Psychology in Global Perspective (1) | Marketing Intelligence and Planning (1)
Conservation Biology (1) | Media and Communication (1)
Cultural Trends (1) | MethodsX (1)
Development Engineering (1) | Michigan Journal of Community Service Learning (1)
Development Southern Africa (1) | Nonprofit and Voluntary Sector Quarterly (1)
Entrepreneurship Theory and Practice (1) | Public Management Review (1)
Environmental Impact Assessment Review (2) | Qualitative Research in Accounting and Management (1)
Evaluation and Program Planning (1) | Quality—Access to Success (1)
Evidence and Policy (1) | RAUSP Management Journal (1)
Globalization and Health (1) | Renewable Agriculture and Food Systems (1)
Habitat International (1) | Research Evaluation (1)
Historical Social Research (1) | Research in International Business and Finance (1)
IPE Journal of Management (1) | Science, Technology, and Human Values (1)
Irish Journal of Sociology (1) | Social and Environmental Accountability Journal (1)
Journal of Business Ethics (1) | Social Enterprise Journal (1)
Journal of Cleaner Production (1) | Social Indicators Research (1)
Journal of Cultural Heritage Management and Sustainable Development (1) | Sustainability (5)
Journal of Environmental Planning and Management (1) | Sustainability Accounting, Management, and Policy Journal (1)
Journal of Extension (1) | The Foundation Review (1)
Journal of International Agricultural and Extension Education (1) | The Journal of Arts Management, Law, and Society (1)
Journal of Librarianship and Information Science (1) | Total Quality Management and Business Excellence (1)
Journal of Management (1) | Voluntas (1)
Journal of Public Budgeting, Accounting, and Financial Management (1) | World Development (1)
Table A2. Measurement models found in SLR articles.
Measurement Model
1. Adaptive social impact management | 35. Multi-dimensional controlling model
2. Added value and scalability model | 36. Multiple-constituencies approach
3. ASIRPA | 37. Ongoing assessment of social impacts (OASIS)
4. Balanced scorecard | 38. Outcome evaluation approach
5. Best available charitable option (BACO) | 39. Payback framework
6. Business Impact Assessment (BIA) | 40. Post-project reflection
7. CFICE—Accounting for impact | 41. Programme evaluation
8. Charity analysis framework | 42. Public action value chain (SGMAP)
9. Co-axial matrix | 43. Public value mapping (PVM)
10. Community capitals framework | 44. Public value scorecard
11. Community Impact (CI) Evaluation | 45. Quality evaluation of creative placemaking
12. Community impact evaluation (CIE) | 46. Question-and-evidence matrix
13. Community score card | 47. RAPID Outcome Assessment
14. Composite measure for the social values of sport (CMSVS) | 48. RCT-based evaluation
15. Conceptual assessment framework | 49. REF (H2020)
16. Contribution analysis | 50. Ripple effect mapping
17. Cost benefit analysis (CBA) | 51. SAFA—Sustainability Assessment of Food and Agriculture Systems
18. Data-collection tool development | 52. SHARE IT
19. Kirkpatrick model | 53. SIAMPI
20. Ecological footprint | 54. SIMPLE
21. Economic survival framework | 55. Social accounting and audit (SAA)
22. European Foundation for Quality Management (EFQM) | 56. Social enterprise balanced scorecards
23. Eco-Management and Audit Scheme (EMAS) | 57. Social footprint measurement
24. Evaluation grids | 58. Social impact assessment (SIA)
25. Evaluative framework | 59. Social impact measurement model
26. Framework for participatory impact assessment | 60. Social performance framework
27. Fuzzy comprehensive evaluation (FCE) | 61. Soft outcome universal learning (SOUL)
28. Global impact investing rating system (GIIRS) | 62. Social return on investment (SROI)
29. GPS for social impact | 63. Structural equation modelling
30. Global Reporting Initiative (GRI) | 64. Sustainability quick-check model
31. Impact reporting and investment standards (IRIS) | 65. Sustainable balanced scorecard
32. Impact value chain | 66. TBL accounting
33. Logic model development | 67. The B impact rating system
34. Measuring impact framework | 68. VIS

References

  1. Arvidson, M.; Lyon, F. Social Impact Measurement and Non-Profit Organisations: Compliance, Resistance, and Promotion. Volunt. Int. J. Volunt. Nonprofit Organ. 2014, 25, 869–886. [Google Scholar] [CrossRef]
  2. Frantzen, L.; Solomon, J.; Hollod, L. Partner-Centered Evaluation Capacity Building: Findings from a Corporate Social Impact Initiative. Found. Rev. 2018, 10, 7–25. [Google Scholar] [CrossRef]
  3. Vermeulen, M.; Maas, K. Building Legitimacy and Learning Lessons: A Framework for Cultural Organizations to Manage and Measure the Social Impact of Their Activities. J. Arts Manag. Law Soc. 2020, 51, 97–112. [Google Scholar] [CrossRef]
  4. Alomoto, W.; Niñerola, A.; Pié, L. Social Impact Assessment: A Systematic Review of Literature. Soc. Indic. Res. 2021, 161, 225–250. [Google Scholar] [CrossRef]
  5. Sachs, J.; Kroll, C.; Lafortune, G.; Fuller, G.; Woelm, F. Sustainable Development Report 2022; Cambridge University Press: Cambridge, UK, 2022; ISBN 978-1-00-921008-9. [Google Scholar]
  6. Elauria, M.; Manilay, A.; Abrigo, G.N.A.; Medina, S.M.; Reyes, R.B. Socio-Economic and Environmental Impacts of the Conservation Farming Village Project in Upland Communities of La Libertad, Negros Oriental, Philippines. J. Int. Soc. Southeast Asian Agric. Sci. 2017, 23, 45–56. [Google Scholar]
  7. Aiello, E.; Donovan, C.; Duque, E.; Fabrizio, S.; Flecha, R.; Holm, P.; Molina, S.; Oliver, E. Effective Strategies That Enhance the Social Impact of Social Sciences and Humanities Research. Evid. Policy A J. Res. Debate Pract. 2021, 17, 131–146. [Google Scholar] [CrossRef]
  8. Nguyen, L.; Szkudlarek, B.; Seymour, R.G. Social impact measurement in social enterprises: An interdependence perspective. Can. J. Adm. Sci. Rev. Can. Des Sci. De L’administration 2015, 32, 224–237. [Google Scholar] [CrossRef]
  9. Trautwein, C. Sustainability Impact Assessment of Start-Ups–Key Insights on Relevant Assessment Challenges and Approaches Based on an Inclusive, Systematic Literature Review. J. Clean. Prod. 2021, 281, 125330. [Google Scholar] [CrossRef]
  10. Fiandrino, S.; Scarpa, F.; Torelli, R. Fostering Social Impact Through Corporate Implementation of the SDGs: Transformative Mechanisms Towards Interconnectedness and Inclusiveness. J. Bus. Ethics 2022, 180, 959–973. [Google Scholar] [CrossRef]
  11. Phillips, F.; Libby, R.; Libby, P.A.; Mackintosh, B. Fundamentals of Financial Accounting, 5th Canadian ed.; McGraw-Hill Ryerson Limited: Whitby, ON, Canada, 2018. [Google Scholar]
  12. Uyar, A. Evolution of Corporate Reporting and Emerging Trends. J. Corp. Account. Financ. 2016, 27, 27–30. [Google Scholar] [CrossRef]
  13. Elkington, J. Cannibals with Forks: The Triple Bottom Line of 21st Century Business; Conscientious Commerce; New Society Publishers: Gabriola Island, BC, Canada; Stony Creek, CT, USA, 1999; ISBN 978-0-86571-392-5. [Google Scholar]
  14. Laine, M.; Tregidga, H.; Unerman, J. Sustainability Accounting and Accountability, 3rd ed.; Routledge: New York, NY, USA, 2021; ISBN 978-1-03-202880-4. [Google Scholar]
  15. Ng, A.; Yorke, S.; Nathwani, J. Enforcing Double Materiality in Global Sustainability Reporting for Developing Economies: Reflection on Ghana’s Oil Exploration and Mining Sectors. Sustainability 2022, 14, 9988. [Google Scholar] [CrossRef]
  16. Rawhouser, H.; Cummings, M.; Newbert, S.L. Social Impact Measurement: Current Approaches and Future Directions for Social Entrepreneurship Research. Entrep. Theory Pract. 2019, 43, 82–115. [Google Scholar] [CrossRef]
  17. Adams, C.A.; Abhayawansa, S. Connecting the COVID-19 Pandemic, Environmental, Social and Governance (ESG) Investing and Calls for ‘Harmonisation’ of Sustainability Reporting. Crit. Perspect. Account. 2022, 82, 102309. [Google Scholar] [CrossRef]
  18. Hadad, S.; Gauca, O. (Drumea) Social Impact Measurement in Social Entrepreneurial Organizations. Manag. Mark. 2014, 9, 119–136. [Google Scholar]
  19. Maas, K.; Liket, K. Social Impact Measurement: Classification of Methods. In Environmental Management Accounting and Supply Chain Management; Burritt, R., Schaltegger, S., Bennett, M., Pohjola, T., Csutora, M., Eds.; Eco-Efficiency in Industry and Science; Springer Netherlands: Dordrecht, The Netherlands, 2011; pp. 171–202. ISBN 978-94-007-1390-1. [Google Scholar]
  20. Costa, E.; Pesci, C. Social Impact Measurement: Why Do Stakeholders Matter? Sustain. Account. Manag. Policy J. 2016, 7, 99–124. [Google Scholar] [CrossRef]
  21. Kah, S.; Akenroye, T. Evaluation of Social Impact Measurement Tools and Techniques: A Systematic Review of the Literature. Soc. Enterp. J. 2020, 16, 381–402. [Google Scholar] [CrossRef]
  22. McLoughlin, J.; Kaminski, J.; Sodagar, B.; Khan, S.; Harris, R.; Arnaudo, G.; Brearty, S.M. A Strategic Approach to Social Impact Measurement of Social Enterprises: The SIMPLE Methodology. Soc. Enterp. J. 2009, 5, 154–178. [Google Scholar] [CrossRef]
  23. Gallou, E.; Fouseki, K. Applying Social Impact Assessment (SIA) Principles in Assessing Contribution of Cultural Heritage to Social Sustainability in Rural Landscapes. J. Cult. Herit. Manag. Sustain. Dev. 2019, 9, 352–375. [Google Scholar] [CrossRef]
  24. Grieco, C.; Michelini, L.; Iasevoli, G. Measuring Value Creation in Social Enterprises: A Cluster Analysis of Social Impact Assessment Models. Nonprofit Volunt. Sect. Q. 2015, 44, 1173–1193. [Google Scholar] [CrossRef]
  25. Taylor, A.; Taylor, M. Factors Influencing Effective Implementation of Performance Measurement Systems in Small and Medium-Sized Enterprises and Large Firms: A Perspective from Contingency Theory. Int. J. Prod. Res. 2014, 52, 847–866. [Google Scholar] [CrossRef]
  26. Mura, M.; Longo, M.; Micheli, P.; Bolzani, D. The Evolution of Sustainability Measurement Research. Int. J. Manag. Rev. 2018, 20, 661–695. [Google Scholar] [CrossRef]
  27. Vo, A.T.; Christie, C.A. Where Impact Measurement Meets Evaluation: Tensions, Challenges, and Opportunities. Am. J. Eval. 2018, 39, 383–388. [Google Scholar] [CrossRef]
  28. Corvo, L.; Pastore, L.; Manti, A.; Iannaci, D. Mapping Social Impact Assessment Models: A Literature Overview for a Future Research Agenda. Sustainability 2021, 13, 4750. [Google Scholar] [CrossRef]
  29. Jackson, A.; McManus, R. SROI in the Art Gallery; Valuing Social Impact. Cult. Trends 2019, 28, 2–29. [Google Scholar] [CrossRef]
  30. Lombardo, G.; Mazzocchetti, A.; Rapallo, I.; Tayser, N.; Cincotti, S. Assessment of the Economic and Social Impact Using SROI: An Application to Sport Companies. Sustainability 2019, 11, 3612. [Google Scholar] [CrossRef]
  31. Joly, P.-B.; Gaunand, A.; Colinet, L.; Larédo, P.; Lemarié, S.; Matt, M. ASIRPA: A Comprehensive Theory-Based Approach to Assessing the Societal Impacts of a Research Organization. Res. Eval. 2015, 24, 440–453. [Google Scholar] [CrossRef]
  32. Ruff, K.; Olsen, S. The Need for Analysts in Social Impact Measurement: How Evaluators Can Help. Am. J. Eval. 2018, 39, 402–407. [Google Scholar] [CrossRef]
  33. Nigri, G.; Michelini, L. A Systematic Literature Review on Social Impact Assessment: Outlining Main Dimensions and Future Research Lines. In International Dimensions of Sustainable Management: Latest Perspectives from Corporate Governance, Responsible Finance and CSR; Schmidpeter, R., Capaldi, N., Idowu, S.O., Stürenberg Herrera, A., Eds.; CSR, Sustainability, Ethics & Governance; Springer International Publishing: Cham, Switzerland, 2019; pp. 53–67. ISBN 978-3-030-04819-8. [Google Scholar]
  34. Fink, A. Conducting Research Literature Reviews: From the Internet to Paper, 2nd ed.; Sage Publications: Thousand Oaks, CA, USA, 2005; ISBN 978-1-4129-0904-4. [Google Scholar]
  35. Parris, D.L.; Peachey, J.W. A Systematic Literature Review of Servant Leadership Theory in Organizational Contexts. J. Bus. Ethics 2013, 113, 377–393. [Google Scholar] [CrossRef]
  36. Popay, J.; Roberts, H.; Sowden, A.; Petticrew, M.; Arai, L.; Rodgers, M.; Britten, N.; Roen, K.; Duffy, S. Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. ESRC Methods Programme 2006, 15, 47–71. [Google Scholar]
  37. Elsevier Search-How Scopus Works. Available online: http://www.elsevier.com/solutions/scopus/how-scopus-works/search (accessed on 19 April 2022).
  38. ProQuest Advanced Search-ProQuest. Available online: http://www.proquest.com/advanced?accountid=14906 (accessed on 19 April 2022).
  39. Elsevier Content-How Scopus Works. Available online: http://www.elsevier.com/solutions/scopus/how-scopus-works/content (accessed on 19 April 2022).
  40. Gusenbauer, M.; Haddaway, N. Which Academic Search Systems Are Suitable for Systematic Reviews or Meta-Analyses? Evaluating Retrieval Qualities of Google Scholar, PubMed and 26 Other Resources. Res. Synth. Methods 2020, 11, 181–217. [Google Scholar] [CrossRef]
  41. Searcy, C. Corporate Sustainability Performance Measurement Systems: A Review and Research Agenda. J. Bus. Ethics 2012, 107, 239–253. [Google Scholar] [CrossRef]
  42. Covidence Covidence—Better Systematic Review Management. Available online: https://www.covidence.org/ (accessed on 19 April 2022).
  43. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; The PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef] [PubMed]
  44. Xiao, Y.; Watson, M. Guidance on Conducting a Systematic Literature Review. J. Plan. Educ. Res. 2017, 39, 93–112. [Google Scholar] [CrossRef]
  45. Gioia, D.A.; Corley, K.G.; Hamilton, A.L. Seeking Qualitative Rigor in Inductive Research: Notes on the Gioia Methodology. Organ. Res. Methods 2013, 16, 15–31. [Google Scholar] [CrossRef]
  46. Barraket, J.; Yousefpour, N. Evaluation and Social Impact Measurement amongst Small to Medium Social Enterprises: Process, Purpose and Value. Aust. J. Public Adm. 2013, 72, 447–458. [Google Scholar] [CrossRef]
  47. Costa, E.; Andreaus, M. Social Impact and Performance Measurement Systems in an Italian Social Enterprise: A Participatory Action Research Project. J. Public Budg. Account. Financ. Manag. 2020, 33, 289–313. [Google Scholar] [CrossRef]
  48. Liston-Heyes, C.; Liu, G. To Measure or Not to Measure? An Empirical Investigation of Social Impact Measurement in UK Social Enterprises. Public Manag. Rev. 2021, 23, 687–709. [Google Scholar] [CrossRef]
  49. Caselli, D. Did You Say “Social Impact”? Welfare Transformations, Networks of Expertise, and the Financialization of Italian Welfare. Hist. Soc. Res. Hist. Sozialforschung 2020, 45, 140–160. [Google Scholar]
  50. Phillips, S.D.; Johnson, B. Inching to Impact: The Demand Side of Social Impact Investing. J. Bus. Ethics 2021, 168, 615–629. [Google Scholar] [CrossRef]
  51. Tsotsotso, K. Is Programme Evaluation the Same as Social Impact Measurement? J. Soc. Entrep. 2021, 12, 155–174. [Google Scholar] [CrossRef]
  52. Tse, A.E.; Warner, M.E. The Razor’s Edge: Social Impact Bonds and the Financialization of Early Childhood Services. J. Urban Aff. 2020, 42, 816–832. [Google Scholar] [CrossRef]
  53. Williams, J.W. “Let’s Not Have the Perfect Be the Enemy of the Good”: Social Impact Bonds, Randomized Controlled Trials, and the Valuation of Social Programs. Sci. Technol. Hum. Values 2021, 48, 91–114. [Google Scholar] [CrossRef] [PubMed]
  54. Belcher, B.M.; Davel, R.; Claus, R. A Refined Method for Theory-Based Evaluation of the Societal Impacts of Research. MethodsX 2020, 7, 100788. [Google Scholar] [CrossRef] [PubMed]
  55. Kim Man Lee, E.; Ho, L.; Kee, C.H.; Kwan, C.H.; Ng, C.H. Social Impact Measurement in Incremental Social Innovation. J. Soc. Entrep. 2021, 12, 69–86. [Google Scholar] [CrossRef]
  56. Hughes, K.; Morgan, S.; Baylis, K.; Oduol, J.; Smith-Dumont, E.; Vågen, T.-G.; Kegode, H. Assessing the Downstream Socioeconomic Impacts of Agroforestry in Kenya. World Dev. 2020, 128, 104835. [Google Scholar] [CrossRef]
  57. Bottero, M.; Bragaglia, F.; Caruso, N.; Datola, G.; Dell’Anna, F. Experimenting Community Impact Evaluation (CIE) for Assessing Urban Regeneration Programmes: The Case Study of the Area 22@ Barcelona. Cities 2020, 99, 102464. [Google Scholar] [CrossRef]
  58. Wilson, J.; Tyedmers, P.; Grant, J. Measuring Environmental Impact at the Neighbourhood Level. J. Environ. Plan. Manag. 2013, 56, 42–60. [Google Scholar] [CrossRef]
  59. Njah, J.; Hansoti, B.; Adeyami, A.; Bruce, K.; O’Malley, G.; Gugerty, M.K.; Chi, B.H.; Lubimbi, N.; Steen, E.; Stampfly, S.; et al. Measuring for Success: Evaluating Leadership Training Programs for Sustainable Impact. Ann. Glob. Health 2021, 87, 63. [Google Scholar] [CrossRef]
  60. Ricciuti, E.; Bufali, M.V. The Health and Social Impact of Blood Donors Associations: A Social Return on Investment (SROI) Analysis. Eval. Program Plan. 2019, 73, 204–213. [Google Scholar] [CrossRef]
  61. Lee, S.; Cornwell, T.; Babiak, K. Developing an Instrument to Measure the Social Impact of Sport: Social Capital, Collective Identities, Health Literacy, Well-Being and Human Capital. J. Sport Manag. 2013, 27, 24–42. [Google Scholar] [CrossRef]
  62. Kanter, R.M.; Brinkerhoff, D. Organizational Performance: Recent Developments in Measurement. Annu. Rev. Sociol. 1981, 7, 321–349. [Google Scholar] [CrossRef]
  63. Burdge, R.J.; Vanclay, F. Social Impact Assessment: A Contribution to the State of the Art Series. Impact Assess. 1996, 14, 59–86. [Google Scholar] [CrossRef]
  64. Dufour, B. Social Impact Measurement: What Can Impact Investment Practices and the Policy Evaluation Paradigm Learn from Each Other? Res. Int. Bus. Financ. 2019, 47, 18–30. [Google Scholar] [CrossRef]
  65. Pawluczuk, A.; Hall, H.; Webster, G.; Smith, C. Youth Digital Participation: Measuring Social Impact. J. Librariansh. Inf. Sci. 2020, 52, 3–15. [Google Scholar] [CrossRef]
  66. Clark, C.; Rosenzweig, W.; Long, D.; Olsen, S. Double Bottom Line Project Report: Assessing Social Impact in Double Bottom Line Ventures; University of California: Berkeley, CA, USA, 2004; p. 70. [Google Scholar]
  67. Cerioni, E.; Marasca, S. The Methods of Social Impact Assessment: The State of the Art and Limits of Application. Calitatea 2021, 22, 29–36. [Google Scholar]
  68. Ebrahim, A.; Rangan, V. The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance. Harv. Bus. Sch. Work. Pap. 2010, 1–53. [Google Scholar] [CrossRef]
  69. World Commission on Environment and Development. Our Common Future; Oxford Paperbacks; Oxford University Press: Oxford, UK; New York, NY, USA, 1987; ISBN 978-0-19-282080-8. [Google Scholar]
  70. Ferraro, P.J.; Hanauer, M.M. Advances in Measuring the Environmental and Social Impacts of Environmental Programs. Annu. Rev. Environ. Resour. 2014, 39, 495–517. [Google Scholar] [CrossRef]
  71. Jones, A.; Valero-Silva, N. Social Impact Measurement in Social Housing: A Theory-Based Investigation into the Context, Mechanisms and Outcomes of Implementation. Qual. Res. Account. Manag. 2021, 18, 361–389. [Google Scholar] [CrossRef]
  72. Stiglitz, J.; Sen, A.; Fitoussi, J. Report of the Commission on the Measurement of Economic Performance and Social Progress (CMEPSP); Government of France: Paris, France, 2009. [Google Scholar]
  73. Armstrong, A.G.; Mattson, C.A.; Lewis, R.S. Factors Leading to Sustainable Social Impact on the Affected Communities of Engineering Service Learning Projects. Dev. Eng. 2021, 6, 100066. [Google Scholar] [CrossRef]
  74. Meringolo, P.; Volpi, C.; Chiodini, M. Community Impact Evaluation. Telling a Stronger Story. Community Psychol. Glob. Perspect. 2019, 5, 85–106. [Google Scholar] [CrossRef]
  75. Social Value UK SROI Guide-Guidance, Standards and Framework. Available online: https://socialvalueuk.org/resources/sroi-guide-impact-map-worked-example/ (accessed on 31 May 2022).
  76. Global Impact Investing Network (GIIN) Welcome to IRIS+ System | the Generally Accepted System for Impact Investors to Measure, Manage, and Optimize Their Impact. Available online: https://iris.thegiin.org/ (accessed on 31 May 2022).
  77. Global Reporting Initiative (GRI) GRI–Home. Available online: https://www.globalreporting.org/ (accessed on 31 May 2022).
  78. Dey, C.; Gibbon, J. Moving on from Scaling up: Further Progress in Developing Social Impact Measurement in the Third Sector. Soc. Environ. Account. J. 2017, 37, 66–72. [Google Scholar] [CrossRef]
  79. Mulloth, B.; Rumi, S. Challenges to Measuring Social Value Creation through Social Impact Assessments: The Case of RVA Works. J. Small Bus. Enterp. Dev. 2021, 29, 528–549. [Google Scholar] [CrossRef]
  80. Nicholls, J.; Lawlor, E.; Neitzert, E.; Goodspeed, T. A Guide to Social Return on Investment; Cabinet Office: London, UK, 2012. [Google Scholar]
  81. Lazzarini, S.G. The Measurement of Social Impact and Opportunities for Research in Business Administration. RAUSP Manag. J. 2018, 53, 134–137. [Google Scholar] [CrossRef]
  82. Calvo-Mora, A.; Domínguez-CC, M.; Criado, F. Assessment and Improvement of Organisational Social Impact through the EFQM Excellence Model. Total Qual. Manag. Bus. Excell. 2018, 29, 1259–1278. [Google Scholar] [CrossRef]
  83. Solórzano-García, M.; Navío-Marco, J.; Ruiz-Gómez, L.M. Ambiguity in the Attribution of Social Impact: A Study of the Difficulties of Calculating Filter Coefficients in the SROI Method. Sustainability 2019, 11, 386. [Google Scholar] [CrossRef]
  84. O’Byrne, D.; Mechiche-Alami, A.; Tengberg, A.; Olsson, L. The Social Impacts of Sustainable Land Management in Great Green Wall Countries: An Evaluative Framework Based on the Capability Approach. Land 2022, 11, 352. [Google Scholar] [CrossRef]
  85. Barnett, M.L.; Henriques, I.; Husted, B.W. Beyond Good Intentions: Designing CSR Initiatives for Greater Social Impact. J. Manag. 2020, 46, 937–964. [Google Scholar] [CrossRef]
  86. Harrington, R.; Good, T.; O’Neil, K.; Grant, S.; Maass, S.; Vettern, R.; McGlaughlin, P. Value of Assessing Personal, Organizational, and Community Impacts of Extension Volunteer Programs. J. Ext. 2021, 59, 6. [Google Scholar] [CrossRef]
  87. Mackenzie, S.G.; Davies, A.R. SHARE IT: Co-Designing a Sustainability Impact Assessment Framework for Urban Food Sharing Initiatives. Environ. Impact Assess. Rev. 2019, 79, 106300. [Google Scholar] [CrossRef]
  88. Pawluczuk, A.; Webster, G.; Smith, C.; Hall, H. The Social Impact of Digital Youth Work: What Are We Looking For? Media Commun. 2019, 7, 59–68. [Google Scholar] [CrossRef]
  89. Polonsky, M.J.; Landreth Grau, S.; McDonald, S. Perspectives on Social Impact Measurement and Non-Profit Organisations. Mark. Intell. Plan. 2016, 34, 80–98. [Google Scholar] [CrossRef]
  90. Caló, F.; Teasdale, S.; Donaldson, C.; Roy, M.J.; Baglioni, S. Collaborator or Competitor: Assessing the Evidence Supporting the Role of Social Enterprise in Health and Social Care. Public Manag. Rev. 2018, 20, 1790–1814. [Google Scholar] [CrossRef]
  91. Clarke, A.; Crane, A. Cross-Sector Partnerships for Systemic Change: Systematized Literature Review and Agenda for Further Research. J. Bus. Ethics 2018, 150, 303–313. [Google Scholar] [CrossRef]
  92. Roy, M.J.; Donaldson, C.; Baker, R.; Kerr, S. The Potential of Social Enterprise to Enhance Health and Well-Being: A Model and Systematic Review. Soc. Sci. Med. 2014, 123, 182–193. [Google Scholar] [CrossRef] [PubMed]
  93. Nicholls, A. A General Theory of Social Impact Accounting: Materiality, Uncertainty and Empowerment. J. Soc. Entrep. 2018, 9, 132–153. [Google Scholar] [CrossRef]
  94. Cho, C.H.; Bohr, K.; Choi, T.J.; Partridge, K.; Shah, J.M.; Swierszcz, A. Advancing Sustainability Reporting in Canada: 2019 Report on Progress. Account. Perspect. 2020, 19, 181–204. [Google Scholar] [CrossRef]
  95. Marlowe, J.; Clarke, A. Carbon Accounting: A Systematic Literature Review and Directions for Future Research. Green Financ. 2022, 4, 71–87. [Google Scholar] [CrossRef]
  96. Bebbington, J.; Unerman, J. Achieving the United Nations Sustainable Development Goals: An Enabling Role for Accounting Research. Account. Audit. Account. 2018, 31, 2–24. [Google Scholar] [CrossRef]
  97. Caló, F.; Roy, M.J.; Donaldson, C.; Teasdale, S.; Baglioni, S. Evidencing the Contribution of Social Enterprise to Health and Social Care: Approaches and Considerations. Soc. Enterp. J. 2021, 17, 140–155. [Google Scholar] [CrossRef]
Figure 1. Article selection process using the PRISMA model adapted from Moher et al. [43].
Figure 2. Number of articles by year.
Figure 3. Number of publications by study area.
Table 1. Measurement model descriptions.
Social return on investment (SROI) [75]. Data collection method: interviews, questionnaires. Measurement tools: outcome indicators developed by the assessor and informed by stakeholders. Sector: multiple sectors. Application scale: project, program, organization. Grieco et al. [24] cluster: Simple Social Quantitative. Corvo et al. [28] cluster: Monetization. Mentioned in 24 articles.
Impact Reporting and Investment Standards (IRIS+) [76]. Data collection method: not specified. Measurement tools: a catalog of output and outcome metrics is provided by IRIS+. Sector: impact investing focus, made available to multiple sectors. Application scale: organization. Grieco et al. [24] cluster: Holistic Complex. Corvo et al. [28] cluster: Performance/management and sustainability. Mentioned in 7 articles.
Global Reporting Initiative (GRI) [77]. Data collection method: not specified. Measurement tools: disclosures are provided by GRI; output indicators are included within some of the disclosures. Sector: multiple sectors. Application scale: organization. Grieco et al. [24] cluster: Holistic Complex. Corvo et al. [28] cluster: Performance/management and sustainability. Mentioned in 6 articles.
Social accounting and audit (SAA) [18]. Data collection method: interviews, questionnaires, focus groups. Measurement tools: output and outcome indicators developed by the organization and informed by stakeholders. Sector: multiple sectors. Application scale: project, program, organization. Grieco et al. [24] cluster: Holistic Complex. Corvo et al. [28] cluster: Auditing/quality. Mentioned in 5 articles.
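Because SROI is the most frequently cited model in Table 1 and sits in the monetization cluster, a simplified sketch of its core calculation may help readers unfamiliar with the approach. The expression below follows the general structure described in the SROI guidance [75,80]; the notation (outcome values, adjustment factor, discount rate) is illustrative rather than a prescribed standard, and in practice the assessor selects and justifies each input together with stakeholders.
% Simplified SROI ratio (illustrative notation, assuming the general structure of the SROI guidance [75,80])
% V_{o,t} = monetized value of outcome o in year t
% a_{o}   = combined adjustment for deadweight, attribution, displacement, and drop-off
% r       = discount rate; T = evaluation period; I = total value of inputs (investment)
\[
\text{SROI ratio} = \frac{\sum_{t=1}^{T} \sum_{o} \frac{V_{o,t}\,(1 - a_{o})}{(1 + r)^{t}}}{I}
\]
Under these assumptions, a ratio of, say, 3:1 would indicate that roughly three dollars of social value are estimated for every dollar invested, subject to the judgment embedded in the adjustment factors.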
Table 2. Measurement drivers and benefits.
Drivers and how SIM supports the practitioner:
Internal: Improve organizational performance/continuous improvement
  • Achieving the Sustainable Development Goals (SDGs) [4] and other social impact objectives [83].
  • Improving understanding of how to allocate resources [24].
  • Developing internal learning tools [8] that can inform future projects [84] and support decision making [6].
  • Identifying upscaling opportunities and improving the cost-effectiveness of inputs and activities [3].
  • Contributing to an organization’s strategy [67].
  • Increasing staff morale [1].
External: Demand from funders and investors
  • Supporting policymakers seeking to invest in social impact [49].
  • Promoting evidence-based reporting to demonstrate the value created for communities [8].
  • Assessing sustainability performance to invest purposefully [9].
  • Generating social impact data for grant applications and providing evidence of impact [1], including to meet compliance demands [46].
  • Having a competitive advantage when seeking contracts [1] and funding [20].
  • Demonstrating the impact of corporate social responsibility (CSR) initiatives funded by an organization [85].
External: Pressure from other stakeholders
  • Supporting decision making [4] with relevant data [86].
  • Challenging traditional economic thinking and political decision making [9].
  • Legitimizing activities by gaining insights into organizational performance [3].
  • Demonstrating the societal impact of activities [7].
  • Demonstrating the results of strategic efforts [1].
  • Increasing transparency, accountability, and legitimacy [8,21].
  • Enhancing promotional efforts [1].
  • Complying with legislation [67].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
