Article

Identifying Challenges to Building an Evidence Base for Restoration Practice

by Phumza Ntshotsho 1,*, Karen J. Esler 2,3,† and Belinda Reyers 1,2,†
1 Natural Resources and the Environment, CSIR, P.O. Box 320, Stellenbosch 7599, South Africa
2 Department of Conservation Ecology & Entomology, Stellenbosch University, Private Bag x1, Matieland 7602, South Africa
3 Centre for Invasion Biology, Stellenbosch University, Private Bag x1, Matieland 7602, South Africa
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sustainability 2015, 7(12), 15871-15881; https://doi.org/10.3390/su71215788
Submission received: 22 September 2015 / Revised: 18 November 2015 / Accepted: 19 November 2015 / Published: 30 November 2015
(This article belongs to the Section Environmental Sustainability and Applications)

Abstract

Global acknowledgement of ecological restoration as an important tool to complement conservation efforts calls for an effort to increase the effectiveness of restoration interventions. Evidence-based practice is purported to promote effectiveness. A central tenet of this approach is decision making based on evidence rather than intuition. Evidence can be generated experimentally and in practice, but it needs to be linked to baseline information collection, clear goals and monitoring of impact. In this paper, we report on a survey conducted to assess practitioners’ perceptions of the evidence generated in restoration practice in South Africa, as well as the challenges encountered in building this evidence base. Contrary to a recent assessment of this evidence base, which found weaknesses, respondents viewed it as adequate and cited few obstacles to its development. The obstacles cited were mostly associated with planning and resource availability. We suggest that the disparity between practitioners’ perceptions and the observed weaknesses in the evidence base could itself be a challenge to advancing evidence-based restoration. We explore opportunities to overcome this disparity as well as the obstacles listed by practitioners. These opportunities involve a shift from practitioners as users of scientific knowledge and evidence to practitioners involved in the co-production of the evidence needed to increase the effectiveness of restoration interventions.

1. Introduction

Widespread human-induced ecosystem degradation and associated biodiversity loss pose a direct threat to biodiversity conservation and human wellbeing. Ecological restoration, defined as the process of assisting the recovery of an ecosystem that has been degraded, damaged or destroyed [1], is now recognized as a priority in global efforts to mitigate this threat [2]. Restoration offers an effective, and perhaps inevitable, supplement where conservation alone, as traditionally practiced, is not sufficient to support ecosystem integrity [3]. Decision makers in conservation and restoration are often faced with the challenge of choosing the right intervention to implement from among a suite of options, the right intervention being one in whose effectiveness the decision maker has a high degree of certainty. Effective conservation and restoration require knowledge of the effectiveness and impacts of specific actions, often based on evidence of how successful a given action has previously been in achieving its objectives [4]. Widespread recognition of this need has resulted in a shift away from experience-based approaches [5,6], with evidence-based practice emerging as a popular approach in the fields of conservation and restoration [7,8,9]. Evidence-based practice is the process of systematically finding, appraising, and using evidence of the effectiveness of interventions to inform decision making [10]. This approach enables a decision maker to make an informed choice from an array of interventions, based on information about their impacts under particular circumstances.
Adopting an evidence-based approach makes sense because it facilitates learning from both failures and successes [11]. In addition, this approach seeks to address the question “Do we have the data to show we are making a difference rather than simply assuming we are doing some good?” [9]. Answering this question provides credibility, which in turn is essential to ensure public and financial support for conservation activities. As an activity with high implementation costs [12], ecological restoration needs such support. Working with limited budgets, decision makers need compelling evidence to allocate and sustain adequate funding for restoration activities. This is particularly relevant in restoration, where a wide range of interventions exists. For example, a manager of an invasive alien plant management program could choose to use mechanical or chemical means, or even a combination of treatments [13]. The best decision would be one that takes into account all existing evidence of the effectiveness of alternative strategies. Such evidence could be generated experimentally or could come from monitoring practical applications of the strategies in question [2,14].
Generation of evidence in practice requires that managers and implementers of restoration projects set goals for the desired state, collect baseline information on the current state and monitor progress towards achievement of the desired state. A previous assessment of the evidence base of South African restoration projects and programs [15] highlighted gaps: poor goal definition and poor baseline information collection. Additionally, a bias towards the monitoring of restoration actions rather than impacts was found, together with inconsistent monitoring of ecological indicators. These challenges are common in conservation and restoration [4,6,16,17] and have been attributed to impediments such as imperfect knowledge, resource constraints and short project time spans [18,19,20,21,22,23]. Most of these obstacles have been reported by scientists working in the field of monitoring and evaluation rather than by practitioners actually implementing the projects. Scientists and practitioners often operate in different environments, with different goals, perspectives and motivations [24,25,26]. As such, obstacles cited by scientists may not necessarily reflect those encountered by practitioners [27]. Because practitioners play such a critical role in the development and improvement of the field of restoration, it is essential that their experiences are also documented.
We propose that an improved understanding of the obstacles encountered by practitioners and managers in the actual generation of evidence will help to advance evidence-based restoration. This study thus aims to explore practitioners’ and managers’ perceptions of the evidence base and the obstacles encountered in the actual implementation of restoration and collection of evidence. An understanding of these challenges is likely to help with the formulation of strategies and methods to overcome them. In this article, we report on the results of a survey of practitioners and managers involved in terrestrial restoration projects in South Africa. Projects included both assisted regeneration and active restoration, and represented a mix of ecosystem types and restoration approaches [15].

2. Materials and Methods

2.1. Questionnaire Design and Administration

We used an online questionnaire (administered by SurveyMonkey online survey software at http://www.surveymonkey.com) to gather information for an earlier study [15] and for this one (Supplementary material). The three sections of the questionnaire relevant to this study focused on role player perceptions and obstacles around baseline information collection, goal setting and monitoring (i.e., Section 3, Section 4 and Section 5 of the questionnaire). We selected baseline measures, monitored indicators and goals, and tested them in a pilot study to ensure that our lists were comprehensive and did not leave out any elements that were applicable in practice. We subsequently classified them as either socio-economic (e.g., jobs created, poverty alleviation) or biophysical (e.g., species richness, water flows). We then gave respondents lists of these baseline measures, monitored indicators and goals (Table 1) and asked them to:
  • rate the adequacy of collection of baseline information in their projects (we use the dictionary definition [28] of adequate: fit for the respondent’s particular purposes or needs);
  • identify obstacles to adequate baseline information collection;
  • comment on what could be done to increase the adequacy of baseline information collection;
  • identify goals applicable to their projects as well as obstacles to the documentation and quantification of goals;
  • comment on the setting of goals of restoration;
  • identify obstacles to monitoring and comment on what is needed to increase the incidence of long-term monitoring.
Furthermore, respondents were asked to name their primary role in their chosen projects.
The questionnaire was piloted among 11 respondents, and completed by seven. We sent the final questionnaire by e-mail to a total of 85 potential respondents involved in restoration activities in South Africa. Follow-up reminders were sent at three week intervals for two months.
Table 1. Lists of baseline measures, goals and monitored indicators used in the survey.
Baseline Measure | Goal | Monitored Indicator
Environmental awareness levels | Job creation | Person hours worked
Unemployment rate | Poverty alleviation | Training provided
People living in poverty | Livelihood improvement | Awareness campaigns held
Household income | Development of a market for PES | Number of jobs created
Literacy | Capacity building | Livelihood impacts
Plant species composition | Environmental awareness creation | Environmental awareness levels
Density/cover of indigenous species | Alien plant control | Area cleared of invasives
Water quality | Water resource improvement | Fences erected
Water quantity | Biodiversity conservation | Solid structures built
Aquatic diversity | Soil conservation | Area revegetated
Above-ground carbon stocks | Carbon sequestration | Water quantity
Soil chemical quality | Ecosystem productivity improvement | Water quality
Geomorphology | Restoring natural capital | Species richness
Erosion/bare patches | Increasing resilience | Soil erodability
Levels of degradation | Other (please specify) | Carbon sequestered
Other (please specify) | | Plant survival/growth
| | Biomass accumulation
| | Other (please specify)

2.2. Sample Selection

We primarily used purposive sampling, a non-probability sampling technique whereby units are selected based on their having similar characteristics that are of particular interest to the researcher/s [29]. We thus chose the sample on the basis of our knowledge of the restoration community in South Africa. In using this method, we approached individuals involved in the projects that were evaluated in the earlier study [15]. In addition, we used snowball sampling [29] to make the sample more representative: we asked the initial respondents to suggest other potential respondents, either within or beyond their projects.

2.3. Data Analysis

Data were coded and analyzed. Because our aim was to explore the responses of practitioners and managers, we excluded responses from people who classified themselves as researchers from the analyses. In terms of respondents’ perceptions, we assumed a normal population distribution and hypothesized that (i) half the respondents would rate the collection of baseline information as adequate while the other half would not, and (ii) half the respondents would cite obstacles to baseline information collection, goal setting and monitoring while the other half would not (i.e., H0: p = 0.5). A Z test was thus used to determine whether observed response rates differed significantly from those expected by chance alone.
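As a minimal sketch of this kind of test (written in Python; the counts shown are hypothetical illustrations of ours, not the study data), a one-sample Z test of an observed proportion against H0: p = 0.5 can be computed as follows:

from math import sqrt
from scipy.stats import norm

def proportion_z_test(successes: int, n: int, p0: float = 0.5):
    """Two-sided one-sample Z test of an observed proportion against p0."""
    p_hat = successes / n                 # observed proportion
    se = sqrt(p0 * (1 - p0) / n)          # standard error under the null hypothesis
    z = (p_hat - p0) / se                 # Z statistic
    p_value = 2 * norm.sf(abs(z))         # two-sided p-value
    return z, p_value

# Hypothetical example: 14 of 33 respondents rate a baseline measure as adequate.
z, p = proportion_z_test(14, 33)
print(f"Z = {z:.2f}, p = {p:.3f}")        # prints Z = -0.87, p = 0.384 for this example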

3. Results

3.1. Response Rates and Roles of Respondents in Restoration

Forty-four out of 85 people completed the survey (a response rate suitable for analysis and reporting [29]). Almost half of these respondents were coordinators and/or facilitators, a quarter were researchers (whom we subsequently excluded from the analysis), 16% were practitioners, and 11% classified themselves as having other roles. Henceforth, the term “respondents” refers to the 33 non-researcher respondents.

3.2. Baseline Information Collection, Goal Setting and Monitoring

For nine of the 17 baseline measures, significantly more than half of the respondents who had identified them as applicable to their projects rated their use as adequate to very adequate (Figure 1). Only the baseline measure “belowground carbon stocks” was rated as adequate to very adequate by significantly less than 50% of the respondents in whose projects it was applicable.
Figure 1. Percentage of respondents who indicated that in their respective projects baseline measures were used (solid black bars) and that information collected on them was adequate to very adequate (grey bars). Stars denote that the proportion of respondents who rated the collection of information for a particular baseline measure as adequate to very adequate was significantly different from an expected 0.5. The bracket denotes socio-economic measures.
Of the 14 goals, only four were said to be stated in writing by 70% or more of the respondents (Figure 2). Even lower percentages (less than 62% in all cases) of respondents indicated that the goals applicable to their projects were quantified (Figure 2). For example, 82% of the respondents said job creation was an applicable goal in their projects, but only 70% and 61%, respectively, said this goal was stated in writing and quantified.
When assessing indicators used in monitoring of the restoration programs, we also found a set of 17 indicators (while there were similarities in the names of indicators used for baseline information collection and monitoring, we emphasize that these were two independent sets, which happened to each have 17 items). We distinguished between indicators of implementation (marked with a solid-line bracket in Figure 3) and indicators of outcome/impact [30]. Consistent monitoring (defined here as monitoring at all project sites) was relatively more common than long-term monitoring (monitoring beyond the implementation phase), especially in the case of socio-economic indicators and indicators of implementation (Figure 3).
Figure 2. Percentage of respondents who identified different restoration goals as applicable to their projects (solid black bars); stated in writing (grey bars) and quantified (clear bars). The bracket denotes socio-economic goals.
Figure 3. Percentage of respondents who indicated that the 17 indicators shown were monitored (black bars) consistently (grey bars) and in the long term (clear bars). Indicators underneath the solid line bracket are based on implementation (and not outcomes and impact) and those underneath the dashed bracket are of a socio-economic nature.
The three most consistently monitored indicators (cited by over 50% of the respondents) were those associated with employment creation and capacity building. Long-term monitoring was limited, with 30% or less of the respondents indicating that any of the indicators were monitored beyond the implementation phase. For all the socio-economic indicators, less than 10% of the respondents indicated that they were used in long-term monitoring. Overall, the percentage of respondents identifying any indicator as applicable to their project was always higher than the percentage who indicated that the indicator was monitored consistently and in the long term. The degree of attrition, however, was higher for long-term monitoring than for consistent monitoring for all but three indicators (viz. plant survival, water quality and water quantity).

3.3. Perceived Obstacles

The proportion of respondents (0.42) who did not identify any obstacles to the three components of evidence generation was not significantly different from 0.50, i.e., from what would be expected by chance alone (Z = –1.02, p ≤ 0.05). However, when we disaggregated the three components, it became evident that significantly more than half of the respondents (84%) did not perceive any obstacles to the collection of baseline information or to monitoring (Z = 6.45, p ≤ 0.05 in both cases), while about 52% did not perceive any obstacles to goal setting (Z = 0.20, p ≤ 0.05). This perceived lack of obstacles to the collection of baseline information and monitoring was also reflected in the small numbers of obstacles identified in association with these variables, four and three, respectively (Table 2).
Table 2. Types of obstacles associated with baseline measures, goal setting and monitoring, as identified by respondents.
Variable | Obstacle | % Identifying Obstacle
Baseline measures | Not part of ToR | 9
| Lack of funds | 6
| Lack of time | 6
| Lack of expertise | 3
Goals | Not all goals can be quantified | 36
| Stakeholders are vague about what they want | 15
| Resource constraints | 12
| Not necessary to specify and quantify goals | 12
| Goal not part of Terms of Reference | 6
| Goals change all the time | 3
Monitoring | Lack of funds | 12
| Capacity constraints | 12
| Lack of knowledge | 9
The major obstacle associated with baseline measures was their absence from the terms of reference. Though few respondents perceived obstacles to monitoring, when asked what could be done to increase the incidence of long-term monitoring, 21% stated that provisions for monitoring should be entrenched in the decision-making process. Another 24% mentioned that adequate provision of funds would assist, which is double the percentage of respondents who cited lack of funds as an obstacle. In reviewing obstacles associated with goal setting, six obstacles were identified, including lack of resources and omission from the terms of reference. Almost 40% of the respondents asserted that “not all goals can be quantified”, and this was the single most cited obstacle to goal setting. Despite this, when asked to agree or disagree with the statement that, for ease of measurement, it is preferable to set quantitative goals instead of qualitative ones, 88% of the respondents agreed.

4. Discussion

4.1. Strength and Perception of the Evidence Base

This study shows that practitioners involved in several restoration programs in South Africa perceive the evidence base as satisfactory. However, an earlier study of these same restoration programs found weaknesses across goal setting, baseline indicators and monitoring [15] similar to those highlighted by other studies in the fields of restoration and conservation [18,19,20,22,23,30]. These parallels are to be expected considering that the generally poor development of ecological indicators [31] is likely to hamper monitoring of both baseline condition as well as post-intervention status, be it in a conservation or restoration initiative. Similarly, the problem of unquantified goals has been a challenge for years [32,33,34]. These shortcomings, however, were not reflected in the responses of the practitioners we surveyed, who perceived the evidence base to be largely adequate. Admittedly, the word adequate is subjective, but this subjectivity sheds some light on a possible misalignment between scientists’ and practitioners’ views of the need for a strong evidence base. It could be that practitioners perceive the evidence as adequate because they do not actually use it in decision-making [5,35,36]. Indeed, the evidence base could be adequate for practitioner requirements, which are unknown to us, while it does not fulfill the requirements for use in evidence-based practice. A question thus arises as to the motivations for and perceptions of the need for evidence collection, which represents an opportunity for future research. Such research could investigate whether practitioners are willing to use an evidence-based approach.
The incongruence between our perceptions of the evidence base and those of the respondents could also be superficial, being a reflection of social desirability bias arising from how the questions were phrased [37]. Because we surveyed people intimately involved in these restoration projects, the likelihood of them perceiving their own indicators as inadequate is rather low. Similarly, Bernhardt et al. [33] and Rumps et al. [38] observed that the majority of project contacts in stream and river restoration projects in the USA perceived their projects as having been very successful, despite less than half of the projects in either study having measurable success criteria. It would thus seem that being invested in a project (as respondents in all these studies, including ours, are) is a potential source of bias when the performance of said project is in question. Although the case for the adoption of an evidence-based approach to restoration has been made in the scientific literature [11,16,22], if practitioners perceive the current status of the evidence base as acceptable, then there is little motivation for improvement, and this could be a challenge in mainstreaming evidence-based practice.

4.2. Obstacles to Building the Evidence Base

While few respondents identified obstacles, the types of obstacles identified are useful for exploring options for evidence-based restoration. Resource constraints, which included funds, time and human resources, were the dominant category of obstacles to baseline information collection and monitoring. Although only 12% of respondents cited a lack of funds as an obstacle to monitoring, cost is a central issue in long-term monitoring [18,33]. Short-term grants, which translate into short project time-lines, may therefore be partially responsible for the low incidence of long-term and consistent monitoring of impact. Because some outcomes and impacts of restoration may only become apparent years after implementation, it may be considered worthless to monitor them in projects with short life-spans. It has been suggested that funding commitments should be sufficiently long-term (a minimum of ten years) in order to allow detection of ecological change [21]. We also suggest that voluntary participatory monitoring, which has been shown to yield reliable data [39,40], be adopted to perpetuate impact monitoring beyond the active phase of projects. Implementing these two suggestions would address all three resource constraints (i.e., funds, time and workforce), which are not unique to restoration in South Africa but rather common in many forms of conservation globally [4,6,16,17].
Other obstacles identified related to the planning and management of restoration projects. For example, the terms of reference (ToR) is a document meant to define the objectives, scope, allocated resources, roles and responsibilities pertaining to a specific project [41]. Because the ToR acts as a guiding document for project implementation, the omission of any important restoration aspect from it (e.g., some baseline information requirements, goals, etc.) should be guarded against. This highlights the importance of collaborative planning of restoration projects, where funders and managers/practitioners work as partners in developing the ToR in order to agree on what is feasible and what resources are needed to improve effectiveness.
Clear articulation and documentation of goals and objectives is essential for success and for the ability to assess progress towards their achievement [3,22,42]. One way of ensuring that goals are clear, realistic and achievable is to state them quantitatively. However, while quantification of goals is sometimes necessary for ease of evaluation, we are also aware that not everything that counts can be counted (quantified) and not everything that can be counted counts, a sentiment that was reiterated by many respondents. As such, we are not advocating the discarding of qualitative goals. Rather, we recommend a hierarchical approach to goal setting [32], with overarching goals stated and supported by several measurable targets, as sketched below. While broad goals are good for providing a view of the envisioned outcome of restoration, it is the quantifiable objectives that are crucial in monitoring progress towards success. The importance of measurable targets has long been emphasized [42,43].
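As an illustrative sketch of such a hierarchy (in Python; the goal statement, target names, values and units are hypothetical examples of ours, not drawn from the surveyed projects), an overarching qualitative goal can be paired with a set of measurable targets:

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Target:
    """A measurable target supporting an overarching goal."""
    indicator: str
    desired_value: float
    unit: str
    target_year: int

@dataclass
class RestorationGoal:
    """An overarching, qualitatively stated restoration goal."""
    statement: str
    targets: list[Target] = field(default_factory=list)

# Hypothetical example: a broad goal supported by two quantified targets.
goal = RestorationGoal(
    statement="Restore riparian function in the cleared catchment",
    targets=[
        Target("area cleared of invasive alien plants", 500.0, "ha", 2020),
        Target("indigenous vegetation cover", 60.0, "%", 2025),
    ],
)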

4.3. Overcoming Obstacles

Aronson and Alexander [2] have called on restoration practitioners and scientists to take up the challenge of engaging in effective restoration work. We concur with them, and further add that it is not enough for practitioners to continue looking to science for guidance on how to do this. To this end, an appreciation of the role of practitioners in knowledge co-production is needed. Because practitioners are generally not regarded as knowledge producers, a radical mind shift is needed, where practitioners themselves come to recognize their “new” role and step up to the challenge. As part of this knowledge-production drive, practitioners would need to apply scientific rigor similar to that which goes into typical experimental and research projects. This could be facilitated through strong partnerships between the research and practice communities to expedite the exchange of skills [33,44,45]. Such an undertaking, where non-scientists become active participants in scientific data gathering, would not be anything new. For example, citizen science (also known as public participation in science) has been used for decades to help researchers collect data on longer timescales and larger geographic scales than scientists and their students and field technicians could cover [46]. Admittedly, data reliability and credibility may be an issue in this approach to knowledge generation. This, however, can be addressed through the development of data collection protocols and the vetting of collected data by experts.

5. Conclusions

Evidence-based practice is founded on the principle of using evidence of the effectiveness of alternative strategies when making implementation decisions. In the face of the persistent research-implementation gap in conservation and restoration [24,47,48], it is important to realize the potential of restoration practice itself to generate useful evidence. This is similar to what active adaptive management and related approaches call for [49,50,51]. In addition to the active pursuit of iterative learning within a project, which is an inherent element of active adaptive management, we want to emphasize the need to disseminate evidence to the wider restoration community. This is an important pillar of evidence-based practice. Support by practitioners of the idea of generating and distributing robust evidence would change the way we currently view evidence-based practice (i.e., as a one-way transfer of information, from research to practice, to promote the effectiveness of interventions). The resource constraints and management challenges identified by respondents in this study are not insurmountable. A greater challenge is the need for a fundamental shift in mindsets, where practitioners identify themselves, and are acknowledged, as co-producers of knowledge, rather than just recipients and users. We thus join other scholars [9,52,53] in calling for this shift.

Supplementary Files

Supplementary File 1

Acknowledgments

We sincerely thank the respondents who participated in the survey. We are also grateful to anonymous reviewers who commented on an earlier draft of the manuscript. The financial contribution of the National Research Foundation (NRF) and the Council for Scientific and Industrial Research (CSIR) to this research is acknowledged. This study was carried out with the approval of the Stellenbosch University Ethics Committee (Reference number 303/2010).

Author Contributions

Phumza Ntshotsho, Karen J. Esler, and Belinda Reyers conceptualized and designed the study. Phumza Ntshotsho administered the questionnaire, analyzed the data and wrote the manuscript, under the supervision of Karen J. Esler and Belinda Reyers.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Society for Ecological Restoration (SER) International Science and Policy Working Group. The SER International Primer on Ecological Restoration; Society for Ecological Restoration International: Washington, DC, USA, 2004; Available online: http://www.ser.org/resources/resources-detail-view/ser-international-primer-on-ecological-restoration (accessed on 5 August 2012).
  2. Aronson, J.; Alexander, S. Ecosystem restoration is now a global priority: Time to roll up our sleeves. Restor. Ecol. 2013, 21, 293–296. [Google Scholar] [CrossRef]
  3. Hobbs, R.J.; Harris, J.A. Restoration ecology: Repairing the earth’s ecosystems in the new millennium. Restor. Ecol. 2001, 9, 239–246. [Google Scholar] [CrossRef]
  4. Kapos, V.; Balmford, A.; Aveling, R.; Bubb, P.; Carey, P.; Entwistle, A.; Hopkins, J.; Mulliken, T.; Safford, R.; Stattersfield, A.; et al. Outcomes, not implementation predict conservation success. Oryx 2009, 43, 336–342. [Google Scholar] [CrossRef]
  5. Cook, C.N.; Hockings, M.; Carter, R.W. Conservation in the dark? The information used to support management decisions. Front. Ecol. Environ. 2009, 8, 181–186. [Google Scholar] [CrossRef] [Green Version]
  6. Ferraro, P.J.; Pattanayak, S.K. Money for nothing? A call for empirical evaluation of biodiversity conservation investments. PLoS Biol. 2006, 4, 482–488. [Google Scholar] [CrossRef] [PubMed]
  7. Pullin, A.S.; Stewart, G.B. Guidelines for systematic review in conservation and environmental management. Conserv. Biol. 2006, 20, 1647–1656. [Google Scholar] [CrossRef] [PubMed]
  8. Roberts, P.D.; Stewart, G.B.; Pullin, A.S. Are review articles a reliable source of evidence to support conservation and environmental management? A comparison with medicine. Biol. Conserv. 2006, 132, 409–423. [Google Scholar] [CrossRef]
  9. Pullin, A.S.; Knight, T.M. Doing more good than harm—Building an evidence-base for conservation and environmental management. Biol. Conserv. 2009, 142, 931–934. [Google Scholar] [CrossRef]
  10. Sutherland, W.J.; Pullin, A.S.; Dolman, P.M.; Knight, T.M. The need for evidence-based conservation. Trends Ecol. Evol. 2004, 19, 305–308. [Google Scholar] [CrossRef] [PubMed]
  11. Hobbs, R. Looking for the silver lining: Making the most of failure. Restor. Ecol. 2009, 17, 1–3. [Google Scholar]
  12. Holl, K.D.; Howarth, R.B. Paying for restoration. Restor. Ecol. 2000, 8, 260–267. [Google Scholar] [CrossRef]
  13. Holmes, P.; Esler, K.J.; Richardson, D.; Witkowski, E. Guidelines for improved management of riparian zones invaded by alien plants in South Africa. S. Afr. J. Bot. 2008, 74, 538–552. [Google Scholar] [CrossRef]
  14. Fule, P.Z.; Covington, W.W.; Smith, H.B.; Springer, J.D.; Heinlein, T.A.; Huisinga, K.D.; Moore, M.M. Comparing ecological restoration alternatives: Grand Canyon, Arizona. For. Ecol. Manag. 2002, 170, 19–41. [Google Scholar] [CrossRef]
  15. Ntshotsho, P.; Reyers, B.; Esler, K.J. Assessing the evidence base for restoration in South Africa. Restor. Ecol. 2011, 19, 578–586. [Google Scholar] [CrossRef]
  16. Bash, J.S.; Ryan, C.M. Stream restoration and enhancement projects: Is anyone monitoring? Environ. Manag. 2002, 29, 877–885. [Google Scholar] [CrossRef] [PubMed]
  17. Christian-Smith, J.; Merenlender, A.M. The disconnect between restoration goals and practices: A case study of watershed restoration in the Russian River Basin, California. Restor. Ecol. 2010, 18, 95–102. [Google Scholar] [CrossRef]
  18. Caughlan, L.; Oakley, K.L. Cost considerations for long-term ecological monitoring. Ecol. Indic. 2001, 1, 123–134. [Google Scholar] [CrossRef]
  19. Havstad, K.M.; Herrick, J.E. Long-term ecological monitoring. Arid Land Res. Manag. 2003, 17, 389–400. [Google Scholar] [CrossRef]
  20. Legg, C.J.; Nagy, L. Why most conservation monitoring is, but need not be, a waste of time. J. Environ. Manag. 2006, 78, 194–199. [Google Scholar] [CrossRef] [PubMed]
  21. Field, S.A.; O’Connor, P.J.; Tyre, A.J.; Possingham, H.P. Making monitoring meaningful. Austral Ecol. 2007, 32, 485–491. [Google Scholar] [CrossRef]
  22. Hobbs, R.J. Setting effective and realistic restoration goals: Key directions for research. Restor. Ecol. 2007, 15, 354–357. [Google Scholar] [CrossRef]
  23. Morton, S.R.; Hoegh-Guldberg, O.; Lindenmayer, D.B.; Olson, M.H.; Hughes, L.; McCulloch, M.T.; McIntyre, S.; Nix, H.A.; Prober, S.M.; Saunders, D.A.; et al. The big ecological questions inhibiting effective environmental management in Australia. Austral Ecol. 2009, 34, 1–9. [Google Scholar] [CrossRef]
  24. Roux, D.J.; Rogers, K.H.; Biggs, H.C.; Ashton, P.J.; Sergeant, A. Bridging the Science-Management Divide: Moving from Unidirectional Knowledge Transfer to Knowledge Interfacing and Sharing. 2006. Available online: http://www.ecologyandsociety.org/vol11/iss1/art4/ (accessed on 11 June 2010).
  25. Gibbons, P.; Zammit, C.; Youngentob, K.; Possingham, H.P.; Lindenmayer, D.B.; Bekessy, S.; Burgman, M.; Colyvan, M.; Considine, M.; Felton, A.; et al. Some practical suggestions for improving engagement between researchers and policy-makers in natural resource management. Ecol. Manag. Restor. 2008, 9, 182–186. [Google Scholar] [CrossRef]
  26. Biggs, D.; Abel, N.; Knight, A.T.; Leitch, A.; Langston, A.; Ban, N.C. The implementation crisis in conservation planning: Could “mental models” help? Conserv. Lett. 2011, 4, 169–183. [Google Scholar] [CrossRef]
  27. Urgenson, L.S.; Prozesky, H.E.; Esler, K.J. Stakeholder perceptions of an ecosystem services approach to clearing invasive alien plants on private land. Ecol. Soc. 2013, 18, Article 26. [Google Scholar] [CrossRef]
  28. Oxford Advanced Learner’s Dictionary. 2011. Available online: http://oald8.oxfordlearnersdictionaries.com/dictionary/adequate (accessed on 25 June 2013).
  29. Babbie, E.; Mouton, J. The Practice of Social Research; Oxford University Press: Cape Town, South Africa, 2001; pp. 163–206. [Google Scholar]
  30. Kapos, V.; Balmford, A.; Aveling, R.; Bubb, P.; Carey, P.; Entwistle, A.; Hopkins, J.; Mulliken, T.; Safford, R.; Stattersfield, A.; et al. Calibrating conservation: New tools for measuring success. Conserv. Lett. 2008, 1, 155–164. [Google Scholar] [CrossRef]
  31. Walpole, M.; Almond, R.E.A.; Besancon, C.; Butchart, S.H.M.; Campbell-Lendrum, D.; Carr, G.M.; Collen, B.; Collette, L.; Davidson, N.C.; Dulloo, E.; et al. Tracking progress toward the 2010 biodiversity target and beyond. Science 2009, 325, 1503–1504. [Google Scholar] [CrossRef] [PubMed]
  32. Tear, T.H.; Kareiva, P.; Angermeier, P.L.; Comer, P.; Czech, B.; Kautz, R.; Landon, L.; Mehlman, D.; Murphy, K.; Ruckelshaus, M.; et al. How much is enough? The recurrent problem of setting measurable objectives in conservation. BioScience 2005, 55, 835–849. [Google Scholar]
  33. Bernhardt, E.S.; Sudduth, E.B.; Palmer, M.A.; Allan, J.D.; Meyer, J.L.; Alexander, G.; Follastad-Shah, J.; Hassett, B.; Jenkinson, R.; Lave, R.; et al. Restoring rivers one reach at a time: Results from a survey of U.S. river restoration practitioners. Restor. Ecol. 2007, 15, 482–493. [Google Scholar] [CrossRef]
  34. Hassett, B.; Palmer, M.A.; Bernhardt, E.S. Evaluating stream restoration in the Chesapeake Bay watershed through practitioner interviews. Restor. Ecol. 2007, 15, 463–472. [Google Scholar] [CrossRef]
  35. Pullin, A.S.; Knight, T.M.; Stone, D.A.; Charman, K. Do conservation managers use scientific evidence to support their decision-making? Biol. Conserv. 2004, 119, 245–252. [Google Scholar] [CrossRef]
  36. Cabin, R.J.; Clewell, A.; Ingram, M.; McDonald, T.; Temperton, V. Bridging restoration science and practice: Results and analysis of a survey from the 2009 Society for Ecological Restoration International meeting. Restor. Ecol. 2010, 18, 783–788. [Google Scholar] [CrossRef]
  37. Fisher, R.J. Social desirability bias and the validity of indirect questioning. J. Consum. Res. 1993, 20, 303–315. [Google Scholar] [CrossRef]
  38. Rumps, J.M.; Katz, S.L.; Barnas, K.; Morehead, M.D.; Jenkinson, R.; Clayton, S.R.; Goodwin, P. Stream Restoration in the Pacific Northwest: Analysis of Interviews with Project Managers. Restor. Ecol. 2007, 15, 506–551. [Google Scholar] [CrossRef]
  39. Danielsen, F.; Mendoza, M.M.; Tagtag, A.; Alviola, P.A.; Balete, D.S.; Jensen, A.E.; Enghoff, M.; Poulsen, M.K. Increasing conservation management action by involving local people in natural resource monitoring. Ambio 2007, 36, 566–570. [Google Scholar] [CrossRef]
  40. Everson, T.M.; Everson, C.S.; Zuma, K.D. Community Based Research on the Influence of Rehabilitation Techniques on the Management of Degraded Catchments; Water Research Commission: Pretoria, South Africa, 2007. [Google Scholar]
  41. Independent Evaluation Group (IEG). Writing Terms of Reference for An Evaluation: A How-to Guide. 2011. Available online: http://siteresources.worldbank.org/EXTEVACAPDEV/Resources/ecd_writing_TORs.pdf (accessed on 15 February 2013).
  42. Ryder, D.S.; Miller, W. Setting goals and measuring success: Linking patterns and processes in stream restoration. Hydrobiologia 2005, 552, 147–158. [Google Scholar] [CrossRef]
  43. Slocombe, D.S. Defining goals and criteria for ecosystem-based management. Environ. Manag. 1998, 22, 483–493. [Google Scholar] [CrossRef]
  44. Ludwig, D. The era of management is over. Ecosystems 2001, 4, 758–764. [Google Scholar] [CrossRef]
  45. Gonzalo-Turpin, H.; Couix, N.; Hazard, L. Rethinking partnerships with the aim of producing knowledge with practical relevance: A case study in the field of ecological restoration. Ecol. Soc. 2008, 13, Article 53. [Google Scholar]
  46. Cohn, J.P. Citizen Science: Can volunteers do real research? BioScience 2008, 58, 192–197. [Google Scholar] [CrossRef]
  47. Sunderland, T.; Sunderland-Groves, J.; Shanley, P.; Campbell, B. Bridging the gap: How can information access and exchange between conservation biologists and field practitioners be improved for better conservation outcomes? Biotropica 2009, 41, 549–554. [Google Scholar] [CrossRef]
  48. Esler, K.J.; Prozesky, H.; Sharma, G.P.; McGeoch, M. How wide is the “knowing-doing” gap in invasion biology? Biol. Invasions 2010, 12, 4065–4075. [Google Scholar] [CrossRef]
  49. Folke, C.; Hahn, T.; Olsson, P.; Norberg, J. Adaptive governance of social-ecological systems. Annu. Rev. Environ. Resour. 2005, 30, 441–473. [Google Scholar] [CrossRef]
  50. Armitage, D.; Marschke, M.; Plummer, R. Adaptive co-management and the paradox of learning. Glob. Environ. Chang. 2008, 18, 86–98. [Google Scholar] [CrossRef]
  51. Williams, B.K. Passive and active adaptive management: Approaches and an example. J. Environ. Manag. 2011, 92, 1371–1378. [Google Scholar] [CrossRef] [PubMed]
  52. Ingram, M. Editorial: You don’t have to be a scientist to do science. Ecol. Restor. 2009, 27. [Google Scholar] [CrossRef]
  53. Segan, D.B.; Bottrill, M.C.; Baxter, P.W.J.; Possingham, H.P. Using conservation evidence to guide management. Conserv. Biol. 2010, 25, 200–202. [Google Scholar] [CrossRef] [PubMed]
