Article

How Can I Help You? Questioning the Role of Evaluation Techniques in Democratic Decision-Making Processes

1 Department of Regional and Urban Studies and Planning (DIST), Politecnico di Torino, Viale Mattioli 39, 10122 Turin, Italy
2 Department of Architecture and Urban Studies, Polytechnic University of Milan, via Bonardi 3, 20133 Milan, Italy
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(20), 8568; https://doi.org/10.3390/su12208568
Received: 7 August 2020 / Revised: 12 October 2020 / Accepted: 14 October 2020 / Published: 16 October 2020

Abstract:
In the past, evaluation techniques were considered "decisional techniques" or "decisional tools". There was a rough idea that, after the important data had been collected, the technique in question would, by itself, indicate the best decision. Evaluations of this kind clearly depended on the more or less implicit adoption of a "rational-comprehensive model", which tended to downplay the ethical and political dimension of decisions while stressing the role of both technique and technicians. This approach has been widely criticized. Partly as a result of such criticism, many evaluation techniques are now considered not "decisional tools" but forms of "decision aid". The problem is that the expression "decision aid" lacks clarity and is by no means unequivocal in urban decisional situations. We believe that there is a gap in research and in the academic literature in this regard. Starting from this conviction, the article investigates what being a "decision aid" might mean for a technical evaluation today. The aim is to provide a conceptual framework within which to critically revisit and re-discuss the question, with particular regard to urban sustainability issues.

1. Introduction: From Decisional Techniques to Decision-Aid Techniques

As Ernest House [1] (p. 28) notes, all evaluation approaches assume that there is a connection between decision-making and evaluation. How this connection has been interpreted, however, has changed significantly over time. We can broadly distinguish two main views: the traditional view, and a more recent one.
Traditionally, evaluation techniques were considered "decisional techniques" or "decisional tools". There was a rough idea that, after the important data had been collected, the technique in question would, by itself, indicate the best decision: for instance, the preferable choice among several alternative solutions. This was the assumption behind the first evaluative analyses of an economic character. Evaluations of this kind clearly depended on the more or less implicit adoption of a rational-comprehensive model, which tended to downplay the ethical and political dimension of decisions while stressing the role of both technique and technicians [2,3]. That approach has been widely criticized since the influential work of Lindblom [4,5]. Radaelli and Dente [6] call this early phase of evaluation research an "age of innocence".
Partly as a result of such criticism, many evaluation techniques are now considered to be not “decisional tools” but forms of “decision aid/support”. Among the first methods to appear in this “new attire” were the Planning Balance Sheet Analysis proposed by Nathaniel Lichfield [7,8,9,10] and the Goals-Achievement Matrix introduced by Morris Hill [11]. These techniques were recommended from this new viewpoint precisely because they were seen as means to overcome the hard rationale of certain traditional techniques.
During the subsequent decades, practically all evaluation techniques were developed—and expressly presented—as methods designed to aid decision-making (see e.g., [12,13,14,15,16,17,18,19,20,21]). Today, the terms “(decision) aid” and “(decision) support” are increasingly common in the acronyms denoting the new approaches. Examples include DSS (Decision Support Systems) [22], SDSS (Spatial Decision Support System) [23], CSDSS (stakeholder-driven Collaborative Spatial Decision Support System) [24], and MCDA (a classic acronym that some reinterpret today as Multi-Criteria Decision Aid) [25]. As Mahmassani and Krzysztofowicz [12] (p. 194) observe, the use of properly designed decision aid techniques “can help analysts and decision-makers focus their limited attention, information-processing capabilities, and resources on essential elements of the evaluation, thereby improving the efficiency and effectiveness of the decision-making process” (compare with [26]).
In this new perspective, evaluators are “advisors to decision-makers”; they are information generators, processors, and analysts, and facilitators of the public dialog [27] (p. 297).
The problem is that the expression “decision aid” lacks clarity and is by no means unequivocal, for instance, with reference to urban decisional problems. We believe that there is a gap in research and in the academic literature in this regard. Starting from this conviction, this article presents a critical discussion of what being a decision aid might mean for a technical evaluation today. Section 2 identifies four possible ways in which evaluation techniques can “help” decision-making. We will call them filtering, structuring, prioritizing, and involving. Section 3 shows how certain evaluation techniques (e.g., Problem Structuring Methods, Discounted Cash Flow Analysis, Cost-Benefit Analysis, Multicriteria Decision Analysis) work with regard to these four dimensions. Section 4 continues the discussion by underscoring how the current redefinition of boundaries between technical and political responsibilities entails specifying what (technical) “aid” might mean. Section 5 concludes by highlighting the main findings and possible further directions for research.
The article is mainly conceptual and is based on an extensive literature review of decision-aiding methods and techniques. The aim is to revisit the idea itself of decision aid techniques, providing a theoretical framework within which to critically re-discuss this issue.
In general terms, our discussion assumes that the focus is on evaluation techniques which support: (i) public decisions (i.e., authoritative decisions that will be binding for everyone [28]); (ii) decisions made in the framework of a constitutional democracy (i.e., in an institutional setting where public decision-makers are elected and work within the restrictions of general constraints and counterweights [29]); and (iii) decisions focused on urban transformations, which have huge social and environmental impacts.
As is well known, urban development is a crucial factor in environmental sustainability. In recent years, the concept of "sustainability" in relation to urban transformations has become increasingly complex. It now encompasses a multiplicity of different aspects: environmental, technical, physical, economic, energy-related, and social. When discussing transformations linked to the city and the urban region, therefore, it is necessary to focus attention on the multidimensional concept of sustainable development, taking into account the entire range of relationships that can influence the urban system and the local communities involved. Every urban transformation project is in fact characterized by mediation among the client's needs, institutional restrictions, financial limits, environmental protection requirements, and society's response. It is therefore necessarily generated by a set of intentions, projects, and concrete actions carried out by a multiplicity of actors, whose choices overlap and sometimes contradict each other.

2. Preliminary Conceptualization: Identifying Four Dimensions of Aid

In regard to urban transformations, the participants in the decision-making process often disagree on what priorities to pursue, or even on the true nature of the problem itself.
In these cases, there is usually a plurality of actors with no subordinate relationships among them, a high degree of autonomy, and their own interests and perspectives that induce them to pursue different objectives (and to identify different elements of the problem as “key factors”). The potential conflict is further exacerbated by the high level of uncertainty in making decisions [30].
Several aspects characterize decisions about urban transformations, and two in particular suggest the kind of help/aid that decision-makers may actually need: the temporal dimension and the cause-effect relation. Regarding the former, urban transformation operations are often fragmented processes, with different time perspectives for the various operators. Each evaluative approach requires a specific metric to render the results achieved objectively measurable and comparable. Accordingly, the challenge is to coordinate different operations, which have different potentials in terms of time, profitability, and values [31]. Regarding the latter, the point is that the effect of a plan or project cannot be determined linearly from the original schema: the final state achieved does not always fully correspond to the expected one [32,33,34]. Moreover, the effect of an urban regeneration operation can be something more than the physical transformation of the urban fabric: consider the narratives in newspapers and social media, the interaction with other projects, etc. None of this can be readily measured or negotiated [35].
Forms of support for the decision-making process regarding urban issues can be grouped into different categories, depending on the type of aid that the evaluation processes can provide to the decision-maker. We suggest distinguishing four main dimensions of aid:
- Filtering information (Fi);
- Structuring the problem (Sp);
- Prioritizing options (Po);
- Involving the public (Ip).
Filtering can be defined as the process of selecting information. It is related to the availability of data, and to the input required by the method which will be applied.
Structuring is related to the definition and representation of the decisional problem. Problems are never given. They are not simply out there waiting to be “solved”. They are always the result of evaluations (made jointly by the expert and the client), concerning for instance: the limits/boundaries to be considered (what aspects of the situation are to be included and what are to be excluded); what factors are the most worrying; what objectives are to be achieved; the timing; etc. These non-strictly objective decisions determine the nature of the problem that ultimately needs to be addressed. Only after structuring the problem is it possible to use formalized models [36].
Prioritizing, in general terms, is the activity that arranges items or activities in order of importance relative to each other. On applying evaluation techniques, there can be several situations: they range from a final ranking of alternatives to situations where the method must be applied several times to the various alternative scenarios so that solutions can be compared and ordered.
As regards involving, it should be borne in mind that some methods are inclusive and participative right from the design and processing phase, whereas others consider participation only after the output has been produced.
We will now consider these four dimensions in greater detail.

2.1. First Dimension: Filtering

Many theories on information management and filtering distinguish among data, information, and knowledge. Information is a flow of messages, while knowledge is considered to be the information embedded in people that is created through a process of social interaction [37].
Ackoff [38] defines a hierarchy of data, information, knowledge, and wisdom. Wisdom stands at the top of the hierarchy, followed by knowledge, information, and data, in that order. Data are considered products of observations and interactions, which have no value until they are processed and transformed into information [39] that can be used in the decision-making process. By refining the information, we move on to the knowledge that enables us to control a system, assessing it and eliminating errors in order to make it work properly. Lastly, wisdom is related to the ability to see the consequences of a long-term act [39].
Thus, the basic assertion is that data are used to create information; the latter is used to create knowledge; and this in turn is used to create wisdom [40]. The transition from raw data to wisdom—that is, the process whereby data are transformed into more complex phenomena—takes place through filtering, reduction, and refinement [39].
Data do not have meaning in their raw state. They are products of observation, given as symbols that represent the properties of objects, events, and environments [40]. What makes raw data of little value is that they lack context and interpretation: they arise from elementary, unprocessed observations that enable us to record events, things, activities, and clues that are, however, without a specific meaning [40]. Whether they become information hinges on the understanding of the individual who looks at them from a functional standpoint [37].
Information is contained in answers to questions about "who", "what", and "when"; these answers are derived from the data [40]. What turns data into information is the fact that they are related to a context, and that the relationships between the data and this context or situation are analyzed and understood [37] in order to make decision-making easier [41].
The transformation of each element to the next level (i.e., from data to information, from information to knowledge, and from knowledge to wisdom) involves understanding relationships, models, and principles, respectively [40]. Data lack meaning and value in themselves. They provide the basis for information, but they have no elaboration or organization. Hence, they must be converted into information through classification, sorting, aggregation, and selection. The processes that then enable information to be converted into knowledge involve organization and processing, the use of cognitive frameworks, the synthesis of multiple sources of information, and the structuring of experiences [42]. The set of values, experiences, rules, and expert opinions makes it possible to achieve knowledge. Knowledge is thus processed information that increases the ability to take effective action.
The transformation entails collecting and organizing data, the synthesis of which allows the information to be analyzed and summarized before action occurs. Through this process, the data are contextualized by structuring the consequent action, which is the basis of the decision-making process [37].
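The filtering step described above can be illustrated with a minimal sketch: raw, context-free observations are turned into information by selection and aggregation against a question of interest. The record fields (districts, indicators) and figures below are illustrative assumptions, not drawn from the article.

```python
# A minimal sketch of "filtering": raw observations (data) become
# context-bound information through selection and aggregation.
from collections import defaultdict

raw_data = [  # elementary, unprocessed observations
    {"district": "A", "indicator": "energy_use", "value": 120},
    {"district": "A", "indicator": "energy_use", "value": 80},
    {"district": "B", "indicator": "energy_use", "value": 200},
    {"district": "B", "indicator": "noise", "value": 55},
]

def to_information(records, indicator):
    """Select the records relevant to the question at hand and aggregate
    them by district: the result answers 'what, where' and is information."""
    grouped = defaultdict(list)
    for r in records:
        if r["indicator"] == indicator:      # selection (filtering)
            grouped[r["district"]].append(r["value"])
    return {d: sum(v) / len(v) for d, v in grouped.items()}  # aggregation

print(to_information(raw_data, "energy_use"))  # {'A': 100.0, 'B': 200.0}
```

Only at the next stages (interpretation against goals and experience) would this information become knowledge in the sense discussed above.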
A large number of digital technologies and tools have been produced for the collection and organization of data [43], including Decision Support Systems (DSS), and Planning Support Systems (PSS). As the literature has pointed out, however, such technologies are difficult to apply in social processes such as spatial planning [44,45].

2.2. Second Dimension: Structuring

Structuring is considered to be the “artistic” part of decision analysis; an imaginative and creative process aimed at translating an ill-defined problem into a set of well-defined relationships [46].
Structuring decision-making problems—a process that starts with a vague and ill-formulated problem and results in one that is modeled and analyzable—is the most important and crucial step in decision analysis [47]. A decision-maker's request for decision support generally stems from five kinds of reasons: taking new opportunities into account; the need for growth and expansion; controversies between different stakeholders; confusing, unknown, and sometimes conflicting facts; and, lastly, accountability requirements that oblige choices to be made on the basis of appropriate documentation [48]. Thus, the most common initial condition consists of concerns, needs, or opportunities that may be vague (and it is not known how to respond to them with a course of action that does not omit relevant aspects).
Phillips [49] suggests the concept of “requisite decision model”; that is, a model whose form and content are both sufficiently complete to solve a problem. The notion of modeling can be applied to the structure of decision analysis, which requires that structural representations be sufficiently simple and non-complicated in order to grasp the essence of the problem while also producing solid insights [50]. Thus, structuring the problem emphasizes the formulation of statements by decision-makers about their goals, interests, and concerns, and it transforms these statements into clear and transparent representations that can be formalized mathematically [50].
Problem structuring is considered to be not only a creative phase but also the most complex stage in the development of decision support systems [51,52]. Structuring primarily involves identifying problem elements—such as events, values, actors, decision-making alternatives—and their influence relations [48]. In this way, the analyst tries to identify and take into account both the objective parts of the problem and the subjective ones (such as the values and opinions of the actors involved). After identifying a set of multiple alternatives and objectives [53], the analyst will need to identify uncertainties about the outcomes of possible options and what actions should be taken to reduce these uncertainties.
In order to address all these aspects, the most important decision an analyst needs to make is therefore to choose an appropriate analytical structure before starting the numerical modeling and analysis [50].
Problem structuring can be summarized in three steps [48]:
  • Identify the problem. Often, when decision-makers turn to a decision analyst they have only a general idea of the problem. Hence, at this stage it is necessary to investigate the nature of the problem, the decision maker’s values, what stakeholders are involved in the decision, and what the purpose of the analysis is. This step can take the form of simple lists of alternatives and objectives.
  • Choose an analytical approach. In this second step, the analyst investigates the uncertainties and the conflicting values involved in the analysis. It is thus necessary to understand which analytical decision approaches can be used—and sometimes creatively combined—to explore the alternatives for solving the problem in detail [54].
  • Develop a detailed analysis structure. In this step, the focus is on developing a detailed structure for evaluating the problem. For example, hierarchies of priorities among the objectives (which characterize the different projects) will be defined, and criteria will be determined in order to assess how well the projects have contributed to accomplishing the overall objective [50].
This three-step structuring process rarely follows a strict sequence.

2.3. Third Dimension: Prioritizing

The future is created on the basis of decisions, and a credible future is based on values and priorities. It is thus necessary to be able to address the variety of factors effectively [55]. There are many factors that influence decisions and the results of the decision-making processes; and often out of impatience we think we can reduce these many diverse factors to only a few (i.e., the ones that we consider important at a certain moment). The reality is that many of these factors may not be so crucial, and actually have a low priority, while others may be very influential [55].
Evaluation priorities are set in many ways, ranging from a simple list to more sophisticated approaches that combine different parameters, criteria, and evaluation tools [56]. The latter is the case where multiple criteria are used to select and prioritize problem-solving alternatives.
The question that arises is this: how can priorities be assigned to different factors whose importance can change by many orders of magnitude [57]?
The first concern when making a decision is what to include and where to include it. A widespread approach to this issue is to define a hierarchy, for two reasons: first, because it gives an overview of the complex relationships in a situation; second, because it helps decision-makers evaluate problems at every level of the hierarchy (thus enabling them to accurately compare homogeneous elements belonging to the same order of magnitude) [58].
If a hierarchy is not considered, each alternative can be evaluated with a different performance for each criterion (which are compared one by one) [59]. By introducing a hierarchical structure, the complex problem is broken down into sub-problems, making it possible to compare fewer criteria. The hierarchical structure of an evaluation problem requires that criteria be arranged in ascending order according to the level of abstraction: the higher elements are more abstract and general and are the main objective, while the lower ones are more concrete and particular, and will be the basis on which alternatives are actually compared.
The most commonly used scheme is to have a set of alternatives that are compared in pairs, according to a hierarchy of criteria and based on the decision-makers' preferences. This pairwise system is adopted because it is easier to identify preferences by considering a pair of alternatives than by considering more than two alternatives at once [60].
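The aggregation of pairwise comparisons into priorities can be sketched as follows, in the spirit of hierarchical methods such as the Analytic Hierarchy Process. The row geometric mean is used here as a simple approximation of the principal-eigenvector weights; the comparison values in the matrix are illustrative assumptions.

```python
# A minimal sketch: derive priority weights from a reciprocal matrix of
# pairwise comparisons, using the row geometric mean as an approximation
# of the principal eigenvector.
import math

# A[i][j] = how many times more important element i is than element j
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def priorities(matrix):
    """Geometric mean of each row, normalized so the weights sum to 1."""
    gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

w = priorities(A)
print([round(x, 3) for x in w])  # weights sum to 1; element 0 ranks highest
```

In a full hierarchy, such weights would be computed at each level and combined, so that alternatives are always compared within the same order of magnitude.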

2.4. Fourth Dimension: Involving

An urban development project consists of several steps (e.g., preparation, planning, implementing) whereby specific results can be achieved through proposals, plans, changes, and recommendations [61]. Urban development includes issues such as transport, land use, infrastructure, housing, economic development, and especially sustainability, which require that a wide range of actors be included in the process [61,62]. The actors that influence or are influenced by urban planning are many and highly fragmented. Hence, the purpose of involving them should be to obtain input on concerns and priorities concerning the issues that need to be addressed and resolved.
Many multi-sectoral projects do not create a substantial role for citizens; but in many cases, solving these complex problems revolves around direct citizen participation [63].
An urban project is a multi-agent process [64]. It is supervised by strategic agents, such as public authorities and investors who apply or influence regulations. It is implemented by operational agents, such as planners who involve stakeholders through participatory processes. The stakeholders include not only public administrations and investors but also citizens, residents, organizations, and businesses.
Stakeholders can participate in one, several, or all phases of project development (preparation, planning, implementing, and evaluating), doing so in many ways, such as public meetings, conferences, focus groups, or workshops [64,65].
In recent decades, a number of approaches have been used to support decision-making by involving stakeholders in the planning process, and thus combine different types of knowledge to enhance the understanding of complex issues [66]. Involving social actors enriches planning processes. Their participation can be described as the engagement and interaction among the actors, actions, and decisions [61]. Participation, indeed, is often associated with the concepts of democracy and justice: it is the element that makes it possible to achieve socio-cultural and economic objectives [61,67].
From this perspective, citizens can: (i) help professionals to understand and frame the problems in question more accurately; (ii) help to judge the ethical or material tradeoffs needed to make a decision; (iii) provide important information for building solutions and assessing possible intervention scenarios [63].
Considering stakeholder theory in Operational Research, three issues can be emphasized: stakeholder theory can be applied as instrumental or moral theory; it can focus on tradeoffs or focus on avoiding them; and, lastly, it can focus on the decision-making organization or on engaging stakeholders [68]. Accordingly, three different approaches can be taken.
Adopting an “instrumental theory” means that stakeholders are involved in order to have some return on the project; instead, adopting a “moral theory” means focusing on the stakeholders because it is considered the right thing to do. Choosing to act according to one theory or the other has implications for decisions: if an instrumental theory is followed, only those stakeholders who can influence project performance are included, while in the case of moral theory a larger set of stakeholders is taken into account (including those who do not have the power to influence the final performance, but who can nevertheless be affected by it).
The second question relates to the difference between focusing on tradeoffs and avoiding them. In the former case, alternatives are considered as given; in the latter case, stakeholders are encouraged to look for new solutions which match the interests of all stakeholders. The choice between these two lines of action is important because, if tradeoffs are sought, the problem of which stakeholders have priority over the others will arise, whereas avoiding tradeoffs makes it possible to sidestep this issue.
Thirdly, studies on stakeholder theory show that there is a wide difference between the interests that planners/organizers consider to be important for stakeholders and their real interests [68,69]. This misidentification of interests can lead to problematic decisions when the process is implemented.
To conclude, those who emphasize the aspect of involvement require evaluators to provide “for an equal expression of the participants’ points of view and to organize the confrontation of interests”; their role “is to mediate, to facilitate by proposing methods and tools as an aid to negotiation” [70] (p. 354).

3. Rediscussing Evaluative Techniques in Terms of the Four Dimensions of Aid

Evaluation takes place in all phases of decision making about urban transformations, and several techniques and tools are available, depending (i) on the phase in which the evaluation takes place (before, during, or after the completion of an urban transformation), (ii) on the accessible data and, above all, (iii) on the purpose of the evaluation.
To pursue sustainable interventions in urban settlements, monetary and non-monetary evaluations are usually used in the ex-ante phase. A monetary evaluation is characterized by an attempt to measure all effects in monetary units, whereas a non-monetary evaluation utilizes a wide variety of measurement units.
In particular, four types of evaluation analysis are often used in this context: Problem Structuring Methods; Discounted Cash Flow Analysis; Cost-Benefit Analysis; and Multicriteria Decision Analysis. In what follows, these evaluation methods are observed through the lens of the above-mentioned four dimensions of aid.

3.1. Problem Structuring Methods

Problem Structuring Methods (PSMs) are participative and interactive techniques that focus on structuring problems rather than solving them directly [30]. PSMs were developed to bridge the gap between traditional Operational Research and decision analysis in order to address complex, ill-structured problems better. These problems are called “wicked problems” [71]; that is, complex problems for which there is no simple method of solution. Design studies received considerable attention during the 1960s and 1970s when Horst Rittel proposed this notion of wicked problems, arguing that most designers deal with this kind of problematic situation [72,73]. Indeed, design research and practice normally tackle ill-structured, ill-formulated problems (such as transforming historic urban areas, or deciding on a transportation policy, or defining policies to face climate change) [31]. In particular, “the problem for designers is to conceive and plan what does not yet exist” [73]. Designers always try to describe and control what is yet to happen by imagining the implications of choices, the possible consequences of different alternatives, and their potential links and associations [74,75]. Even if the final result and the future are unknown, it is still possible to investigate strategic approaches to managing uncertainties about future events and consequences of choices made in the present [76,77,78].
The assumption that led to the development of PSMs was that in real-world situations it is not always possible to find a single uncontested representation of the problem situation under consideration. To deal with such situations, PSMs were designed to represent problems by recognizing multiple perspectives [77]. A representation was necessary at an early stage to cover most of the characteristics that impacted on these systems, using visual, rather than analytical, models to enable: (i) understanding and discussion of the problem, (ii) increasing engagement, and (iii) identification of potential improvements.
PSMs assign low importance to Fi (Figure 1): there are no specific initial requirements for applying this method. In fact, data of any kind can be used, quantitative or qualitative, precise or rather vague, in any context. They can consider: the costs of an intervention; the geographical location; the architectural aspects of a building (e.g., the building type, the construction materials, the heating system, the parking ramp); the time of the intervention; the energy consumption; the management methods; etc. For example, the Strategic Choice Approach (SCA), the PSM most used in urban and architectural fields, recognizes the presence of different levels of uncertainty in the decision-making process, managing them with the means available to the decision-maker. More precisely, three types of uncertainty are identified: they concern the working environment, the guiding values, and the related decisions.
As their name implies, these techniques assign very high importance to Sp, since structuring is their essential purpose. Several steps and specific forms of representation guide those who apply the method to illustrate, also graphically, the fundamental elements of the decision problem. We mention, by way of example, the "decision graph", the "option graph", and the "decision scheme" in the SCA, and the "rich picture" in Soft Systems Methodology.
Po is assigned medium-low importance, because PSMs allow comparison among alternatives, but in qualitative terms and with a certain degree of approximation. Again in the SCA, the concepts of "comparison area", "relative assessment", and "advantage comparison" are explicitly very general, and can be adapted to guide the work of the comparing mode at a variety of levels, ranging from a rough definition of "pros" and "cons" to a very detailed quantification.
PSMs assign very high importance to Ip. Involving is crucial precisely because PSMs first arose as participatory techniques enabling participants to clarify their values, converge on a potentially actionable mutual problem, and agree on commitments that will at least partially resolve it. PSMs were born to be cognitively accessible to actors with a range of backgrounds and without specialist training, so that the developing representation can inform a participative process of problem structuring [77]. Involving occurs in all three IPO phases (Input, Processing, Output), but mainly in the first two.

3.2. Discounted Cash Flow Analysis

Discounted Cash Flow Analysis (DCFA) is a quantitative economic evaluation technique. It is a form of financial evaluation that analyzes the net present value of an investment in a project [79,80,81].
The DCFA is presented as a matrix in which the rows show incoming and outgoing financial flows and the columns show the periods of time into which the project is divided; the overall duration is set according to the time the project is estimated to take.
A project’s profitability is evaluated by discounting cash flows. Two synthetic indicators of financial profitability are obtained: the Net Present Value (NPV), which is the sum of all discounted future cash flows, and the Internal Rate of Return (IRR), which is the annualized rate of return on the invested capital.
The return is calculated by considering the revenues and costs, and the difference between them, for each time period. The flows thus obtained must be discounted, because revenues and costs occurring in different periods cannot be compared directly: owing to compound interest, they cannot simply be added up, so an appropriate discount factor is applied to reduce each future financial value to its present value.
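The two indicators can be sketched in a few lines of Python (a minimal illustration with hypothetical cash flows; a real DCFA works on the full cost/revenue matrix described above):

```python
def npv(rate, cash_flows):
    """Net Present Value: the sum of the net flows (revenues minus
    costs) discounted back to period 0 at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal Rate of Return: the discount rate at which NPV = 0,
    found here by bisection (assumes a single sign change of the NPV
    between lo and hi)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical operation: 1000 invested at period 0, then three
# periods of net revenues. A positive NPV at the chosen discount
# rate indicates that the operation is financially feasible.
flows = [-1000, 300, 420, 680]
print(npv(0.05, flows))  # positive -> feasible at a 5% rate
print(irr(flows))        # the rate at which the NPV falls to zero
```

Comparing two alternative projects then amounts to running the same calculation on each set of flows and preferring the higher NPV (or IRR), which is why a separate application is needed per scenario.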
DCFA assigns high importance to Fi (Figure 2): only quantitative monetary data can be used. In more detail, in an urban transformation it is necessary to estimate the costs of acquisition of the property (building or land to be converted), remediation, construction, design, supervision of works, etc. Similarly, as regards revenues, it is necessary to appraise possible sales/lease prices, sales times, and quantities sold. Managing missing data is very difficult; for this reason, it is necessary to perform accurate market research and obtain all the data required for the analysis.
Low importance is given to Sp because the problem is already structured in terms of the costs and revenues that the project/plan will generate. The more detailed the urban transformation project is, the more accurate the cost estimate and the more reliable the market analysis will be. The only hypothesis that must be advanced concerns the operation’s periodization and overall duration.
As regards Po, DCFA assigns it high importance, given that this technique gives a clear indication of whether or not the operation is feasible (and the indicators are all monetary, referring to actual flows). For example, when comparing two alternative projects to transform an area with the same investment, DCFA suggests choosing the project that generates the higher profitability. The only difficulty is that a separate application is needed for each scenario to be compared.
DCFA assigns low importance to Ip. Once the information has been collected, the method is applied by the evaluator, and the public can be involved in participatory discussion (of the results) only at the output stage.

3.3. Cost Benefit Analysis

Cost-Benefit Analysis (CBA) is conducted to find, for a given problem, the solution that will achieve the greatest overall societal welfare. CBA is a systematic and analytical process of comparing the benefits and costs of a project, often of a social nature. It is a formal technique for making informed decisions on the use of society’s scarce resources [82].
A CBA should include all the benefits and costs associated with an action, whether those goods are marketed (and therefore have price tags) or lie outside normal market operations (for instance, air quality or climate change). CBA is often conducted to estimate environmental assets in hypothesized urban and territorial transformations because it can also take into account the externalities generated by the intervention. Externalities are the effects—advantageous (positive externalities) or disadvantageous (negative externalities)—exerted on the production or consumption activity of one individual by the production or consumption activity of another individual, which are not reflected in the prices paid or received. Positive externalities include, for example, an increase in real estate values in the presence of historical-architectural or landscape resources, local development caused by the presence of commercial activities, etc. Negative externalities include damage to historical assets caused by overcrowding, water pollution associated with the use of pesticides in agriculture, etc.
When it is not possible to assign a market value to a given impact, the value is estimated directly by using stated preference methods such as willingness-to-pay and willingness-to-accept, or indirectly by using revealed preference methods like hedonic pricing [83].
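As an illustration of the revealed preference logic, hedonic pricing regresses observed market prices on the characteristics of the goods and reads the implicit (shadow) price of a non-market attribute off the corresponding coefficient. A minimal sketch with entirely invented data, generated from an assumed linear relation price = 50,000 + 2,500 * area - 20,000 * distance, so that ordinary least squares recovers the coefficients exactly:

```python
import numpy as np

# Invented dwellings: floor area (m^2) and distance to a park (km),
# with sale prices generated from the assumed linear relation above.
features = np.array([[80, 0.2], [65, 1.5], [120, 0.4], [95, 2.0], [70, 0.8]])
prices = np.array([246_000, 182_500, 342_000, 247_500, 209_000])

# Ordinary least squares: price = b0 + b_area * area + b_dist * distance.
X = np.column_stack([np.ones(len(features)), features])
(b0, b_area, b_dist), *_ = np.linalg.lstsq(X, prices, rcond=None)

# b_dist is the shadow price of proximity to the amenity: the market
# discount per extra km from the park, all else being equal.
print(b0, b_area, b_dist)
```

Here b_dist comes out negative (about 20,000 EUR per km, by construction), expressing the environmental amenity in monetary terms even though the amenity itself is never traded on any market.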
CBA assigns medium importance to Fi (Figure 3) because both monetary and non-monetary data are considered. This assessment is linked to the fact that, on the one hand, the spectrum of information that can be included in this analysis is very broad; on the other hand, the technique is more difficult to apply than DCFA, because special procedures must be followed to convert all data into monetary flows. In an urban redevelopment project, the costs to be estimated include, for example: direct costs (real estate redevelopment costs); indirect costs (the inconvenience incurred by the regular users of that specific urban area); and intangible costs (the inconvenience due to noise and, more generally, to the pollution generated by construction sites). The benefits comprise direct benefits (the increase in value of the buildings after the redevelopment project), indirect benefits (the improvement of safety conditions in the requalified urban environment), and intangible benefits (the contribution to reducing social degradation).
Sp is assigned medium importance. Although the structuring of the problem is apparently very well defined (as in the DCFA), actually calculating the direct and indirect costs and benefits for which there is no active market (e.g., environmental goods or valuable architectural assets) requires the definition of their “shadow price” (i.e., an estimate of the probable value reflecting the real scarcity of the asset). This operation is complex and delicate.
CBA assigns high importance to Po because the output is a number that “looks” like currency. Indeed, the weak aspect of this technique is that, being based mainly on monetization, it can distort the values at stake.
As regards Ip, CBA assigns it low importance. If involving takes place at all, it is only after the experts have applied the technique that the output can be used in discussions at the decision tables.

3.4. Multicriteria Decision Analysis

In general, multicriteria evaluation “is primarily regarded as an aid in the process of decision-making and not necessarily as a means of coming to a singular optimal solution” [17] (p. 172). In fact, it is generally recognized that MCDA can provide useful support in structuring decision processes involving urban transformations because it enables several aspects of a complex situation to be considered [84,85]. MCDA helps decision-makers to consider qualitative and quantitative aspects and different points of view, as well as to integrate different options. However, since there are many kinds of MCDA, careful attention should be paid to selecting the method most suitable for the decision context analyzed [86].
Because MCDA encompasses a large family of techniques, we will apply our scheme to two related methods widely used in the evaluation of sustainable urban and territorial projects: the Analytic Hierarchy Process (AHP) and the Analytic Network Process (ANP).
The AHP [58,87] is an MCDA method based on ratio scales for producing performance scores for the criteria considered and determining their importance. AHP structures the problem at hand hierarchically: the overall goal is at the top of the hierarchy, the alternatives to be decided are at the bottom, and the criteria used to evaluate the alternatives lie in the middle, between the overall goal and the alternatives themselves. AHP uses a system of pairwise comparisons to weight the criteria and rank the alternatives. The basic idea of the methodology is to transform the evaluation of performance on each criterion into a subjective measure of attractiveness.
The ANP is a generalization of the AHP. The basic structure is an influence network of clusters and nodes contained within the clusters. Priorities are established in the same way as in the AHP, using pairwise comparisons and judgment. Many decision problems cannot be structured hierarchically because they involve the interaction and dependence of higher-level elements in a hierarchy on lower-level elements. Not only does the importance of the criteria determine the importance of the alternatives as in a hierarchy, but also the importance of the alternatives themselves determines the importance of the criteria [88].
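The pairwise comparison step shared by AHP and ANP can be sketched as follows: a hypothetical judgment matrix on Saaty's 1-9 scale for three criteria, whose normalized principal eigenvector gives the weights, with the consistency index CI = (lambda_max - n) / (n - 1) computed as Saaty prescribes:

```python
import numpy as np

# Hypothetical pairwise comparisons on Saaty's 1-9 scale for three
# criteria (say cost, environmental impact, accessibility):
# entry [i, j] states how much more important criterion i is than j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# Criterion weights: the principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights = weights / weights.sum()

# Consistency index CI = (lambda_max - n) / (n - 1); Saaty accepts the
# judgments when the ratio of CI to a random index stays below 0.1.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print(weights, ci)
```

The same eigenvector machinery, applied within an influence network of clusters rather than a strict hierarchy, underlies the ANP's supermatrix of priorities.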
MCDA assigns low importance to Fi because both qualitative and quantitative data can be used (Figure 4). MCDA was created to supersede the use of exclusively monetary indicators in evaluating complex projects. Thus, in urban areas it makes it possible to take account of all aspects, each in its own unit of measurement: transport aspects (e.g., the distance between underground stations, the frequency of vehicles, etc.); environmental aspects (e.g., the type of land reclamation required, the impacts on local fauna, etc.); architectural aspects (e.g., the aesthetics of a building or a bridge, the choice of building type); and so on.
Sp is assigned high importance; this is a fundamental part of the method’s success. MCDA in fact analyzes the objectives by structuring the complex problem: the criteria for the selection of alternatives are expressed in terms of specific targets rather than general rules. The structure and the model underlying the articulation of objectives play an important role and, moreover, the approach is closely linked to the analysis of the preferences of decision-makers.
MCDA assigns high importance to Po because the method’s output, in the case of both AHP and ANP, is a ranking of alternative solutions to the problem. Such a result, with the ranking of alternatives obtained according to the decision-maker’s preference judgements, is particularly effective because it is easily understandable and communicable.
MCDA also assigns high importance to Ip because it is a participatory tool by definition. A participatory approach, involving users, planners, and decision-makers at all levels, is one of the factors in the dissemination and success of MCDA. In fact, those involved in a decision-making process are often not able to specify, at first approximation, all the requirements and expectations with respect to the problem to be assessed. Their continuous involvement affords greater understanding of the problem itself, encouraging commitment to a solution.

4. Discussion: Where to Draw the Line between Technical Aid and Political Decisions (and What Kind of Line)

After revisiting certain decision-aid evaluation techniques and their support role, it might be interesting to reverse our basic starting question by asking: in what way does the public decision-maker (in a constitutional democracy) need aid (e.g., to decide about urban issues that have an important social and environmental impact)?
Obviously, any public decision needs to be underpinned by theories and empirical evidence, such as, for instance: “If you do X (e.g., you create a certain urban expansion), Y will happen (e.g., there will be a certain environmental impact)”. But does the decision-maker need something more?
The point is that, today, all evaluation techniques either implicitly or explicitly assume that this is the case and focus on doing and offering more. Specifically, they do not focus simply on providing explanatory theories and empirical evidence about certain phenomena (urban and environmental, for instance), but rather on organizing and structuring elements and aspects of the decision-making process itself.
In other words, the aim is not simply evidence-based decision-making (an idea that has often been applied in an orthodox positivistic perspective [89]), but, rather, appropriately framed, appropriately conducted, decision-making [90].
The crucial question is therefore: Where can we draw the line between decisional responsibility (and competence) assigned to public decision-makers and the responsibility (and competence) of technicians in assisting/supporting them? Specifically, how can policy and science/technique be successfully and effectively combined today? Unfortunately, the tragic events of the COVID-19 pandemic have shown that none of this can be taken for granted [91].
This article does not claim to provide a direct answer to this crucial background question; rather, it wishes to suggest that, today, such an answer would require a critical debate and clearer understanding of what “aid” (to decision-making) can actually mean.
As the article has sought to show, the point is not so much the type of relationship between “evaluation” and “decision” (as is often repeated and overgeneralized in the literature on urban and environmental issues), but between “technical evaluation” and “political evaluation (and decision)”. In other words, some type of evaluation is always involved, whether in the technical sphere or in the political one.
Hence, the issue is not to distinguish between a completely neutral role—that of “technicians”—and an intrinsically non-neutral one—that of “politicians”. (For a critical assessment of this traditional view, see [92].) Instead, we need to distinguish between the roles and responsibilities of “non-elected experts”, and the roles and responsibilities of “elected decision-makers” [93]. We also need better understanding of how the former can actually assist and support the latter, obviously being unable to replace them; and without providing them with excuses to shirk their unavoidable decisional responsibilities [94].
In the end, evaluation cannot substitute for the decision-making process. As a consequence, the real critical concern is the actual impact of evaluation on the decision process [6] (p. 64).

5. Conclusions

There is a recurrent question in the field of evaluation techniques that is crucial also when addressing urban sustainability issues: Is the role of analysts/evaluators (i) to make the decision themselves by recommending a specific course of action, or is it rather (ii) to present the problem and its implications? [95] (p. 109).
The first option is the one traditionally adopted by evaluation techniques considered as “decisional techniques”. As indicated in the Introduction (Section 1), this position is open to criticism, although the role of evaluation techniques could be easily specified from this perspective.
The second option is the one more recently adopted (and critically presented and discussed in this article: Section 2, Section 3 and Section 4) according to which evaluation techniques are instead “decision-aid techniques”. This recent substantial change in the meaning of evaluation techniques has made their role less clear, or at least less intuitively understandable. While certain evaluation techniques have become particularly sophisticated today, their actual role as decision aids has not been explored in detail.
Recognizing this gap, this article has attempted to (i) identify and distinguish four possible dimensions of “aid” (i.e., filtering, structuring, prioritizing, involving) and, in this light, to (ii) critically re-examine some evaluation techniques (i.e., PSMs, DCFA, CBA, MCDA). As we have seen, different evaluation techniques assign different importance to the various dimensions of aid identified. Hence, the choice of one technique over another depends on the type of aid expected and obtainable. This is a crucial question in political decisions regarding urban sustainability issues, where the traditional linear economic model “take-make-dispose”, based on access to large quantities of resources and energy, is no longer suitable. The most recent debate on the sustainability of the urban environment focuses on the possibility of designing cities on the basis of the concept of the circular economy. In a complex strategy in which the different “linear economies” of the various actors involved have to be combined, it seems essential to revisit the role of decision support instruments in this light as well.
This article is conceptual and therefore has the typical limitations of mainly theoretical inquiries. We hope it is nevertheless helpful in critically revisiting a crucial issue—what kind of aid evaluation techniques can provide—which also has important practical implications (for instance, in addressing urban environmental problems). Further research could empirically assess the concrete support provided by specific applications of the evaluation techniques discussed. Clearly, the techniques considered here are merely a sample, and the range could be extended to include other techniques. New research could also explore the possibility of revised or new decision-aid techniques.

Author Contributions

Conceptualization and methodology: S.M. Formal analysis and investigation: I.M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. House, E. Evaluating with Validity; Sage: London, UK, 1980. [Google Scholar]
  2. Palfrey, C.; Thomas, P. Politics and policy evaluation. Public Policy Adm. 1999, 14, 58–70. [Google Scholar] [CrossRef]
  3. Barthélemy, J.P.; Bisdorff, R.; Coppin, G. Human centered processes and decision support systems. Eur. J. Oper. Res. 2002, 136, 233–252. [Google Scholar] [CrossRef]
  4. Lindblom, C.E. The Science of Muddling Through. Public Adm. Rev. 1959, 19, 79–88. [Google Scholar] [CrossRef]
  5. Lindblom, C.E. The Intelligence of Democracy; The Free Press: New York, NY, USA, 1965. [Google Scholar]
  6. Radaelli, C.M.; Dente, B. Evaluation strategies and analysis of the policy process. Evaluation 1996, 2, 51–66. [Google Scholar] [CrossRef]
  7. Lichfield, N. Economics of Planned Development; Estates Gazette: London, UK, 1956. [Google Scholar]
  8. Lichfield, N. Cost-Benefit Analysis in City Planning. J. Am. Inst. Plan. 1960, 26, 273–279. [Google Scholar] [CrossRef]
  9. Lichfield, N. Cost-Benefit Analysis in Plan Evaluation. Town Plan. Rev. 1964, 35, 159–169. [Google Scholar] [CrossRef]
  10. Lichfield, N. Economics in Town Planning: A Basis for Decision Making. Town Plan. Rev. 1968, 39, 5–20. [Google Scholar] [CrossRef]
  11. Hill, M. A Goals-Achievement Matrix for Evaluating Alternative Plans. J. Am. Inst. Plan. 1968, 34, 19–29. [Google Scholar] [CrossRef]
  12. Mahmassani, H.; Krzysztofowicz, R. A Behaviorally Based Framework for Multicriteria Decisionmaking under Uncertainty in the Urban Transportation Context. Environ. Plan. B Plan. Des. 1983, 10, 193–206. [Google Scholar] [CrossRef]
  13. Hokkanen, J.; Salminen, P.; Rossi, E.; Ettala, M. The choice of a solid waste management system using the ELECTRE II decision-aid method. Waste Manag. Res. 1995, 13, 175–193. [Google Scholar] [CrossRef]
  14. Floc’hlay, B.; Plottu, E. Democratic evaluation: From empowerment evaluation to public decision-making. Evaluation 1998, 4, 261–277. [Google Scholar] [CrossRef]
  15. Lipshitz, G.; Massam, B.H. Classification of development towns in Israel by using multicriteria decision aid techniques. Environ. Plan. A 1998, 30, 1279–1294. [Google Scholar] [CrossRef]
  16. Klauer, B.; Drechsler, M.; Messner, F. Multicriteria analysis under uncertainty with IANUS—method and empirical results. Environ. Plan. C Gov. Policy 2006, 24, 235–256. [Google Scholar] [CrossRef][Green Version]
  17. Proctor, W.; Drechsler, M. Deliberative multicriteria evaluation. Environ. Plan. C Gov. Policy 2006, 24, 169–190. [Google Scholar] [CrossRef]
  18. Mrak, I. A methodological framework based on the dynamic-evolutionary view of heritage. Sustainability 2013, 5, 3992–4023. [Google Scholar] [CrossRef][Green Version]
  19. Abastante, F.; Lami, I.M. An integrated assessment framework for the requalification of districts facing urban and social decline. In Integrated Evaluation for the Management of Contemporary Cities; Green Energy and Technology; Mondini, G., Fattinnanzi, E., Oppio, A., Bottero, M., Stanghellini, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2018; pp. 535–545. [Google Scholar]
  20. Nosal Hoy, K.; Solecka, K.; Szarata, A. The application of the multiple criteria decision aid to assess transport policy measures focusing on innovation. Sustainability 2019, 11, 1472. [Google Scholar] [CrossRef][Green Version]
  21. Della Spina, L. Adaptive sustainable reuse for cultural heritage: A multiple criteria decision aiding approach supporting urban development processes. Sustainability 2020, 12, 1363. [Google Scholar] [CrossRef][Green Version]
  22. Davis, J.R.; Grant, I.W. ADAPT: A knowledge-based decision support system for producing zoning schemes. Environ. Plan. B Plan. Des. 1987, 14, 53–66. [Google Scholar] [CrossRef]
  23. Janssen, R. “A Support System for Environmental Decisions”. In Evaluation Methods for Urban and Regional Plans; Shefer, D., Voogd, H., Eds.; Pion: London, UK, 1990; pp. 159–173. [Google Scholar]
  24. O’Connell, I.J.; Keller, C.P. Design of decision support for stakeholder-driven collaborative land valuation. Environ. Plan. B Plan. Des. 2002, 29, 607–628. [Google Scholar] [CrossRef][Green Version]
  25. Zopounidis, C.; Doumpos, M. Multi-criteria decision aid in financial decision making: Methodologies and literature review. J. Multi-Criteria Decis. Anal. 2002, 11, 167–186. [Google Scholar] [CrossRef]
  26. Xiang, W.N. Making better, quicker, and wiser decisions with a decision facilitating and advising system. Environ. Plan. B Plan. Des. 1996, 23, 401–419. [Google Scholar] [CrossRef]
  27. Alexander, E. “Implementing Norms in Practice. The Institutional Design of Evaluation”. In Beyond Benefit Cost Analysis; Miller, D., Patassini, D., Eds.; Ashgate: Aldershot, UK, 2005; pp. 295–310. [Google Scholar]
  28. Easton, D. The Political System; Alfred A. Knopf: New York, NY, USA, 1960. [Google Scholar]
  29. Rawls, J. A Theory of Justice; Harvard University Press: Cambridge, MA, USA, 1971. [Google Scholar]
  30. Rosenhead, J. What’s the problem? An introduction to problem structuring methods. Interfaces 1996, 26, 117–131. [Google Scholar] [CrossRef]
  31. Lami, I.M. The context of urban renewals as a “super-wicked” problem. In New Metropolitan Perspectives; Smart Innovation, Systems and Technologies; Calabrò, F., Della Spina, L., Bevilacqua, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2019; Volume 100, pp. 249–255. [Google Scholar]
  32. Armando, A.; Durbiano, G. Teoria del Progetto Architettonico. Dai Disegni agli Effetti; Carocci: Roma, Italy, 2017. [Google Scholar]
  33. Todella, E.; Lami, I.M.; Armando, A. Experimental Use of Strategic Choice Approach (SCA) by Individuals as an Architectural Design Tool. Group Decis. Negot. 2018, 27, 811–826. [Google Scholar] [CrossRef]
  34. Tavella, E.; Lami, I.M. Negotiating perspectives and values through soft OR in the context of urban renewal. J. Oper. Res. Soc. 2019, 70, 136–161. [Google Scholar] [CrossRef]
  35. Healey, P. Urban Complexity and Spatial Strategies. Towards a Relational Planning for Our Times; Routledge: London, UK; New York, NY, USA, 2007. [Google Scholar]
  36. Mingers, J. Soft OR comes of age—but not everywhere! Omega 2011, 39, 729–741. [Google Scholar] [CrossRef]
  37. Light, D.; Wexler, D.; Heinze, J. How Practitioners Interpret and Link Data to Instruction: Research Findings on New York City Schools’ Implementation of the Grow Network; AERA: New York, NY, USA, 2004. [Google Scholar]
  38. Ackoff, R.L. From data to wisdom. J. Appl. Syst. Anal. 1989, 16, 3–9. [Google Scholar]
  39. Bernstein, J.H. The data-information-knowledge-wisdom hierarchy and its antithesis. In Proceedings of the 2nd North American Symposium on Knowledge Organization held at Syracuse University, Syracuse, NY, USA, 18–19 June 2009; Volume 2, pp. 68–75. [Google Scholar]
  40. Rowley, J. The wisdom hierarchy: Representations of the DIKW hierarchy. J. Inf. Sci. 2007, 33, 163–180. [Google Scholar] [CrossRef][Green Version]
  41. Awad, E.M.; Ghaziri, H.M. Knowledge Management; Pearson Education International: Upper Saddle River, NJ, USA, 2004. [Google Scholar]
  42. Chaffey, D.; Wood, S. Business Information Management: Improving Performance Using Information Systems; FT Prentice Hall: Harlow, UK, 2005. [Google Scholar]
  43. Te Brömmelstroet, M. Making Planning Support Systems Matter: Improving the Use of Planning Support Systems for Integrated Land Use and Transport Strategy-Making. Ph.D. Thesis, UvA-DARE (Digital Academic Repository), Amsterdam, The Netherlands, 2010. [Google Scholar]
  44. Geertman, S.; Stillwell, J. Planning Support Systems: Best Practice and New Methods; Springer: Dordrecht, The Netherlands, 2009. [Google Scholar]
  45. Andrienko, G.; Andrienko, N.; Jankowski, P. Geo-visual analytics for spatial decision support: Setting the research agenda. Int. J. Geogr. Inf. Sci. 2007, 21, 839–857. [Google Scholar] [CrossRef]
  46. von Winterfeldt, D. Structuring Decision Problems for Decision Analysis. Acta Psychol. 1980, 45, 71–93. [Google Scholar] [CrossRef]
  47. Bruhn Barfod, M. An MCDA approach for the selection of bike projects based on structuring and appraising activities. Eur. J. Oper. Res. 2012, 218, 810–818. [Google Scholar] [CrossRef][Green Version]
  48. von Winterfeldt, D.; Edwards, W. Defining a decision analytic structure. In Advances in Decision Analysis; Edwards, W., Miles, R.F., von Winterfeldt, D., Eds.; Cambridge University Press: New York, NY, USA, 2007; pp. 81–103. [Google Scholar]
  49. Phillips, L. Decision conferencing. In Advances in Decision Analysis: From Foundations to Applications; Edwards, W., Miles, R., Jr., von Winterfeldt, D., Eds.; Cambridge University Press: New York, NY, USA, 2007; pp. 375–399. [Google Scholar]
  50. Lami, I.M.; Abastante, F.; Bottero, M.; Masala, E.; Pensa, S. MCDA and interactive maps: An integrated approach for supporting the evaluation of transport strategies. EURO J. Decis. Process. 2014, 2, 281–312. [Google Scholar] [CrossRef][Green Version]
  51. Comes, T.; Conrado, C.; Hiete, M.; Kameramaus, M.; Pavlin, G.; Wijngaards, N. An intelligent decision support system for decision making under uncertainty in distributed reasoning frameworks. In Proceedings of the 7th International ISCRAM Conference, Seattle, WA, USA, 2–5 May 2010. [Google Scholar]
  52. Belton, V.; Stewart, T.J. Problem structuring and multiple criteria decision analysis. In Trends in Multiple Criteria Decision Analysis; Ehrgott, M., Figueira, J.R., Greco, S., Eds.; Springer: New York, NY, USA, 2010. [Google Scholar]
  53. Keeney, R.L. Value-Focused Thinking a Path to Creative Decision Making; Harvard University Press: Cambridge, MA, USA, 1992. [Google Scholar]
  54. Keeney, R.L. Decision Analysis: An Overview. Oper. Res. 1982, 30, 803–838. [Google Scholar] [CrossRef][Green Version]
  55. Saaty, T.L.; Shang, J.S. An innovative orders-of-magnitude approach to AHP-based multi-criteria decision making: Prioritizing divergent intangible humane acts. Eur. J. Oper. Res. 2011, 214, 703–715. [Google Scholar] [CrossRef]
  56. Spilsbury, M.J.; Norgbey, S.; Battaglino, C. Priority setting for evaluation: Developing a strategic evaluation portfolio. Eval. Program Plan. 2014, 46, 47–57. [Google Scholar] [CrossRef]
  57. Millet, I.; Saaty, T.L. On the relativity of relative measures—Accommodating both rank preservation and rank reversals in the AHP. Eur. J. Oper. Res. 2000, 121, 205–212. [Google Scholar] [CrossRef]
  58. Saaty, T.L. How to make a decision: The Analytic Hierarchy Process. Eur. J. Oper. Res. 1990, 48, 9–26. [Google Scholar] [CrossRef]
  59. Corrente, S.; Greco, S.; Słowiński, R. Multiple criteria hierarchy process in robust ordinal regression. Decis. Support Syst. 2012, 53, 660–674. [Google Scholar] [CrossRef]
  60. Kim, D.-H.; Kim, K.-J.; Park, K.S. Compromising prioritization from pairwise comparisons considering type I and II errors. Eur. J. Oper. Res. 2010, 204, 285–293. [Google Scholar] [CrossRef]
  61. Erfani, G.; Roe, M. Institutional stakeholder participation in urban redevelopment in Tehran: An evaluation of decisions and actions. Land Use Policy 2019, 91, 104367. [Google Scholar] [CrossRef]
  62. Wallbaum, H.; Krank, S.; Teloh, R. Prioritizing Sustainability Criteria in Urban Planning Processes: Methodology Application. Urban Plan. Dev. 2011, 137, 20–28. [Google Scholar] [CrossRef]
  63. Fung, A. Putting the Public Back into Governance: The Challenges of Citizen Participation and Its Future. Public Adm. Rev. 2015, 75, 513–522. [Google Scholar] [CrossRef]
  64. Cohen, M.; Wiek, A. Identifying Misalignments between Public Participation Process and Context in Urban Development. Chall. Sustain. 2017, 5, 11–22. [Google Scholar] [CrossRef][Green Version]
  65. Abastante, F.; Lami, I.M. A stakeholders-oriented approach to analyze the case of the UNESCO’s man and biosphere reserve CollinaPo. In Values and Functions for Future Cities; Green Energy and, Technology; Mondini, G., Oppio, A., Stanghellini, F., Bottero, M., Abastante, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; pp. 325–338. [Google Scholar]
  66. Corral, S.; Monagas, M.C. Social involvement in environmental governance: The relevance of quality assurance processes in forest planning. Land Use Policy 2017, 67, 710–715. [Google Scholar] [CrossRef]
  67. Ferilli, G.; Sacco, P.L.; Tavano Blessi, G. Beyond the rhetoric of participation: New challenges and prospects for inclusive urban regeneration. City Cult. Soc. 2016, 7, 95–100. [Google Scholar] [CrossRef]
  68. de Gooyert, T.V.; Rouwette, E.; Van Kranenburg, H.; Freeman, F. Reviewing the role of stakeholders in Operational Research: A stakeholder theory perspective. Eur. J. Oper. Res. 2017, 262, 402–410. [Google Scholar] [CrossRef]
  69. Bryson, J. What to do when stakeholders matter. Public Manag. Rev. 2004, 6, 21–54. [Google Scholar] [CrossRef]
  70. Plottu, B.; Plottu, E. Approaches to participation in evaluation: Some conditions for implementation. Evaluation 2009, 15, 343–359. [Google Scholar] [CrossRef][Green Version]
  71. Rittel, H.W.J.; Webber, M.M. Dilemmas in a General Theory of Planning. Policy Sci. 1973, 4, 155–169. [Google Scholar] [CrossRef]
  72. Sutton, J. Non-Cooperative Bargaining Theory: An Introduction. Rev. Econ. Stud. 1986, 53, 709–724. [Google Scholar] [CrossRef][Green Version]
  73. Buchanan, R. Wicked Problems in Design Thinking. Des. Issues 1992, 8, 5–21. [Google Scholar] [CrossRef]
  74. Lami, I.M.; Todella, E. Facing urban uncertainty with the strategic choice approach: The introduction of disruptive events. Riv. Di Estet. 2020, 71, 222–240. [Google Scholar] [CrossRef]
  75. Fregonese, E.; Lami, I.M.; Todella, E. Aesthetic Perspectives in Group Decision and Negotiation Practice. Group Decis. Negot. 2020. [Google Scholar] [CrossRef]
  76. Friend, J.K.; Hickling, A. Planning under Pressure: The Strategic Choice Approach, 3rd ed.; Elsevier: New York, NY, USA, 2005. [Google Scholar]
  77. Mingers, J.; Rosenhead, J. Problem structuring methods in action. Eur. J. Oper. Res. 2004, 152, 530–554. [Google Scholar] [CrossRef]
  78. Lami, I.M.; Tavella, E. On the usefulness of soft OR models in decision making: A comparison of Problem Structuring Methods supported and self-organized workshops. Eur. J. Oper. Res. 2019, 275, 1020–1036. [Google Scholar] [CrossRef]
  79. Damodaran, A. Valuation approaches and metrics: A survey of the theory and evidence. Found. Trends Financ. 2005, 1, 693–784. [Google Scholar] [CrossRef][Green Version]
  80. Hoesli, M.; MacGregor, B.D. Property Investment: Principles and Practice of Portfolio Management; Pearson Education: Edinburgh, UK, 2000. [Google Scholar]
  81. Greaves, M.J. Discounted cash flow techniques and current methods of income valuation. Estates Gaz. 1972, 223, 2147–2151, 2339–2345. [Google Scholar]
  82. Mishan, E.J.; Quah, E. Cost—Benefit Analysis, 5th ed.; Routdlege: London, UK; New York, NY, USA, 2007. [Google Scholar]
  83. Loomis, J.; Helfand, G. Environmental Policy Analysis for Decision Making; Kluwer Academic Publishers: New York, NY, USA, 2003. [Google Scholar]
  84. Roy, B.; Bouyssou, D. Aide Multicritère à la Decision: Methodes et Cas; Economica: Paris, France, 1993. [Google Scholar]
  85. Figueira, J.; Greco, S.; Ehrgott, M. Multiple Criteria Decision Analysis: State of the Art Surveys; Kluwer Academic Publishers: Boston, MA, USA; Dordrecht, The Netherlands; London, UK, 2005. [Google Scholar]
  86. Roy, B.; Słowiński, R. Questions guiding the choice of a multicriteria decision aiding method. Euro J. Decis. Process. 2013, 1, 69–97. [Google Scholar] [CrossRef][Green Version]
  87. Saaty, T.L. The Analytic Hierarchy Process; MacGraw-Hill: New York, NY, USA, 1980. [Google Scholar]
  88. Saaty, T.L. Fundamentals of the analytic network process—Dependence and feedback in decision-making with a single network. J. Syst. Sci. Syst. Eng. 2004, 13, 129–157. [Google Scholar] [CrossRef]
  89. Sanderson, I. Evaluation, policy learning and evidence-based policy making. Public Adm. 2002, 80, 1–22. [Google Scholar] [CrossRef]
  90. Martin, L. Incorporating values into sustainability decision-making. J. Clean. Prod. 2015, 105, 146–156. [Google Scholar] [CrossRef]
  91. Van Dooren, W.; Noordegraaf, M. Staging Science: Authoritativeness and Fragility of Models and Measurement in the Covid-19 Crisis. Public Adm. Rev. 2020. [Google Scholar] [CrossRef] [PubMed]
  92. Owens, S.; Rayner, T.; Bina, O. New agendas for appraisal: Reflections on theory, practice, and research. Environ. Plan. A 2004, 36, 1943–1959. [Google Scholar] [CrossRef][Green Version]
  93. Mazza, L. Attivista e gentiluomo? Arch. Studi Urbani E Regionali 1993, 48, 29–62. [Google Scholar]
  94. Rayner, S. Democracy in the age of assessment: Reflections on the roles of expertise and democracy in public-sector decision making. Sci. Public Policy 2003, 30, 163–170. [Google Scholar] [CrossRef]
  95. Shefer, D.; Kaess, L. Evaluation Methods in Urban and Regional Planning: Theory and Practice. In Evaluation Methods for Urban and Regional Plans; Shefer, D., Voogd, H., Eds.; Pion: London, UK, 1990; pp. 97–115. [Google Scholar]
Figure 1. Importance of Fi, Sp, Po, Ip, for Problem Structuring Methods (PSMs).
Figure 2. Importance of Fi, Sp, Po, Ip for DCFA.
Figure 3. Importance of Fi, Sp, Po, Ip for Cost-Benefit Analysis (CBA).
Figure 4. Importance of Fi, Sp, Po, Ip for Multi-Criteria Decision Aid (MCDA).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Lami, Isabella M., and Stefano Moroni. 2020. "How Can I Help You? Questioning the Role of Evaluation Techniques in Democratic Decision-Making Processes" Sustainability 12, no. 20: 8568. https://doi.org/10.3390/su12208568
