Open Access Article

The Impact of Formal Decision Processes on e-Government Projects

Department of Information Systems and Technology, Mid Sweden University, Holmgatan 10, Sundsvall 852 30, Sweden
Author to whom correspondence should be addressed.
Academic Editor: Cinla Akinci
Adm. Sci. 2017, 7(2), 14
Received: 19 December 2016 / Revised: 3 May 2017 / Accepted: 11 May 2017 / Published: 22 May 2017
(This article belongs to the Special Issue Decision Making: Individual and Organisational Perspectives)


This paper studies associations between the use of formal decision-making processes in e-Government projects and the outcomes of these projects. By doing so, this study contributes to the decision sciences as well as to the fields of e-Government, information systems and public administration. Data were collected using a survey conducted among Swedish national government agencies and municipalities. Variables that have been investigated are the defining and weighting of objectives, resource allocation and assessment of whether objectives are met, as well as to what extent risk analysis was conducted. The results reveal that successful projects distinguish themselves by involving more activities related to formal decision-making procedures, especially with respect to stakeholder inclusion and weighting of objectives. These initiatives also manage more types of risks, including organizational issues. Future research should continue to explore the possible benefits of formal decision-making and risk analysis in e-Government.
Keywords: e-Government; decision-making; decision theory; public values; risk analysis; information systems; public management; public administration

1. Introduction

E-Government is the use of information and communication technology (ICT) in the public sector. While such initiatives are often linked with promises to transform government services and increase efficiency, research has shown that adopting ICTs for government purposes is associated with a high failure rate. The risks that lead to failure can partially be explained by the heritage from the information systems (IS) field. In a 1987 paper, Lyytinen and Hirschheim refer to two decades of IS failures as “legendary” [1]. Many of the problems that have long plagued the IS field have reached the public sector, as governments increasingly implement Internet-based technology. The public sector has some unique features that may increase risks: for instance, many stakeholders are affected by the outcomes, and a variety of organizations with different aims are involved in developing the related services. Moreover, the stakes are high because poor outcomes funded through tax revenues affect society’s trust in the governmental bodies behind the launch of ICT solutions, and lower trust may, in turn, dissuade citizens from using e-services [2]. Budzier and Flyvbjerg (2012) suggest that organizations should establish efficient methods of decision-making to enable early detection of anomalies in e-Government projects with trajectories that might be difficult to predict [3]. Ekenberg et al. (2009) argue that government authorities should use available decision support methods in a transparent manner in order to avoid having the public “guess” the data, values and priorities on which a public decision is based [4]. Ekenberg (2015) also concludes that public decision-making is in a highly doubtful state; although new means for structured participatory processes have emerged, government entities seldom use them in decision-making processes [5].
Since unstructured and non-transparent decision processes hinder the realization of public values and citizen trust, Pardo and Burke (2008) argue that (government) leaders must understand the link between their policy decisions and the capability of governments to create the systems necessary to share information and other resources across boundaries [6]. To create seamless e-Government services, many solutions require cross-organization collaboration and interoperability between several systems.
Literature that specifically explores how and why public sector bodies make decisions regarding ICT initiatives remains somewhat limited. A study by Nielsen and Pedersen (2014) reveals that politics, intuition and coincidence play a larger role than rationality in local government decision-making concerning information technology (IT) projects [7]. Moreover, Janssen and Klievink (2012) suggest using enterprise architectures that explicitly consider risk management to improve the outcomes of e-Government development, especially in collaborative initiatives (which might be more prone to failure than single-entity initiatives) [8]. However, research on risk and risk management in relation to e-Government is sparse [9]. In a study about IT investment decisions, Bannister and Remenyi (1999) mention that only a minority of decision makers use formal approaches and that managers often rely on their “gut feelings”; as a result, they recommend that future research should lean on psychology and philosophy to achieve a greater understanding of the managerial mind [10].
Decision-making and decision makers are often mentioned in the e-Government literature. However, few studies have actually examined associations between formal decision-making and the outcomes of e-Government projects to see whether formal decision processes are a “panacea” for e-Government project success. The purpose of this paper is, therefore, to investigate associations between the use of formal decision-making processes in e-Government projects and the outcomes of these projects. By doing so, this paper contributes to the decision sciences as well as to the fields of e-Government, information systems and public administration.
The paper proceeds as follows: Section 2 outlines relevant concepts from decision theory in order to build a theoretical framework for a survey. Before the method and material is presented in Section 3, the Swedish e-Government context is described. In Section 4, the survey’s results are analyzed. Then, conclusions are drawn in Section 5, which also contains suggestions for future research.

2. Theories of Decision-Making and Values

Descriptive decision theory is concerned with how people actually make decisions, whereas prescriptive decision theory is devoted to providing assistance for better decision-making [11]. Resnik (1987) argues that knowledge about decision-making can be relevant for creating proper prescriptions concerning how decisions should be made; in other words, descriptive theories can inform prescriptive theories [12]. The underlying goal of the decision analysis field is to contribute to rational decision-making, and thus, to increase the likelihood of fulfilling objectives and acting in accordance with the decision maker’s desires and values.
A well-framed decision situation can be exemplified by a game of roulette, which contains the basic components of a decision problem: objectives (win money) and preference (to win the most money possible), alternatives (different ways to bet or not to bet at all), uncertainties (the outcome is not known) and consequences (losses, wins). The objective, which is usually profit, can be assessed by counting a player’s winnings or losses. He or she has a number of alternatives, all of which come with a degree of probabilistic uncertainty: a lower chance of winning carries consequences that are potentially more valuable, but also a larger risk of negative consequences. However, decision problems are often more complex and less well-framed than roulette, and humans are often driven by a larger spectrum of values than this example provides [13].
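The roulette example can be made concrete with a small expected-value calculation. The sketch below assumes a standard European wheel (37 pockets) and standard payout ratios, which are not stated in the text:

```python
# Expected value of two roulette bets on a European wheel (37 pockets).
# Illustrates how alternatives with very different risk profiles can
# still share the same expected consequence.
def expected_value(p_win, payout_ratio, stake=1.0):
    # EV = P(win) * net winnings - P(loss) * stake
    return p_win * payout_ratio * stake - (1 - p_win) * stake

ev_straight_up = expected_value(1 / 37, 35)  # bet on one number, pays 35:1
ev_red = expected_value(18 / 37, 1)          # bet on red/black, pays 1:1

# Both bets have the same negative expected value, -1/37 per unit staked,
# but the straight-up bet spreads its consequences far more widely.
```

This is exactly the trade-off described above: a lower chance of winning carries a potentially more valuable consequence, yet the expected outcome is identical.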
Even though decision makers might strive for rationality, Simon (1955) coined the term “bounded rationality”, which implies that decision makers can only be rational up to a certain point [14]. Cohen et al. (1972) describe bounded rationality as a condition in which a decision maker often does not possess all the information required about a problem and thus cannot see all of the available alternatives. A decision might sometimes even be disconnected from the actual decision problem, with choices being made only when the organizational context allows for action [15]. Simon (1975) also differentiates between “substantive rationality”, which refers to rational choices, and “procedural rationality”, which refers to well-structured decision processes [16].
In this paper, we focus on procedural rationality. Studies of the private sector suggest that the degree of rationality in decision-making depends on the environmental context, where factors such as competitiveness, uncertainty and high external control might reduce managerial discretion [17]. Furthermore, a longitudinal field study by Dean and Sharfman (1996) reveals that managers who apply a high degree of procedural rationality in strategic decision-making generally make better decisions [18].
However, few such studies have been conducted in relation to the public sector. Andersson et al. (2012) investigate the challenges of implementing decision support systems (DSS) in a political context and conclude that several issues affect the outcomes, including a lack of impact on the final decision. The attitude among some of the decision makers in the study was that the political decision process could not be a subject for science; as such, they did not use the DSS results when making their final decisions [19].
In the current study, we use the components of a decision problem as the themes for a survey (see Section 3). These themes, which are discussed below, are (a) values and objectives (including weighting and resource allocation); (b) risk analysis; and (c) assessment of project outcomes. The themes are merged with public value theory from e-Government; this merging constitutes the paper’s theoretical foundation.

2.1. Values and Objectives

The values of a decision maker and a decision’s objectives are interrelated. Keeney (1992 and 1996) asserts that since values are fundamental to everything we do, they should be the driving force in decision-making. Focusing on values has the potential to create not only better alternatives, but also a better decision-making situation [20,21]. Every decision situation is generally accompanied by a set of objectives that are unique up to a given context granularity. Moreover, many situations are characterized by multiple, and sometimes conflicting, objectives [22]. Since strategic decision-making can be viewed as allocating limited resources in order to achieve objectives, knowing about a decision maker’s values and objectives is of great importance in resource allocation management [23,24]. In the presence of conflicting objectives, planning for resource allocation and activities requires objectives to be associated with proper weights. Values can be structured into hierarchies and divided into means and ends. End values are fundamental strategic objectives that are supposed to guide all decisions. Such value hierarchies and alternatives can be generated by including relevant stakeholders early and throughout the decision-making process [20,21,25].
In the private sector, profit is the ultimate objective. In the public sector, managers need to display skills in balancing competing objectives in a transparent way. Whereas private sector managers are accountable to shareholders, public sector managers need to explain their motives for making decisions about the use of a nation’s fiscal funds. A major challenge for the public sector compared to private companies is value diversity. While the end value for a business is profit and efficiency, the public sector needs to balance demands for both efficiency as well as democratic values such as transparency [26]. A large strand of literature mentions a specific range of values that are unique to the public sector and e-Government context, namely public values. While the definition of public values is debated in the literature, it can nonetheless be defined as a normative consensus about issues that either concern citizens (e.g., rights, benefits, prerogatives, obligations or principles), government and policies [27] or a behavior that a majority of a population considers to be “right” [28]. Rose et al. (2015) use a tri-fold classification of public values: administrative efficiency, which focuses on value for money, productivity and performance; service improvement, which targets improving citizens’ experience of government by making services more accessible; and citizen engagement, which aims to empower citizens as participative collaborators in decision-making. Their study revealed that local authority managers in Denmark tend to prioritize values associated with administrative efficiency, while citizen engagement values were less common [29]. Riabacke et al. (2011) suggest that using formal decision models, such as multi-criteria decision analysis, would be beneficial for structuring large-scale public decisions that feature conflicting objectives [30]. Moreover, Sundberg (2016) recommends utilizing public values as the ultimate e-Government objectives [31].
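The kind of weighting recommended by Riabacke et al. can be illustrated with a minimal additive (weighted-sum) multi-criteria model over the three public value categories of Rose et al. The weights, alternatives and scores below are hypothetical examples, not data from the study:

```python
# Hypothetical weighted-sum scoring of two e-Government alternatives
# against the three public value categories of Rose et al. (2015).
weights = {"administrative_efficiency": 0.5,   # assumed stakeholder-elicited
           "service_improvement": 0.3,
           "citizen_engagement": 0.2}

alternatives = {
    "e-service portal": {"administrative_efficiency": 0.8,
                         "service_improvement": 0.9,
                         "citizen_engagement": 0.4},
    "open data platform": {"administrative_efficiency": 0.4,
                           "service_improvement": 0.5,
                           "citizen_engagement": 0.9},
}

def score(alt):
    # Additive value model: weighted sum of normalized (0-1) criterion scores
    return sum(weights[k] * v for k, v in alt.items())

best = max(alternatives, key=lambda name: score(alternatives[name]))
```

With efficiency weighted highest, the portal wins; shifting weight toward citizen engagement would reverse the ranking, which is precisely why explicit weights make such trade-offs transparent.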

2.2. Risk Analysis

A complex decision-making situation might involve a high degree of uncertainty which will affect the consequences [22]. Decision-making and risk analysis are tightly connected, since risks can be treated as decisions under uncertainty. Risk analysis includes methods for identifying, assessing and mitigating negative consequences from uncertainties. Ekenberg (2015) notes that few decision support tools have incorporated risk analysis [5].
In mature e-Government settings, complexity is increased when multiple systems and actors are supposed to interoperate and collaborate. As mentioned in Section 1, research on risk analysis in relation to e-Government is sparse. However, the literature that does exist often mentions challenges, barriers and other factors that prevent successful outcomes. In this paper, four areas that can create uncertainties in e-Government projects were extracted from studies [8,32,33,34], namely:
  • Technological factors (e.g., standards, interoperability, privacy and security).
  • Political factors (e.g., governance, common objectives).
  • Organizational and institutional factors (e.g., government reform, processes and management).
  • Legal and regulatory factors (e.g., policy-making, privacy issues).
In addition to the challenges created by new technology, several issues associated with government structures also need to be considered when examining the e-Government context.
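One way to make the four risk categories operational is a simple risk register. In the sketch below, the class and field names, the probability and impact estimates, and the probability-times-impact exposure heuristic are our illustrative assumptions, not practices reported by the surveyed projects:

```python
# A minimal risk-register sketch organized around the four e-Government
# risk categories listed above (values are illustrative).
from dataclasses import dataclass

CATEGORIES = {"technological", "political", "organizational", "legal"}

@dataclass
class Risk:
    description: str
    category: str       # one of CATEGORIES
    probability: float  # estimated likelihood, 0-1
    impact: float       # relative severity, 0-1

    def exposure(self):
        # Common prioritization heuristic: exposure = probability x impact
        return self.probability * self.impact

register = [
    Risk("Legacy systems lack a common standard", "technological", 0.6, 0.8),
    Risk("Agencies disagree on shared objectives", "political", 0.3, 0.9),
]

# Rank risks so mitigation effort goes to the largest exposure first
top = max(register, key=Risk.exposure)
```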

2.3. Assessment of Outcomes

The consequences, or outcomes, of decisions and uncertainties determine whether an initiative has been successful or not, given that they are assessed. However, attaching hard numbers to soft values is not always easy. This is especially true in relation to IS assessment, which has long been a topic in the literature. Lyytinen and Hirschheim (1987) state that failure assessment only makes sense if values, and how they enter into an IS, are clearly understood. Expectations regarding the potential value of an IS can stem from both a system’s stakeholders and its managers. One problem with the latter is that a system can be seen as a success in managerial terms, but not in a wider context: for instance, a project can be successful in terms of being delivered on time and within budget but still be perceived as a failure among its users [1].
Fitzgerald (1998) suggests applying a multi-dimensional approach to IS evaluation. In addition to trying to estimate costs as far as possible, other benefits, including unexpected “second-order effects” should be assessed by involving the right types of stakeholders. Furthermore, an IS’ contribution to strategy and long-term plans must be taken into account, as well as possible variations in the outcomes, which constitutes risk management. Moreover, decisions based on the evaluation need to be justified by quantifying and weighting the different dimensions [35]. While the work of Fitzgerald represents one IS evaluation option, Irani et al. (2008) conclude that e-Government evaluation is an underdeveloped area that needs improvement in order to fulfill the promises of transformational government [36]. Bertot et al. (2010) argue that while the use of ICTs has the potential to promote values such as transparency, such initiatives must be accompanied by evaluation criteria for measurement and verification [37].

2.4. The Swedish E-Government Case

Sweden has a tradition of independent government agencies that dates back to the constitution of 1634; today, these agencies constitute the state’s administration. When regional and local levels are taken into account, the number of independently managed agencies amounts to approximately 550.
The Swedish government has set up initiatives to evaluate the efficiency of the digitization of the country’s public sector. The total IT costs for the government are estimated at SEK 46.5 billion per year [38].
Examining official publications reveals more about the Swedish government and its efficiency, which is helpful for understanding the context in which the country’s e-Government is operating. A strategic document from Sweden’s central government outlines three objectives [39]: an easier everyday life for citizens; an open government administration that supports innovation and participation; and higher quality and more efficient government services.
A report from the Swedish National Audit Office [40] states that government agencies are not aligned with the objectives of central government and that the necessary institutional conditions needed for government agencies to reach their goals are absent. The report concludes that much untapped potential exists in relation to current e-services and that governance of the Swedish e-Government is characterized as short term, delegated and without holistic responsibility, which has led to low cost-efficiency.
The Swedish National Financial Management Authority reports that only 17% of the investigated government agencies have a method for benefit realization that they frequently use [41].

3. Materials and Methods

The data used in this study were gathered from a survey administered to Swedish government agencies. The statistical population consists of national government agencies (n = 240), municipalities (n = 290) and county councils (n = 20). The national agencies and municipalities were chosen due to their volume and somewhat similar population sizes. The county councils, which are in the process of being converted into larger regions, constitute too small a population for meaningful statistical analysis.
Official e-mail addresses for all government bodies were gathered from the register of Statistics Sweden [42] and the Swedish Association of Local Authorities and Regions. A link to the survey was then sent to these addresses, with the suggestion that a project leader, coordinator or IT strategist should answer it.
The respondents were assured confidentiality when they answered the survey. They were asked to state their role in the project and to provide a short description of the initiative. The latter was necessary to avoid duplicate projects in the data, since many e-Government initiatives are developed in collaboration between several agencies.
Keeney (2004) argues that most decision problems are “no-brainers” while only a few decisions are complex enough to be subjects for thorough (decision) analysis [43]. To focus on complex and large-scale initiatives, a lower budget limit of SEK 1 million was set in the survey.
The survey questions were formulated based on the literature with respect to decision-making and e-Government, as discussed in Section 2. The corresponding options were based on the concepts addressed in the same section. The survey was created in Google Forms and open for responses for two weeks.
Free text answers were considered short enough to be treated manually and no extensive content analysis was conducted. The majority of the material was quantitative and suitable for statistical analysis. In a few cases, where respondents used free text instead of a fixed option, the answer was judged to be ambiguous and removed as missing data.
A follow-up survey was sent to the same population after the first results had been summarized. The purpose of the follow-up survey was to collect more data from projects that had not reached their objectives, so as to compare them with the initiatives that had successful outcomes.
The survey can be found in Appendix A.

Fisher’s Exact Test

To compare groups of projects and variables, contingency tables were created and Fisher’s exact test (two-tailed, summing small p-values) was applied, as recommended for smaller samples [44]. p-values less than 0.05 were considered statistically significant. Fisher’s exact test was applied to 2 × 2 contingency tables under the null hypothesis that no association exists between the rows and columns. The data were treated as independent.
When calculating P for the example in Table 1, Equation (1) was used.
P = [(a + b)! (c + d)! (a + c)! (b + d)!] / [(a + b + c + d)! a! b! c! d!]   (1)
The projects were divided in two ways: into groups based on their outcomes (e.g., success/failure), and into categories depending on the variables (e.g., using a decision model—yes/no). For example, a in Table 1 represents the projects in Group 1 that correspond to Category 1. Applying Equation (1) provided a one-tailed p-value. To obtain a two-tailed p-value, the probabilities of all variations of Table 1 that were lower than the one obtained from Equation (1) were added to the one-tailed p-value, which gave the final value of P.
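The procedure described above (summing the probabilities of every table no more likely than the observed one) can be sketched as follows; the example table is hypothetical, not taken from the study's data:

```python
from math import comb

def fisher_exact_two_tailed(a, b, c, d):
    """Two-tailed Fisher's exact test for the 2x2 table [[a, b], [c, d]],
    using the summing-small-p-values method described in the text."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of a table with x in the top-left cell,
        # given the fixed row and column totals
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    # Sum over every admissible table whose probability does not exceed
    # that of the observed table (small tolerance for rounding)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs + 1e-12)

# Hypothetical table: 8 of 10 projects in Group 1 fall in Category 1,
# versus 1 of 6 in Group 2
p = fisher_exact_two_tailed(8, 2, 1, 5)  # ~0.035, significant at 0.05
```

The same two-tailed method is implemented in standard statistical packages (e.g., `scipy.stats.fisher_exact`), which can be used to cross-check hand-rolled results.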

4. Results and Analysis

A total of 56 respondents are included in the results. Most of the respondents are project leaders, IT managers and strategists, as well as members of steering groups. They represent projects with a budget larger than SEK 1 million. The outcomes of the 56 samples are distributed as shown in Table 2. In addition to projects that achieved their objectives, either fully (n = 10) or partially (n = 15), and projects that did not achieve them (n = 17), the results also include a fourth group of projects with no ambition of assessing their outcomes (n = 14).
The results from the survey were used as categories for comparing the groups. In Table 3, the categories have been given names and numbers based on the questions and their alternative answers in the survey (Appendix A).
Five answers were removed from question 10 and three from question 11 due to difficulties with interpreting free text data. Risk categories in the “No” group were divided by 13 since one respondent answered that risk analysis was performed, but that he or she did not know what type of risks were managed.

4.1. General Findings

Thirty-nine respondents claimed that a decision method or model was utilized during the development. On the follow-up question about the name of the model, many respondents referred to project models: ten established project models (PM3, Pejl, VGR, XLPM, Visaren, Wenell, PPS, Projektil, IPMA, PROPS) and several other models that were unique to the organization were identified in the material. Methods from the decision theory literature were lacking. Respondents who claimed that no decision model was used (n = 17) often referred to project organization to describe how decisions were made (n = 11): e.g., by project leaders, project groups or steering groups.
Service improvement and administrative efficiency were the most common objectives in the material. Citizen empowerment values were less common (which was also the case in [29]). Risk analysis was the activity being performed most often (82%), while resource allocation based on objectives was the least pursued activity (43%). In the projects where risk analysis was conducted, technological and project specific risks were the most common types. Political risks were less common. The free text data suggested that the most common way of performing risk analysis was as integrated activities within project models.

4.2. Specific Findings

To compare successful and failed projects, two groups were created (Table 4): Initiatives that had reached their objectives, either fully or partially, were merged into the SPS (Successful and Partially Successful Projects) group (n = 25). The other group, FAI (Failed Projects), consists of the initiatives that did not reach their objectives (n = 17).
As Table 5 demonstrates, the SPS group had a comparatively high degree of stakeholder inclusion when determining objectives and values (P = 0.029).
As shown in Table 6, weighting of objectives was more common in the SPS group than in the FAI group (P = 0.0073).
Finally, the SPS group focused on organizational risks to a larger extent than the FAI group, as demonstrated in Table 7 (P = 0.016).
It is also worth noting that the projects in the SPS group that conducted risk analysis also managed more types of risks on average (4.3) compared to the FAI group (3.2).
Finally, something should be mentioned about the group of non-assessed projects. These initiatives distinguished themselves from the SPS group by the following parameters: they focused less on administrative efficiency (P = 0.0002) and they involved stakeholders when setting objectives to a lesser extent (P = 0.0013). The non-assessed projects also undertook fewer activities overall related to formal decision-making (weighting AND resource allocation AND risk analysis) (P = 0.009) than the SPS group. More research is needed to follow up on the trajectories of these projects since their outcomes are unknown.

5. Conclusions and Further Research

This study has investigated associations between the use of formal decision-making processes and e-Government project outcomes by administering a survey to Swedish national government agencies and municipalities. The results reveal that successful projects distinguished themselves by having a more formalized decision-making process, especially when it came to stakeholder inclusion and weighting of objectives. These projects also managed organizational risks to a larger extent than projects that did not reach their objectives. Hence, we cannot discard the idea that formal decision processes are actually a panacea for successful e-Government initiatives. Several non-assessed projects with unknown outcomes were also found in the data. The trajectories of these initiatives need to be studied further.
Respondents did not mention formal methods for decision-making, although they did refer to several project models. As a result, we can draw the conclusion that, while some activities related to formal decision-making are utilized in the studied context, generic ideas from prescriptive decision theories, such as value-focused thinking, have not yet reached e-Government. What is undertaken in e-Government decision-making is project management at best, represented by a plethora of different project models that serve as bounding conditions for procedural rationality. If the objectives of e-Government are to deliver open, efficient services and make citizens’ lives easier, the question must be asked why those responsible have not adopted more formal approaches that would facilitate a broader scope of public values being realized and evaluated within the projects.
In order to properly operationalize objectives and realize the potential values of technology in the public sector, we see a great opportunity for researchers to continue investigating decision-making in the e-Government context—both in theory and practice. Such studies would allow for further exploration and expansion of the boundaries of rationality in the context, beyond project management. Furthermore, by merging public value theory with concepts from decision theory, it is possible to further develop the theoretical base of e-Government. The approach used in this paper can be applied in other national settings as well, which would allow for international comparisons.
Some of the limitations of the study can be traced back to definitions, while others are related to the methodology used. E-Government was not defined in the survey, although e-services were mentioned as an example. Citizens, entrepreneurs and employees were suggested as examples of relevant stakeholders.
A formalized decision process involves a structured and systematic method, often accompanied by a model enabling or facilitating the use of the method. From the decision maker’s perspective, distinguishing between these two concepts is a less relevant issue. Since the purpose of this study was to identify the use of formal decision processes, the concepts were not distinguished in the survey.
In the analysis, groups and categories were subject to the authors’ interpretation. Since the second survey specifically targeted initiatives that had not reached their objectives, the data do not represent the distribution of success and failures among the whole population.


Acknowledgments

This work was performed within the Risk and Crisis Research Centre at Mid Sweden University. The authors would like to thank the participants at The Scandinavian Workshop on e-Government 2017 for feedback on the paper.

Author Contributions

Leif Sundberg performed an initial literature review, designed the survey and analyzed the data. Aron Larsson performed a complementary literature review, and assisted in designing the survey.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A: Survey Design

(MA = multiple answers possible).
Type of government agency. Municipality/National agency.
Has your agency participated in e-Government projects with a budget larger than SEK 1 million? Yes/No (if No, the survey ends). If the agency participates in multiple projects, the respondent was asked to choose the largest.
Describe the project briefly.
What role have you had in the project? Project leader/Coordinator/IT strategist/Other.
Has any method/model for decision-making been used in the project? Yes (go to 6)/No (go to 7).
What method/model was used?
Describe how decisions have been made in the project.
Values and Objectives
What values and objectives have been set in the project? Administrative efficiency/Service improvement/Citizen engagement (MA).
How have these values and objectives been set? Based on governance and documents from central government/Based on input from relevant stakeholders/Internally, based on the project members’ experience/Other (MA).
How have these values and objectives been weighted? Economical weighting/No weighting/Based on input from relevant stakeholders/Other.
Have resources been allocated in relation to these values and objectives? No/Yes, by using largely external resources/Yes, primarily by using internal resources.
Is there any assessment of values and objectives within the project? Yes/No.
Were the values and objectives fulfilled? Yes/No/Partially/Unknown/Ongoing project/Other.
Risk Analysis
Has any risk analysis been performed? Yes (go to 17)/No (survey ends).
How has risk analysis been performed?
What type of risks have been discussed? Technological/Political/Organizational/Budget and deadline/Legal/Other.


  1. Lyytinen, K., and R. Hirschheim. 1987. Information systems failures—A survey and classification of the empirical literature. In Oxford Surveys in Information Technology. Oxford, UK: Oxford University Press, Volume 4, pp. 257–309. [Google Scholar]
  2. Bélanger, F., and L. Carter. 2008. Trust and risk in e-government adoption. J. Strateg. Inf. Syst. 17: 165–176. [Google Scholar] [CrossRef]
  3. Budzier, A., and B. Flyvbjerg. 2012. Overspend? Late? Failure? What the Data Say about IT Project Risk in the Public Sector. In Commonwealth Governance Handbook. London, UK: Democracy, Development and Public Administration, Commonwealth Secretariat. [Google Scholar]
  4. Ekenberg, L., A. Larsson, J. Idefeldt, and S. Bohman. 2009. The lack of transparency in public decision processes. Int. J. Public Inf. Syst. 1: 1–8. [Google Scholar]
  5. Ekenberg, L. 2015. Public Participatory Decision Making. In Intelligent Software Methodologies, Tools and Techniques, Volume 315 in the Series Communications in Computer and Information Science. Cham, Switzerland: Springer, pp. 3–12. [Google Scholar]
  6. Pardo, A.T., and B.G. Burke. 2008. Government Worth Having: A Briefing on Interoperability for Government Leaders. New York, NY, USA: Center for Technology in Government, University at Albany. [Google Scholar]
  7. Nielsen, J.A., and K. Pedersen. 2014. IT portfolio decision-making in local governments: Rationality, politics, intuition and coincidences. Gov. Inf. Q. 31: 411–420. [Google Scholar] [CrossRef]
  8. Janssen, M., and B. Klievink. 2012. Can enterprise architectures reduce failure in development projects? Transform. Gov. People Process Policy 6: 27–40. [Google Scholar] [CrossRef]
  9. Røberg, P.M., L.S. Flak, and P. Myrseth. 2014. Unveiling Barriers and Enablers of Risk Management in Interoperability Efforts. In Proceedings of the 47th Hawaii International Conference on System Science, Waikoloa, HI, USA, 6–9 January 2014. [Google Scholar]
  10. Bannister, F., and D. Remenyi. 1999. Instinct and Value in IT Investment Decisions. Occasional Paper Series. Wolverhampton, UK: University of Wolverhampton. [Google Scholar]
  11. Keeney, R.L. 1992. On the foundations of prescriptive decision analysis. In Utility Theories: Measurement and Applications. Edited by W. Edwards. Boston, MA, USA: Kluwer Academic Publishers. [Google Scholar]
  12. Resnik, M.D. 1987. Choices: An Introduction to Decision Theory. London, UK: University of Minnesota Press. [Google Scholar]
  13. Webler, T., O. Renn, C. Jaeger, and E. Rosa. 2001. The Rational Actor Paradigm in Risk Theories: Analysis and Critique. In Risk in the Modern Age: Social Theory, Science, and Environmental Decision Making. New York, NY, USA: MacMillan, pp. 35–61. [Google Scholar]
  14. Simon, H. 1955. A Behavioral Model of Rational Choice. Q. J. Econ. 69: 99–118. [Google Scholar] [CrossRef]
  15. Cohen, M., J. March, and J. Olsen. 1972. A Garbage Can Model of Organizational Choice. Adm. Sci. Q. 17: 1–25. [Google Scholar] [CrossRef]
  16. Simon, H. 1976. From substantive to procedural rationality. In 25 Years of Economic Theory. New York, NY, USA: Springer. [Google Scholar]
  17. Dean, J.W., Jr., and M.P. Sharfman. 1993. Procedural Rationality in the Strategic Decision-Making Process. J. Manag. Stud. 30: 587–610. [Google Scholar] [CrossRef]
  18. Dean, J.W., Jr., and M.P. Sharfman. 1996. Does Decision Process Matter? A Study of Strategic Decision-Making Effectiveness. Acad. Manag. J. 39: 368–396. [Google Scholar] [CrossRef]
  19. Andersson, A., Å. Grönlund, and J. Åström. 2012. You can’t make this a science! Analyzing decision support systems in political contexts. Gov. Inf. Q. 29: 543–552. [Google Scholar] [CrossRef]
  20. Keeney, R.L. 1992. Value Focused Thinking: A Path to Creative Decisionmaking. Cambridge, MA, USA: Harvard University Press. [Google Scholar]
  21. Keeney, R.L. 1996. Value-focused thinking: identifying decision opportunities and creating alternatives. Eur. J. Oper. Res. 92: 537–549. [Google Scholar] [CrossRef]
  22. Eisenführ, F., M. Weber, and T. Langer. 2010. Rational Decision Making. Berlin/Heidelberg, Germany: Springer-Verlag. [Google Scholar]
  23. Kleinmuntz, D. 2007. Resource allocation decisions. In Advances in Decision Sciences. Edited by W. Edwards, Jr. Miles and D. von Winterfeldt. Cambridge, UK: Cambridge University Press. [Google Scholar]
  24. Phillips, L.D., and C.A. Bana e Costa. 2007. Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann. Oper. Res. 154: 51–68. [Google Scholar] [CrossRef]
  25. Von Winterfeldt, D., and W. Edwards. 1986. Decision Analysis and Behavioral Research. Cambridge, UK: Cambridge University Press. [Google Scholar]
  26. Halachmi, A., and D. Greiling. 2013. Transparency, E-Government, and Accountability. Public Perform. Manag. Rev. 36: 562–584. [Google Scholar] [CrossRef]
  27. Bozeman, B. 2009. Public values theory: Three big questions. Int. J. Public Polic. 4: 369–375. [Google Scholar] [CrossRef]
  28. Bannister, F., and R. Connolly. 2014. ICT, Public Values and Transformative Government: A Framework and Programme for Research. Gov. Inf. Q. 31: 119–128. [Google Scholar] [CrossRef]
  29. Rose, J., J.S. Persson, and L.T. Heeager. 2015. How e-Government managers prioritise rival value positions: The efficiency imperative. Inf. Polity 20: 35–59. [Google Scholar] [CrossRef]
  30. Riabacke, M., J. Åström, and Å. Grönlund. 2011. Eparticipation Galore?—Extending Multi-Criteria Decision Analysis to the Public. Int. J. Public Inf. Syst. 7: 79–99. [Google Scholar]
  31. Sundberg, L. 2016. Risk and decision making in collaborative e-Government: An objectives-oriented approach. Electron. J. Electron. Gov. 14: 36–47. [Google Scholar]
  32. Gil-Garcia, J.R., and T.A. Pardo. 2005. E-government success factors: Mapping practical tools to theoretical foundations. Gov. Inf. Q. 22: 187–216. [Google Scholar] [CrossRef]
  33. Lam, W. 2005. Barriers to e-government integration. J. Enterp. Inf. Manag. 18: 511–530. [Google Scholar] [CrossRef]
  34. Müller, S.D., and S. Abildgaard. 2015. Success factors influencing implementation of e-government at different stages of maturity: A literature review. Int. J. Electron. Gov. 7: 136–170. [Google Scholar] [CrossRef]
  35. Fitzgerald, G. 1998. Evaluating information systems projects: a multidimensional approach. J. Inf. Technol. 13: 15–27. [Google Scholar] [CrossRef]
  36. Irani, Z., P.E.D. Love, and S. Jones. 2008. Learning lessons from evaluating eGovernment: Reflective case experiences that support transformational government. J. Strateg. Inf. Syst. 17: 155–164. [Google Scholar] [CrossRef]
  37. Bertot, J.C., P.T. Jaeger, and J.M. Grimes. 2010. Using ICTs to create a culture of transparency: E-government and social media as openness and anti-corruption tools for societies. Gov. Inf. Q. 27: 264–271. [Google Scholar] [CrossRef]
  38. E-delegationen. 2012. Effektiv IT-Drift inom Staten—En Förstudie. Stockholm, Sweden: E-delegationen. [Google Scholar]
  39. Regeringskansliet. Med Medborgaren i Centrum: Regeringens Strategi för en Digitalt Samverkande Statsförvaltning. Näringsdepartementet, 2012. Available online: (accessed on 18 May 2017).
  40. Riksrevisionen. Den Offentliga Förvaltningens Digitalisering—En Enklare, öppnare Och Effektivare Förvaltning: En Granskningsrapport Från Riksrevisionen, RIR 2016:14. Available online: (accessed on 18 May 2017).
  41. Ekonomistyrningsverket. 2015. Rapport: Fördjupat It-Kostnadsuppdrag. Delrapport 2: Kartläggning av It-Kostnader. Available online: (accessed on 18 May 2017).
  42. SCB. Myndighetsregister. Available online: (accessed on 18 May 2017).
  43. Keeney, R.L. 2004. Making Better Decision Makers. Decis. Anal. 1: 193–204. [Google Scholar] [CrossRef]
  44. Freeman, J.V., and M.J. Campbell. 2007. The analysis of categorical data. Scope 16: 18–21. [Google Scholar]
Table 1. Contingency table.

        | Category 1 | Category 2
Group 1 | a          | c
Group 2 | b          | d
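A 2×2 contingency table of the kind shown in Table 1 is commonly analysed with Fisher's exact test (one option for the categorical analysis discussed by Freeman and Campbell, ref. 44). The sketch below is illustrative, not the paper's own analysis code; the function name is invented, and the counts plugged in are the weighting figures from Table 3 (successful + partially successful projects: 8 + 13 = 21 weighting yes, 0 + 2 = 2 no; failed projects: 8 yes, 8 no):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed one (hypergeometric distribution
    of cell a under fixed margins).
    """
    r1, r2, c1 = a + b, c + d, a + c       # row totals, first column total
    n = r1 + r2
    def p_cell(x):                         # P(cell a == x | fixed margins)
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = p_cell(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)  # feasible range for cell a
    return sum(p_cell(x) for x in range(lo, hi + 1)
               if p_cell(x) <= p_obs * (1 + 1e-9))  # tolerance for float ties

# Illustrative use with the weighting counts from Table 3:
# SPS group 21 yes / 2 no, FAI group 8 yes / 8 no.
p = fisher_exact_2x2(21, 2, 8, 8)
print(f"p = {p:.4f}")  # ~0.007 for these counts
```

An exact test is preferable here to a chi-square approximation because several cells in Tables 2–7 hold expected counts below five.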
Table 2. Survey samples.

Objectives Achieved? | Yes | Partially | No | Non-Assessed | Total
National agency      | 5   | 8         | 10 | 9            | 32
Table 3. Results for all groups.

Objectives Fulfilled?/Category | Yes | Partially | No | Non-Assessed | Total
5 Decision model, yes | 8 (80%) | 12 (80%) | 11 (65%) | 8 (57%) | 39 (70%)
5 Decision model, no | 2 (20%) | 3 (20%) | 6 (35%) | 6 (43%) | 17 (30%)
Value classification
8 Administrative efficiency | 10 (100%) | 15 (100%) | 15 (88%) | 7 (50%) | 47 (84%)
8 Service improvement | 9 (90%) | 15 (100%) | 14 (82%) | 12 (86%) | 50 (89%)
8 Citizen engagement | 6 (60%) | 3 (20%) | 3 (18%) | 2 (14%) | 14 (25%)
8 Other | 0 | 2 (13%) | 1 (6%) | 1 (7%) | 4 (7%)
Values derived from
9 Central government | 1 (10%) | 8 (53%) | 6 (35%) | 8 (57%) | 23 (41%)
9 Stakeholders | 8 (80%) | 14 (93%) | 9 (53%) | 5 (36%) | 36 (64%)
9 Internally | 4 (40%) | 12 (80%) | 9 (53%) | 6 (43%) | 31 (55%)
9 Other | 2 (20%) | 1 (7%) | 1 (6%) | 2 (14%) | 6 (11%)
Weighting/Resource allocation
10 Weighting, yes | 8 (100%) | 13 (87%) | 8 (47%) | 8 (67%) | 37 (73%)
10 Weighting, no | 0 | 2 (13%) | 8 (53%) | 4 (33%) | 14 (27%)
11 Resource allocation, yes | 7 (78%) | 9 (64%) | 7 (44%) | 7 (54%) | 30 (57%)
11 Resource allocation, no | 2 (22%) | 5 (36%) | 10 (66%) | 6 (46%) | 23 (43%)
Risk analysis
14 Risk analysis, yes | 10 (100%) | 13 (87%) | 14 (88%) | 9 (64%) | 46 (82%)
14 Risk analysis, no | 0 | 2 (13%) | 3 (12%) | 5 (36%) | 10 (18%)
Risk categories
Technological | 10 (100%) | 11 (85%) | 12 (92%) | 9 (100%) | 43 (96%)
Political | 4 (40%) | 9 (69%) | 3 (23%) | 4 (44%) | 20 (44%)
Organizational | 10 (100%) | 12 (92%) | 8 (62%) | 7 (78%) | 37 (82%)
Legal and regulatory | 9 (90%) | 9 (69%) | 7 (58%) | 7 (78%) | 32 (71%)
Project specific | 10 (100%) | 11 (85%) | 11 (85%) | 9 (100%) | 41 (91%)
Other | 1 (10%) | 2 (15%) | 1 (8%) | 0 | 4 (9%)
Number of risks on average (for initiatives conducting risk analysis) | 4.4 | 4.2 | 3.2 | 4 | 3.8
Weighting AND resource allocation AND risk analysis, yes | 6 | 6 | 5 | 1 | 18
Weighting AND resource allocation AND risk analysis, no | 2 | 7 | 11 | 11 | 31
Table 4. Groups.

Successful + Partially Successful Projects (SPS) | 25
Failed projects (FAI)                            | 17
Table 5. Stakeholder-based objectives setting.
Table 6. Weighting.
Table 7. Organizational risks.
Organizational Risks | Yes | No