Article

Evaluation of Public Services Considering the Expectations of Users—A Systematic Literature Review

by Vítor G. de Menezes 1, Glauco V. Pedrosa 1,2,*, Marcos P. P. da Silva 2 and Rejane M. da C. Figueiredo 1,2

1 Information Technology Research and Application Center (ITRAC), University of Brasilia (UnB), Brasilia 72444-240, Brazil
2 Post-Graduate Program in Applied Computing, University of Brasilia (UnB), Brasilia 70910-900, Brazil
* Author to whom correspondence should be addressed.
Information 2022, 13(4), 162; https://doi.org/10.3390/info13040162
Submission received: 10 February 2022 / Revised: 7 March 2022 / Accepted: 16 March 2022 / Published: 24 March 2022

Abstract: Evaluating public services has become an important task in order to direct actions that may positively affect the quality of the services provided by governments. To undertake an effective evaluation, it is necessary to analyze data and information on the impact of the services. A wide range of studies have been proposed to measure how an organization delivers its services according to the expectations of the stakeholders. This paper investigates approaches for evaluating public services from the perspective of users. The goal is to identify and describe evaluation-based models, as well as the instruments and tools employed in the service evaluation process. A systematic literature review was conducted to search and analyze studies published in the last 15 years. From the analysis of 31 studies, we identified four main dimensions regarding service evaluation: quality, success and acceptance of information systems, user satisfaction, and user experience. This work contributes to the identification of models, dimensions, instruments, and tools to evaluate public services from the perspective of users. The results can serve as a guide for the Public Administration in the construction of effective models to evaluate its public services and to guarantee quality standards that meet the expectations of users.

1. Introduction

The relationship between governments and users has changed in recent decades because of digital transformation. The use of technologies not only improves the performance of a service but also increases its reach [1]. The need to assess the effectiveness of public services arises as governments increasingly develop online systems to deliver them [2]. Such evaluation efforts can enable government organizations to determine whether they are capable of performing the required tasks and delivering services as expected. In this context, performance-based models have emerged to evaluate the effectiveness and efficiency of public services, in order to make them closer and more acceptable to users [3,4].
A great challenge in evaluating the performance of public services is obtaining valid data to guide the Public Administration toward actions that may positively influence service quality. The first service evaluation models considered organizational contexts and dimensions that could impact the quality of a service, such as empathy and courtesy [5,6,7]. With technological evolution, users' perceptions of services changed, and the earlier evaluation models evolved to consider new dimensions, such as usability [8,9].
Evaluating public services from the perspective of the user has become an opportunity to detect which factors hinder or facilitate their impact on society. However, there is no single indicator in the literature to assess the effectiveness of public services [10]. For this reason, a mapping study of the main differences between evaluation-based models and their applications can be an excellent guide for the Public Administration to implement an effective evaluation model for its services.
The goal of this work is to investigate approaches related to the evaluation of public services from the perspective of the user. The proposal is to identify the models, tools, and dimensions most used in the evaluation of public services. For this, a systematic literature review (SLR) was defined and conducted, following the guidelines proposed by [11,12]. After conducting the SLR research protocol, 31 papers related to the goal of this work were identified and selected. Their models are characterized by their approaches as well as their evaluation dimensions, instruments, and tools. A significant contribution of this work is the mapping of several works into a succinct meta-analysis, allowing managers or service evaluators to easily identify the dimensions most widely used in the literature considering the expectations of users.
This paper is organized as follows. Section 2 describes the research methodology, detailing the steps of the SLR process. The results are presented in Section 3, which explains the service evaluation approaches; in Section 4, the service evaluation dimensions are analyzed; and Section 5 presents the models, instruments, and tools of service evaluation employed by the works studied. A discussion of the relevant findings and their implications is presented in Section 6. Finally, Section 7 presents the final considerations.

2. Research Design

In this work, we conducted a systematic literature review (SLR) to identify, select, and critically appraise research related to the evaluation of public services from the perspective of the user. The SLR technique aims to identify, evaluate, interpret, and aggregate research results to provide an objective summary of evidence related to a research question [13,14]. Our SLR followed the guidelines proposed by [11], who defined a research protocol characterized by three phases:
  • Planning: objective and research questions are defined, as well as the search strategy, selection of the databases, and the criteria for inclusion and exclusion;
  • Conduction: execution of the search string in the selected databases, followed by the application of the selection criteria and the data analysis;
  • Report: documentation of the execution of the research and the analysis of the results obtained.

2.1. Planning Phase

Research questions guide the execution of the SLR [14]. They consist of one main question (MQ) and derived secondary questions (SQ). Table 1 presents the research questions defined for this work.
The research protocol followed in this SLR is composed of the development of the search expression, the selection of scientific databases, the delimitation of the coverage period, and the definition of inclusion and exclusion criteria. To assist in the elaboration of the search expression, we used the PICOC approach [15]. The acronym PICOC stands for five elements: Population, Intervention, Comparison, Outcome, and Context. Table 2 shows the search terms obtained from the PICOC analysis.
To select the relevant studies, we defined some inclusion (IC) and exclusion criteria (EC), which are presented in Table 3 and Table 4. Besides the inclusion and exclusion criteria, three other filters were also defined to select the most relevant studies to compose the SLR analysis. Figure 1 shows these filters.

2.2. Conduction Phase

Based on the PICOC analysis, the keywords for the search expression were defined as: TITLE-ABS-KEY((“service evaluation” OR “service quality” OR “service assessment”) AND ((“citizen perspective” OR “citizen satisfaction”) OR (“user perspective” OR “user satisfaction”)) AND (“model” OR “tool” OR “technique” OR “dimension” OR “approach”) AND (“government”)).
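For reproducibility, the expression can be assembled programmatically from the PICOC term groups in Table 2 and then adapted to each database's advanced-search syntax. The following is a minimal sketch in Python; the helper or_group is our own illustrative construction, not part of any database API:

```python
# Minimal sketch: assembling the Boolean search expression from the
# PICOC term groups (Table 2), so the same string can be reused or
# adapted for each database's advanced-search syntax.

def or_group(terms):
    """Join synonyms into a parenthesized OR clause of quoted phrases."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

population = or_group(["service evaluation", "service quality", "service assessment"])
intervention = "(" + " OR ".join([
    or_group(["citizen perspective", "citizen satisfaction"]),
    or_group(["user perspective", "user satisfaction"]),
]) + ")"
outcome = or_group(["model", "tool", "technique", "dimension", "approach"])
context = or_group(["government"])

query = f"TITLE-ABS-KEY({population} AND {intervention} AND {outcome} AND {context})"
print(query)
```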
Three databases (Scopus, ACM Digital Library, and IEEE Xplore) were selected to run the search expression. They were chosen because of their advanced search engines, which allow the use of complex Boolean expressions, as well as several filters to refine the results retrieved. Scopus, for example, also indexes publications from different sources, including Springer Link [16]. The search expression was applied in each database, which in total returned 113 articles, as presented in Table 5.
Next, the coverage period was defined: at first, only papers published in the last 10 years would be considered; however, as the bibliometric results showed that only a few papers had been published before that period, we decided to include all papers found in the databases. Then, the types of works to be considered in the review were defined: only conference and journal papers were included. If a paper was presented at a conference and later published in a journal, the latter was included.
Figure 2 shows the full process of paper selection. Filter 1 addressed duplicates: we identified 14 duplicate articles (12.3%), which were excluded, and four additional duplicated records, which were also discarded, leaving 95 articles. Filter 2 consisted of reading the abstracts of the publications, considering the inclusion and exclusion criteria shown in Table 3 and Table 4. Of the 95 articles analyzed, 40 (42.1%) met the inclusion criteria. Filter 3 consisted of the complete reading of these 40 articles. Four of them (10%) were not available for download and were therefore excluded from the research. Of the 36 articles analyzed after full reading, 31 (86.1%) met the inclusion criteria and were selected.
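The selection funnel above is simple arithmetic; the following sketch recomputes the reported counts and percentages (rounding may differ slightly from the figures quoted in the text):

```python
# Sketch reproducing the selection-funnel arithmetic reported above.
retrieved = 113                      # total records returned by the databases
duplicates = 14 + 4                  # duplicate articles plus duplicated records
after_filter1 = retrieved - duplicates        # 95 articles remain
kept_filter2 = 40                    # abstracts meeting the inclusion criteria
unavailable = 4                      # full texts not available for download
read_in_full = kept_filter2 - unavailable     # 36 articles read in full
selected = 31                        # final set of primary studies

print(f"Filter 1: {after_filter1} remain ({14 / retrieved:.1%} duplicates removed)")
print(f"Filter 2: {kept_filter2}/{after_filter1} kept ({kept_filter2 / after_filter1:.1%})")
print(f"Filter 3: {selected}/{read_in_full} selected ({selected / read_in_full:.1%})")
```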
Table 6 presents the references of the 31 studies analyzed in this SLR. All inclusion criteria were met by at least one study. Most of the studies presented approaches (IC01); models, techniques, and tools (IC02); and/or dimensions to evaluate services from the perspective of the user. In the next sections, we discuss the qualitative aspects of the 31 selected articles.

3. Evaluation Approaches

Our study identified four approaches for evaluating public services from the perspective of users: quality of services, user satisfaction, user experience, and success and acceptance of Information Systems (IS). Table 7 summarizes each study and its corresponding approach. In the following, we discuss the characteristics of each approach.

3.1. Service Quality

Quality is a broad and complex topic. It covers everything from institutional practices to the application of specific statistical tools and techniques to improve and monitor quality levels [18]. According to [45], the quality of services is intangible and therefore its measurement is complex. In general, the evaluation models based on the quality of services seek to measure the perceptions of the users against the results of service delivery.
Most studies that take this approach use or are based on the SERVQUAL model [6], which adopts a scale divided into five dimensions: tangibles, responsiveness, reliability, assurance, and empathy. This instrument assesses users’ expectations and perceptions of a service: the perceived quality of the service received is compared against the expected quality to obtain a final metric.
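As an illustration of the gap computation underlying SERVQUAL-style instruments, the sketch below averages perception-minus-expectation scores per dimension; the responses, scale, and sample size are synthetic, chosen only for demonstration:

```python
import numpy as np

# Illustrative SERVQUAL-style gap computation: gap = perception - expectation,
# averaged per dimension over respondents. All values below are synthetic.
dimensions = ["tangibles", "responsiveness", "reliability", "assurance", "empathy"]
rng = np.random.default_rng(42)
n_respondents = 50

# Responses on a 7-point Likert scale (expectations tend to run high).
expectations = rng.integers(5, 8, size=(n_respondents, len(dimensions)))
perceptions = rng.integers(3, 8, size=(n_respondents, len(dimensions)))

gaps = (perceptions - expectations).mean(axis=0)
for dim, gap in zip(dimensions, gaps):
    # A negative gap means perceived quality falls short of expectations.
    print(f"{dim:15s} gap = {gap:+.2f}")
```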
Furthermore, some of these studies state that user satisfaction is a direct result of the quality of services. It is frequently pointed out that, despite the focus on the perception of users, a complete evaluation of the quality must include the entire service delivery mechanism, analyzing, for example, its technical and functional aspects.
According to [18], the most important factors that influence public service quality are:
  • Access to the service;
  • Communication level;
  • An understandable administrative system;
  • The ability to provide a flexible and quick reply;
  • Receptivity to service;
  • Politeness and kindness of the administration staff;
  • Credibility in the service supply;
  • Reliability and responsibility;
  • Security and quality of tangible aspects.
In [33], factors influencing the e-service quality of government portals were identified from an extensive review of research performed by academic scholars and practitioners. Seven constructs were identified from the analyses: citizen centricity, transaction transparency, technical adequacy, usability, complete information, privacy and security, and usefulness of information. These constructs can be used to assess the service quality of government portals.

3.2. User Satisfaction

User satisfaction is a psychological aspect related to the customer’s goals and needs. It serves as a benchmark for assessing user-oriented government strategies [37]. According to [24], satisfaction is a central field of interest for contemporary organizations. Other dimensions common to this approach are quality of services, customization or flexibility, utility, and user loyalty.
Evaluation based on user satisfaction generally comprises variables of service quality. Broadly speaking, service quality and use determine satisfaction. In [44], the authors argue that user satisfaction guarantees an immediate and significant response to the preferences and expectations of service users. However, the same authors also point out that many different models can be adopted to measure user satisfaction.
According to [36], governments ensure user satisfaction by utilizing information and communication technology properly, especially the internet. By improving this channel of communication, the public sector ensures the accessibility and completeness of government information and delivers services in a more convenient way. However, the quality of websites is frequently overlooked during the design and implementation stages of online public services. Besides transparency, ease of navigation and comprehensive information require adequate monitoring resources, including targets for responses and reports. In fact, online users expect the organization to respond to their inquiries without delay. An immediate response that answers questions and resolves problems will assist government website users in making decisions [29].
User satisfaction is a performance measurement that can be assessed with various techniques, for example, the Technology Acceptance Model (TAM), the DeLone and McLean model, SERVQUAL, and the EGOVSAT model. It provides immediate, meaningful, and objective feedback about users’ preferences and expectations. The performance of public services can then be evaluated in relation to a set of dimensions that indicate the strong and weak factors affecting user satisfaction [44].

3.3. User Experience

User experience encompasses all the tools, people, and processes involved in building a co-produced user experience [20]. It comprises all activities required of the user to obtain the service and/or fulfill their obligations to the government.
This approach correlates with the area of human–computer interaction. As [38] suggests, user experience is closely linked to usability and can serve as an extension of it. Dimensions common to this approach are safety, flexibility or convenience, errors, and attractiveness.
According to [38], users have differing backgrounds, experiences, and cultures. Users require effective, easy, and enjoyable interaction, which is key to the successful use and acceptance of applications. In [23], a broad-based research instrument was developed to capture perceptions of the user experience with the Nigeria Immigration Service website. The authors identified eight factors related to user experience: security and support, content and information, ease of use, benefits, barriers, convenience, trust, and website quality.
Usually, citizens do not have a clear understanding of the various functions of a service and how to navigate them, nor should they need to. In fact, [20] demonstrated that service usability can be improved by taking different perspectives of the service experience. However, the political will to make changes across government is missing. Public services will remain fragmented and difficult to consume until a conscious decision is made to evaluate the success of government based on user experiences.

3.4. Success and Acceptance of IS

Understanding success in Information Systems (IS) is a complex challenge, made more difficult when set in the public sector environment. While private sector studies may focus on profitability, efficiency, quality, and reliability, public sector evaluation must combine these concerns with accountability, citizen trust, and the creation of public value [22].
To evaluate service delivery strategies, most of the studies identified in this SLR examine the success and acceptance of the IS based on the DeLone and McLean Model of Information Systems Success (D&M model) [21] and the Technology Acceptance Model (TAM) [47]. In [34], the authors emphasized that the DeLone and McLean IS success models are among the few theories and models that have helped researchers establish some of the salient factors that influence the acceptance and use of public services by citizens, both before testing and post adoption. TAM [47] is a basic model used for technology acceptance studies. It identifies the causal relationships between system design features, perceived usefulness, perceived ease of use, attitude towards using, and actual usage behavior. It also suggests that ease of use and usefulness alone are insufficient to predict the behavior of users; it is also necessary to include features such as system flexibility, system availability, and information reliability.
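To make the TAM causal chain concrete, the following sketch fits an illustrative path model (ease of use → usefulness → attitude → usage) on synthetic data using semopy, a third-party SEM library; neither the library nor the coefficients come from the reviewed studies:

```python
import numpy as np
import pandas as pd
import semopy  # third-party SEM library; its use here is our assumption

# Synthetic data following the TAM chain described above:
# ease of use -> perceived usefulness -> attitude -> actual usage.
rng = np.random.default_rng(0)
n = 300
ease = rng.normal(size=n)
usefulness = 0.5 * ease + rng.normal(scale=0.8, size=n)
attitude = 0.4 * usefulness + 0.3 * ease + rng.normal(scale=0.8, size=n)
usage = 0.6 * attitude + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"ease": ease, "usefulness": usefulness,
                   "attitude": attitude, "usage": usage})

# Path model specification in semopy's lavaan-like syntax.
model = semopy.Model("""
usefulness ~ ease
attitude ~ usefulness + ease
usage ~ attitude
""")
model.fit(df)
print(model.inspect())  # estimated path coefficients and significance
```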
Studies that use acceptance of IS as an evaluation approach seek to measure service success by relating variables concerned with the overall quality, the constructs directly related to quality, and the behavioral and intentional variables of system use. The net benefit received by the user is also often analyzed. The variable of use—in its behavioral aspect—allows for determining the acceptance of the system. Then, the cyclical interrelationship between use, satisfaction and benefits makes it possible to point out the success of a system.
In conclusion, acceptance of the system and its subsequent use are vital points for the success and improvement of a user-oriented service.

4. Evaluation Dimensions

Similar evaluation dimensions are found in different models, although these dimensions are constructed under a variety of perspectives and theories. In this sense, several models try to develop an evaluation-based approach that can be adapted to the situation. The dimensions may vary depending on the perspective used, the goal of the assessment, and even the context of the object of study.
Table 8 shows the most used dimensions among the models and evaluation approaches identified in our SLR. It should be noted that similar dimensions may appear with slightly different names in the literature.
Satisfaction is the most used dimension to assess the performance of public services. This dimension is popular because the studies evaluate public services from the perspective of the user and therefore, in general, assess user satisfaction. Sigwejo and Pather [46] argue that satisfaction can serve as an important indicator of the effectiveness of digital public services.
Another common dimension for evaluating the performance of public services is service quality. In general, this dimension measures perceptions about the delivery and quality of the services, investigating the gap between the user’s expectations and experiences. Service quality is often a predictor of user satisfaction. Traditionally, user satisfaction has been measured indirectly through information quality, system quality, service quality, and other variables [21].
There are also dimensions related to information and its quality. Broadly speaking, this dimension evaluates whether the information provided to users is easily understandable, complete, and relevant. In [2], the authors indicate that perceived usefulness is directly influenced by beliefs about system quality and information quality, and that information quality exerts a stronger effect on user satisfaction than service quality.
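A relationship of this kind is typically tested with a regression of satisfaction on the quality dimensions. The sketch below, on synthetic data, is one plausible form of such a test, not a reproduction of the analysis in [2]:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic data: information quality given a stronger effect on satisfaction
# than service quality, mirroring the finding reported in [2].
rng = np.random.default_rng(1)
n = 200
service_q = rng.normal(size=n)
info_q = rng.normal(size=n)
satisfaction = 0.3 * service_q + 0.5 * info_q + rng.normal(scale=0.7, size=n)

X = sm.add_constant(pd.DataFrame({"service_quality": service_q,
                                  "information_quality": info_q}))
fit = sm.OLS(satisfaction, X).fit()
print(fit.params)  # information_quality should show the larger coefficient
```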

5. Models, Instruments, and Tools

Two main research tools were identified in the SLR: questionnaires and interviews. Of the 31 articles, 21 used questionnaires, 7 performed interviews, and 5 used both. The questionnaires were usually built from the analysis of existing literature and from reviews or interviews with government officials. The interviews were often used in support of the questionnaires, to raise the aspects determining the quality of the services or to validate the researchers’ ideas.
Some studies are based on bibliographic reviews to build their models [31,38]. Others make use of practices from other areas, such as focus groups [46]. Other practices, such as service documentation analysis [46] and analysis of user complaint reports [20] also appear.
The D&M model has a broad procedural focus, establishing specific causal relationships. The TAM model also has a procedural approach but considers a more individual rather than general perspective, emphasizing the influence of individual variables such as subjective norms on the decision-making process regarding the use of a service. SERVQUAL, in turn, focuses on the performance of the service, considering its general characteristics. Future studies may propose integrated models for evaluating public services.
Most studies use linear regression models or structural equations based on exploratory and confirmatory factor analyses, such as Principal Component Analysis (PCA) and Structural Equation Modeling (SEM), to evaluate the structure of the measuring instrument, as well as reliability analyses to evaluate the consistency and reliability of the sample. Other models are also used, such as Partial Least Squares (PLS) [25], Multicriteria Satisfaction Analysis (MUSA) [24], Grounded Theory [46], content analysis [29], Principal Axis Factoring [33], and the Parasuraman Gap model [6].
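As a concrete example of the reliability analyses mentioned above, the following sketch computes Cronbach's alpha, a common internal-consistency statistic, for a synthetic multi-item scale (the choice of statistic is ours; the studies do not all specify which reliability measure they use):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 5-item scale answered by 100 respondents (7-point Likert),
# driven by a shared latent factor so the items correlate.
rng = np.random.default_rng(7)
latent = rng.normal(size=(100, 1))
responses = np.clip(np.round(4 + latent + rng.normal(scale=0.8, size=(100, 5))), 1, 7)
print(f"alpha = {cronbach_alpha(responses):.2f}")  # ~0.7+ is usually deemed acceptable
```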
Table 9 presents an overview of the models used according to the country. In general, research in developing countries adopts models already established in the area, without well-defined mechanisms for their adaptation, while developed countries create context-specific models, as is the case with the American Customer Satisfaction Index (ACSI) and the European Customer Satisfaction Index (ECSI).

6. Discussion

In this section, we discuss results observed from our systematic mapping.

6.1. Performance Evaluation of Public Services

Performance is a term that describes the successful fulfillment of one or more activities performed by an organization. In this sense, evaluating the performance of public services is a task of utmost importance to reveal the full value of service projects. In the public sector, performance is difficult to identify because the processes are harder to quantify than in the private sector, where the main purpose is profit. This evaluation is also not effective without considering the perspectives on the key issues held by each stakeholder, especially the final users [48].
The success of governmental organizations depends heavily on the quality of the services they provide to citizens. People place strong importance on the attention the administration pays to the complaints they make, so a useful and suitable reply system to deal with these complaints is one of the necessary improvements for all administrations. As a consequence, government service managers should be able to reduce risks (e.g., investing valuable resources in service quality characteristics that may not work effectively).
However, it is important to know which dimensions of public service quality contribute to enhancing user satisfaction. As governments increase the development of online systems, it is necessary to assess the effectiveness of their digital services. While information systems (IS) success models have received much academic attention, little research has been conducted to assess the success of digital public services [21]. In this sense, there is a need to investigate whether traditional IS success models can be extended to online systems. Such evaluation efforts can enable government agencies to ascertain whether they can perform the required tasks and deliver services as expected.
Currently, the user satisfaction dimension is strongly related to the management, use, and appearance of online services, especially in town councils with a large population [25]. However, new technologies constitute a basic service for any citizen, regardless of the size of the town, so public actions are necessary to promote their use. This would have a positive effect on the relationship between citizens and the public administration.
Service quality is another dimension related to the performance evaluation of public services. The quality of a service can help the public administration make decisions that increase citizen satisfaction. Prior to building a service quality management system, the problems of the public service should be identified and the measurement tool should be defined [18].
Finally, a limitation of most studies observed in our SLR is that the practical investigations of public services are based on a mandatory information system in a particular country and region, with specific characteristics. This can affect the generalizability of the results; thus, the interpretation of the results should be confined to countries and regions with similar settings.

6.2. Practical Utilization of Service Evaluation

The use of performance indicators by public sector organizations has been on the public sector management agenda in many governments around the world. This SLR identified works carried out to explore the evaluation of several applications according to the user’s perspective.
In [17], the authors provide a set of clear and useful e-health evaluation criteria that help improve the use of services by citizens, integrate the e-health evaluation framework, and address areas that require further attention in the development of future e-health initiatives. Other works, such as [2,36], focus on the assessment of services related to tax payment. Their results point to strong connections between the following constructs: system quality, service quality, information quality, user satisfaction, and perceived usefulness.
Many works use indicators to evaluate the performance of local governments [18,25,37]. Local governments are administrative bodies smaller than a state; the term contrasts with services at the nation-state level, i.e., the central government or (where appropriate) the federal government. It is important to highlight both the importance of and the shortage of studies on public service delivery and quality by local governments. Local organizations have the advantage of knowing better what their citizens want. This is significant because the greater the quality citizens attribute to public services, the less they will oppose financing them. This results in better cooperation between citizens and the administration, and a better match between what citizens want and what the administration provides.
Finally, the need to know whether a public organization has met its aims has led to various forms of assessment. In this sense, it is necessary to develop more understandable ways to approach users. Some people find it difficult to understand the forms they need to fill in, and the use of plain language by the public administration is highly appreciated. Diverse means of communication (besides fax, telephone, the Internet, etc.) can become valuable channels for users to evaluate their experiences with the service received. E-mail, comment sections, chat rooms, search features, broadcasting of government events, and website personalization should facilitate connections and interaction between government institutions and users.

7. Final Considerations

This study identified the approaches for evaluating services from the perspective of the user, specifically the models and dimensions of public service evaluation most used in the last 15 years. It was verified that most of the studies use or adapt older models, some of them developed in the private sector. The three most used models were the DeLone and McLean Model of Information Systems Success (D&M), the Technology Acceptance Model (TAM), and SERVQUAL.
This SLR also identified four approaches to service evaluation considering the user perspective: quality of service, user satisfaction, user experience, and success and acceptance of IS. It also identified three main evaluation dimensions: satisfaction, service quality, and information quality.
The lack of service evaluation models that present clear guidelines for adaptation to different contexts creates a loophole whereby a model may be applied in completely different ways. This SLR showed that the parties most affected by this are developing countries, which generally use consolidated general models. On the other hand, developed countries usually have instruments specific to their contexts, as is the case with ACSI and ECSI.
A limitation of this paper is inherent in the design and execution of the primary studies: the results were obtained from a set of articles restricted to the selected databases (Scopus, ACM, and IEEE). Future research may expand the databases and validate the findings obtained in this work.
Public sector services should be designed to allow citizens to manage their lives as efficiently as possible. Such designs should be based on an understanding of the user experience, with services shaped to integrate into the user’s world. A good strategy will avoid (or mitigate) failure, especially in electronic government projects. Several factors pertinent to public administration have been investigated and analyzed by previous projects. Knowing these factors allows governments to build more effective plans and actions that contribute positively to the success of their public projects, while also avoiding problems that may arise in the perceived effectiveness of public sector services considering the expectations of users.

Author Contributions

Conceptualization, G.V.P., V.G.d.M., and M.P.P.d.S.; methodology, G.V.P. and V.G.d.M.; validation, V.G.d.M. and M.P.P.d.S.; formal analysis, V.G.d.M. and G.V.P.; investigation, V.G.d.M. and G.V.P.; data curation, G.V.P. and V.G.d.M.; writing—original draft preparation, G.V.P. and V.G.d.M.; writing—review and editing, G.V.P.; supervision, R.M.d.C.F.; project administration, R.M.d.C.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research is part of a cooperation agreement between the Information Technology Research and Application Center (ITRAC) and the Brazilian Ministry of Economy.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ACSI       American Customer Satisfaction Index
D&M        DeLone and McLean Model
EC         Exclusion Criteria
ECSI       European Customer Satisfaction Index
IC         Inclusion Criteria
IS         Information Systems
MQ         Main Question
PCA        Principal Component Analysis
SEM        Structural Equation Modeling
SERVQUAL   Service Quality Model
SLR        Systematic Literature Review
SQ         Secondary Question
TAM        Technology Acceptance Model
UTAUT      Unified Theory of Acceptance and Use of Technology
XE         Experience Effectiveness

References

1. Gardenghi, J.L.; Pereira, L.G.; Alcantara, S.M.; Figueiredo, R.M.C.; Ramos, C.S.; Ribeiro, L.C.M. Digitalization by Means of a Prototyping Process: The Case of a Brazilian Public Service. Information 2020, 11, 413.
2. Floropoulos, J.; Spathis, C.; Halvatzis, D.; Tsipouridou, M. Measuring the Success of the Greek Taxation Information System. Int. J. Inf. Manag. 2010, 30, 47–56.
3. Soares, V.d.A.; Iwama, G.Y.; Menezes, V.G.; Gomes, M.M.F.; Pedrosa, G.V.; Silva, W.C.M.P.d.; Figueiredo, R.M.D.C. Evaluating Government Services Based on User Perspective. In Proceedings of the 20th Annual International Conference on Digital Government Research, Dubai, United Arab Emirates, 18–20 June 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 425–432.
4. Pedrosa, G.; Menezes, V.; Figueiredo, R. A Quality Management-Based Approach to Evaluate Public Services: A Case Study in Brazil. In Proceedings of the 53rd Hawaii International Conference on System Sciences, HICSS 2020, Maui, HI, USA, 7–10 January 2020.
5. Parasuraman, A.; Zeithaml, V.A.; Berry, L.L. A Conceptual Model of Service Quality and Its Implications for Future Research. J. Mark. 1985, 49, 41.
6. Parasuraman, A.; Zeithaml, V.A.; Berry, L.L. SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality. J. Retail. 1988, 64, 12–40.
7. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Manag. Sci. 1989, 35, 982–1003.
8. DeLone, W.H.; McLean, E.R. The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. J. Manag. Inf. Syst. 2003, 19, 9–30.
9. Parasuraman, A.; Zeithaml, V.A.; Malhotra, A. E-S-QUAL: A Multiple-Item Scale for Assessing Electronic Service Quality. J. Serv. Res. 2005, 7, 213–233.
10. Pedrosa, G.V.; Kosloski, R.A.D.; Menezes, V.G.d.; Iwama, G.Y.; Silva, W.C.M.P.d.; Figueiredo, R.M.d.C. A Systematic Review of Indicators for Evaluating the Effectiveness of Digital Public Services. Information 2020, 11, 472.
11. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Keele University and Durham University Joint Report; Keele University: Keele, UK, 2007.
12. Brereton, P.; Kitchenham, B.A.; Budgen, D.; Turner, M.; Khalil, M. Lessons from Applying the Systematic Literature Review Process within the Software Engineering Domain. J. Syst. Softw. 2007, 80, 571–583.
13. Kitchenham, B. Procedures for Performing Systematic Reviews; Keele University: Keele, UK, 2004; Volume 33, pp. 1–26.
14. Kitchenham, B.; Brereton, P. A Systematic Review of Systematic Review Process Research in Software Engineering. Inf. Softw. Technol. 2013, 55, 2049–2075.
15. Petticrew, M.; Roberts, H. Systematic Reviews in the Social Sciences: A Practical Guide; John Wiley & Sons: Hoboken, NJ, USA, 2008.
16. Dieste, O.; Padua, A.G. Developing Search Strategies for Detecting Relevant Experiments for Systematic Reviews. In Proceedings of the First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007), Madrid, Spain, 20–21 September 2007; pp. 215–224.
17. Hamid, A.; Sarmad, A. Evaluation of E-health Services: User’s Perspective Criteria. Transform. Gov. People Process Policy 2008, 2, 243–255.
18. Huai, J. Quality Evaluation of E-Government Public Service. In Proceedings of the International Conference on Management and Service Science, Wuhan, China, 12–14 August 2011; pp. 1–4.
19. Fgee, E.B.; Alkallas, M.I. E-Government in Libya: Constraints, Potentials and Implementation. In Proceedings of the International Conference on Computer Applications Technology (ICCAT), Sousse, Tunisia, 20–22 January 2013; pp. 1–7.
20. Langham, J.; Paulsen, N.; Hartel, C. Evaluating Design Effectiveness for Public Sector Services: An Introduction to XE. In Proceedings of the 12th European Conference on Innovation and Entrepreneurship (ECIE), Paris, France, 21–22 September 2017; p. 8.
21. Wang, Y.S.; Liao, Y.W. Assessing eGovernment Systems Success: A Validation of the DeLone and McLean Model of Information Systems Success. Gov. Inf. Q. 2008, 25, 717–733.
22. Scott, M.; Delone, W.; Golden, W. IT Quality and Egovernment Net Benefits: A Citizen Perspective. In Proceedings of the 19th European Conference on Information Systems, ECIS 2011, Helsinki, Finland, 9–11 June 2011.
23. Okunola, O.M.; Rowley, J. Dimensions of the User Experience of E-Government Services: The Nigeria Immigration Service Website. In Proceedings of the 13th European Conference on EGovernment (ECEG 2013), Como, Italy, 13–14 June 2013.
24. Skordoulis, M.; Alasonas, P.; Pekka-Economou, V. E-Government Services Quality and Citizens’ Satisfaction: A Multi-Criteria Satisfaction Analysis of TAXISnet Information System in Greece. Int. J. Product. Qual. Manag. 2017, 22, 82–100.
25. Gutiérrez Rodríguez, P.; Vázquez Burguete, J.L.; Vaughan, R.; Edwards, J. Quality Dimensions in the Public Sector: Municipal Services and Citizen’s Perception. Int. Rev. Public Nonprofit Mark. 2009, 6, 75–90.
26. Srivastava, S.; Teo, T.; Nishant, R. What Is Electronic Service Quality? In Proceedings of the 19th European Conference on Information Systems (ECIS), Helsinki, Finland, 9–11 June 2011.
27. Jing, F.; Wenting, Y. A Study of G2C E-Government Citizen’s Satisfaction: Perspectives of Online Service Quality and Offline Service Quality. In Proceedings of the 11th International Conference on Service Systems and Service Management (ICSSSM), Beijing, China, 25–27 June 2014.
28. Uthaman, V.S.; Ramankutty, V. E-Governance Service Quality of Common Service Centers: A Review of Existing Models and Key Dimensions. In Proceedings of the ICEGOV ’17: 10th International Conference on Theory and Practice of Electronic Governance, New Delhi, India, 7–9 March 2017; p. 2.
29. Alanezi, M.A.; Mahmood, A.K.; Basri, S. E-Government Service Quality: A Qualitative Evaluation in the Case of Saudi Arabia. Electron. J. Inf. Syst. Dev. Ctries. 2012, 54, 1–20.
30. Patra, P.K.; Ray, A.K.; Padhy, R.; Patnaik, S. Electronic Governance Service Quality: A Study in the State of Odisha. Int. J. Serv. Technol. Manag. 2015, 21, 238.
31. Uthaman, V.S.; Vasanthagopal, R. A Comprehensive Multidimensional Conceptual Model to Assess the E-Governance Service Quality at Common Service Centers in India. In Proceedings of the Special Collection on eGovernment Innovations in India, ICEGOV ’17, New Delhi, India, 7–9 March 2017; ACM Press: New York, NY, USA, 2017; pp. 99–106.
32. Jang, C.L. Measuring Electronic Government Procurement Success and Testing for the Moderating Effect of Computer Self-Efficacy. Int. J. Digit. Content Technol. Its Appl. 2010, 4, 224–232.
33. Bhattacharya, D.; Gulla, U.; Gupta, M. E-service Quality Model for Indian Government Portals: Citizens’ Perspective. J. Enterp. Inf. Manag. 2012, 25, 246–271.
34. Rana, N.P.; Dwivedi, Y.K.; Williams, M.D.; Weerakkody, V. Investigating Success of an E-Government Initiative: Validation of an Integrated IS Success Model. Inf. Syst. Front. 2015, 17, 127–142.
35. Alfadli, I.; Munro, M. Citizen Centered E-Government Services Assessment Framework. In Proceedings of the European Conference on Digital Government, Santiago, Spain, 25–26 October 2018; p. 8.
36. Saha, P.; Nath, A.; Salehi-Sangari, E. Success of Government E-Service Delivery: Does Satisfaction Matter? In Electronic Government; Wimmer, M.A., Chappelet, J.L., Janssen, M., Scholl, H.J., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6228, pp. 204–215.
37. Zheng, F.; Lu, Y. Influencing Factors of Public Satisfaction in Local Governments’ Overall Performance Evaluation in China. In Proceedings of the Fifth International Conference on Business Intelligence and Financial Engineering, Lanzhou, China, 18–21 August 2012; pp. 495–500.
38. Ssemugabi, S.; de Villiers, M.R. Make Your Choice: Dimensionality of an Open Integrated Conceptual Model for Evaluating E-Service Quality, Usability and User Experience (e-SQUUX) of Web-Based Applications. In Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists, SAICSIT ’16, Johannesburg, South Africa, 26–28 September 2016; ACM Press: New York, NY, USA, 2016; pp. 1–10.
39. Arias, M.I.; Maçada, A.C.G. Digital Government for E-Government Service Quality: A Literature Review. In Proceedings of the 11th International Conference on Theory and Practice of Electronic Governance, ICEGOV ’18, Galway, Ireland, 4–6 April 2018; ACM Press: New York, NY, USA, 2018; pp. 7–17.
40. Alanezi, M.A.; Mahmood, A.K.; Basri, S. Conceptual Model for Measuring E-Government Service Quality. In Proceedings of the IEEE Conference on Open Systems, Langkawi, Malaysia, 25–28 September 2011; pp. 411–416.
41. Almalki, O.; Duan, Y.; Frommholz, I. Developing a Conceptual Framework to Evaluate E-Government Portals’ Success. In Proceedings of the 13th European Conference on E-Government, Varese, Italy, 13–14 June 2013; Volume 1, pp. 19–26.
42. Anwer Anwer, M.; Esichaikul, V.; Rehman, M.; Anjum, M. E-Government Services Evaluation from Citizen Satisfaction Perspective: A Case of Afghanistan. Transform. Gov. People Process Policy 2016, 10, 139–167.
43. Singh, V.; Singh, G. Citizen Centric Assessment Framework for E-Governance Services Quality. Int. J. Bus. Inf. Syst. 2018, 27, 20.
44. Alias, E.S.; Mohd Idris, S.H.; Ashaari, N.S.; Kasimin, H. Evaluating E-Government Services in Malaysia Using the EGOVSAT Model. In Proceedings of the 2011 International Conference on Electrical Engineering and Informatics, Bandung, Indonesia, 17–19 July 2011; pp. 1–5.
45. Amritesh; Misra, S.C.; Chatterjee, J. Applying Gap Model for Bringing Effectiveness to E-Government Services: A Case of NeGP Deployment in India. Int. J. Electron. Gov. Res. 2013, 9, 43–57.
46. Sigwejo, A.; Pather, S. A Citizen-Centric Framework for Assessing E-Government Effectiveness. Electron. J. Inf. Syst. Dev. Ctries. 2016, 74, 1–27.
47. Venkatesh, V. Determinants of Perceived Ease of Use: Integrating Control, Intrinsic Motivation, and Emotion into the Technology Acceptance Model. Inf. Syst. Res. 2000, 11, 342–365.
48. Modesto, A.S.C.; Figueiredo, R.M.d.C.; Ramos, C.S.; Santos, L.d.S.; Venson, E.; Pedrosa, G.V. Organizational Strategies for End-User Development: A Systematic Literature Mapping. Informatics 2021, 8, 15.
Figure 1. Database research filters.
Figure 2. Full research process of paper selection.
Table 1. Research questions.

ID    Research Question
MQ    What are the existing approaches to evaluate public services from the perspective of the user?
SQ1   What are the dimensions used to evaluate public services from the perspective of the user?
SQ2   What models, instruments, and tools have been adopted to assist in the evaluation of a service, considering the satisfaction of the user?
Table 2. Research terms based on the PICOC approach.

PICOC Element   Term                  Synonyms
Population      Service evaluation    Service quality, service assessment
Intervention    Citizen perspective   User perspective, user satisfaction
Comparison      -                     -
Outcome         Model                 Tools, dimensions, approach
Context         Government            Public sector, public services
Table 3. Inclusion criteria.

ID     Inclusion Criterion
IC01   Studies that present approaches for the evaluation of public services according to the perspective of the user
IC02   Studies that present models, techniques, and tools for the evaluation of public services according to the perspective of the user
IC03   Studies that suggest dimensions for the evaluation of public services according to the perspective of the user
IC04   Studies that present a history of the evaluation of public services according to the perspective of the user
IC05   Studies that present reports of evaluation of public services according to the perspective of the user
Table 4. Exclusion criteria.

ID     Exclusion Criterion
EC01   Studies related to the evaluation of public services from a point of view other than that of the citizen/user
EC02   Studies related to the evaluation of services in the context of industry
EC03   Studies related to digital government outside the context of service evaluation or quality of services
EC04   Studies written in a language other than English
EC05   Studies that do not fit into any inclusion criteria
Table 5. Databases’ results.

Database              Results
Scopus                80
IEEE Xplore           13
ACM Digital Library   20
Total                 113
Table 6. List of papers selected.

Code  Ref.   Code  Ref.   Code  Ref.   Code  Ref.
S01   [17]   S09   [18]   S17   [19]   S25   [20]
S02   [21]   S10   [22]   S18   [23]   S26   [24]
S03   [25]   S11   [26]   S19   [27]   S27   [28]
S04   [2]    S12   [29]   S20   [30]   S28   [31]
S05   [32]   S13   [33]   S21   [34]   S29   [35]
S06   [36]   S14   [37]   S22   [38]   S30   [39]
S07   [40]   S15   [41]   S23   [42]   S31   [43]
S08   [44]   S16   [45]   S24   [46]   -     -
Table 7. Evaluation approaches and their respective studies.

Approach                       Studies
Service quality                S03, S07, S09, S11, S12, S16, S20, S29, S30
User satisfaction              S06, S08, S14, S23, S26
User experience                S18, S22, S25
Success and acceptance of IS   S02, S04, S05, S10, S13, S15, S19, S21, S27, S28, S31
Table 8. Evaluation dimensions and their respective studies.

Dimension             Studies
Satisfaction          S01, S02, S03, S04, S06, S07, S11, S14, S15, S18, S21, S22, S24, S26, S27, S28
Service quality       S02, S03, S04, S06, S07, S12, S13, S15, S19, S23
Information quality   S02, S04, S07, S12, S15, S18, S19, S21, S22, S27, S28, S31
Table 9. Models and context of application.

Model              Country
D&M                China, Greece, India, Taiwan
SERVQUAL           China, India, Singapore
ECSF               Tanzania
TAM                India, Saudi Arabia
UTAUT              India
EGOVSAT            Malaysia
XE                 Australia
ACSI               USA
ECSI               Europe
Context-specific   Afghanistan, Greece, Nigeria, Saudi Arabia, Spain, Sweden
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
