Article

Quality Assessment of the Services Delivered by a Court, Based on the Perceptions of Users, Magistrates, and Court Officials

by Patrícia Moura e Sá 1, Maria João Rosa 2,*, Gonçalo Santinha 3 and Cátia Valente 4
1 Faculty of Economics & CICP, University of Coimbra, 3004-512 Coimbra, Portugal
2 CIPES, DEGEIT, University of Aveiro, 3810-193 Aveiro, Portugal
3 GOVCOPP, DCSPT, University of Aveiro, 3810-193 Aveiro, Portugal
4 DCSPT, University of Aveiro, 3810-193 Aveiro, Portugal
* Author to whom correspondence should be addressed.
Sustainability 2021, 13(2), 504; https://doi.org/10.3390/su13020504
Submission received: 8 December 2020 / Revised: 30 December 2020 / Accepted: 4 January 2021 / Published: 7 January 2021
(This article belongs to the Special Issue Quality Management and Standardization for Sustainability)

Abstract: This paper aims to measure the quality of the services delivered by a court by assessing the satisfaction of court users and service providers, i.e., magistrates and court officials. For that purpose, a case study was carried out and data were collected by means of a questionnaire based on the SERVPERF instrument, in which perceived service quality is measured through court users', magistrates', and court officials' perceptions of post-service performance. One hundred and fifty-eight questionnaires were successfully returned. An in-depth interview was later conducted with the court administrator to gain a richer understanding of the results achieved and to ask follow-up questions. Overall, findings revealed that court users, magistrates, and court officials clearly have a positive view of the services provided, although improvement is needed, particularly in the court's facilities and technological equipment. The current research sheds some light on the potentialities and difficulties of assessing service quality in the judiciary and contributes to the validation of the SERVPERF instrument in this context.

1. Introduction

There is broad consensus that citizens' expectations regarding public services have increased in recent decades, becoming an important driver for improving the quality of those services. High-quality public services are expected to be delivered with integrity, to be focused on citizens' needs, and to be effective, transparent, and accessible to all, including the most vulnerable [1].
National and international guidelines have been defined to support, guide, and encourage those who want to modernize public administrations and improve their quality. Launched as a reference to existing European Union (EU) policies and international practices, the European Commission (EC) Quality of Public Administration Toolbox for Practitioners [2] states in its foreword that “the quality of a country’s institutions, both governmental and judicial, is a key determining factor for its well-being” [2] (p. 1). The document stresses that quality, independence, and efficiency are key components of an effective justice system, messages closely related to those of the European Convention on Human Rights (ECHR), the Europe 2020 Strategy, the EU Justice Agenda for Europe 2020 and, ultimately, the 2030 Agenda for Sustainable Development, expressed mainly through Goal 16 (promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels).
However, compared to other areas of the public sector, the judiciary has attracted surprisingly little attention, and very few studies have been carried out to analyze the quality of the services provided by courts.
Courts tend to be regarded as very bureaucratic and inflexible organizations, incapable of meeting the demands of a quickly changing world [3], and thus slow and ill-equipped to cope with an increasing demand for judiciary services. According to [4], the difficulty of introducing reforms in courts is strongly related to two particular features of such organizations: (1) courts are dominated by professionals (judges), whose institutional identities and commitments are relatively weak compared to their professional integrity and attachment to personal styles and preferences; and (2) judges lack authority over the people who are key to court administration. Other characteristics of bureaucratic organizations are also visible. For instance, the judiciary system typically has a silo culture with very limited teamwork: “Work flows from specialized functional desk to specialized functional desk in assembly-line fashion” [4] (p. 1868).
Clearly, some changes have recently been introduced. As reported by researchers from different countries (e.g., [5]), in Portugal, significant investments have been made to improve court efficiency through modernization and reorganization programs. In particular, a new model of judiciary organization and management was implemented in 2014. The model aims to improve the responsiveness of the system and the quality of the services provided by stressing court specialization, making the assignment of cases to a court/judge more rational, and fighting slowness. At the same time, the model encourages autonomy in court management and the adoption of managerial practices, such as management by objectives (Decree-Law n. 49/2014). Yet, very few management training opportunities exist for judges.
Despite the importance of bringing the judiciary system closer to its users, translating the idea of customer focus to the courts context raises some particular concerns. As [6] (p. 96) stress, “judiciary activities are not based on customer satisfaction, but rather on the compliance with the principles of the law and on knowledge of ‘technical’ nature”. However, it must be noted that this concern is particularly relevant when it comes to assessing the quality of judiciary decisions, which is clearly not the focus of this research. By focusing on what is commonly known as functional quality [7], i.e., on the aspects related to the interaction between service providers and users that takes place in a court, this study does not intend to measure the quality of judiciary processes (technical quality).
The measurement and evaluation of service quality have been widely discussed in the literature, with many studies focused either on business or on public organizations [8]. Furthermore, as pointed out by [8], previous research provides considerable evidence on the adequacy of SERVQUAL scales to evaluate services provided by the public sector. However, little research has been conducted in the judiciary field.
The aim of this paper is, thus, to illustrate how the SERVPERF instrument can be used to assess the quality of the services delivered by a court based on the perceptions of court users and service providers, namely magistrates and court officials.
Service quality was assessed in this study using the SERVPERF instrument, which, although based on the general dimensions proposed in the SERVQUAL model [9,10], as the name suggests, takes only users' perceptions of the performance achieved to measure the quality of the services provided. The SERVPERF instrument was proposed by Cronin and Taylor in the nineties [11], since the authors considered that, rather than being based on user expectations, service quality should be regarded as a form of consumer attitude, involving both an end state and a process.
Specifically, the paper addresses three main questions:
  • Is the SERVPERF instrument valid to assess the quality of the services delivered by a court?
  • How do court users and judiciary operators (magistrates and officials) perceive the quality of courts?
  • Are there substantial differences between external (users) and internal (magistrates and officials) views regarding satisfaction with the services provided?
For the purposes of the current research, an empirical case was selected to illustrate how the SERVPERF instrument could be adapted to the judiciary context. The illustrative case corresponds to a large court in the Centre Region of Portugal. This court is one of the twenty-three judicial districts (comarcas) in the country. Being one of the major courts (it serves a region with around 430,000 inhabitants), it has competencies in all generic and specialized fields (including labor and family). According to the statistics available, the court had about 29,000 cases at the end of 2016 (almost 50% of the 2014 figure, which reveals a substantial increase in efficiency).
The current paper is expected to contribute to the validation of the SERVPERF model in a context that is under-researched and that calls for the adaptation of the scales used to capture the specificities of a court. In addition, it should be highlighted that, to date, few studies have compared the perspectives of users and employees (in this case, magistrates and court officials) regarding perceptions of the level of service quality, and, to the best of the authors' knowledge, this is the first paper to address the issue in the judiciary system. Capturing the perceptions of magistrates and court officials is highly relevant, since they play a critical role in service delivery, including listening to user needs and concerns. Analyzing the similarities and differences between the end beneficiaries of a service and its providers is equally important [8]. In particular, alignment between internal and external perceptions is essential to properly identify priority improvement actions. Accordingly, two hypotheses will be tested:
Hypothesis 1 (H1).
The SERVPERF is a valid instrument to assess the quality of the services delivered by a court.
Hypothesis 2 (H2).
There are no significant differences between external (users) and internal (magistrates and officials) views regarding satisfaction with the services provided by a court.
The remainder of the article is organized into five sections. Section 2, of a conceptual nature, provides a brief overview of quality in judiciary services. Section 3 contextualizes the empirical analysis and explains the methodology. In Section 4, findings are presented and analyzed. Section 5 summarizes the main conclusions and reflects on the relevance of the study for policymaking and court management. Section 6 identifies the study's limitations and suggests directions for future research.

2. Quality in the Judiciary: A Quick Overview

It is widely acknowledged that justice is a very particular service and that courts are peculiar organizations. In fact, while judiciary services clearly respond to collective needs and are thus classified as services of general interest, they are strongly grounded in the principles of law and justice and occupy a singular position among state functions [12]. Moreover, as [13] stresses, courts have multiple stakeholders, often with conflicting interests (litigants, victims, witnesses, lawyers, magistrates, media, taxpayers, etc.), making it virtually impossible to please all of them simultaneously.
Courts can be regarded as professional bureaucracies [14]. Bureaucratic organizations are characterized by rationality, impersonality, and impartiality [15]. These features are important to ensure that justice is applied in a neutral and equitable way to all citizens. Education and training are essential to professionalism [15]. In the case of professional bureaucracies, the role (and power) of certain professionals is particularly high and has to be taken into account when implementing management principles and practices. Despite its claims of rationality and impartiality, the bureaucratic model in place in the judiciary has been pointed out as one of the main drivers of the judicial crisis existing in most countries. Long lead times, poor quality of service, and reduced throughput are regarded as responsible for a diminished perception of the effectiveness of legal systems [16].
Initiatives taken to change the way the system is run and managed tend to show the impact of New Public Management (NPM). NPM became popular in the nineties, showing the influence of two main schools: public choice and managerialism [17]. In a seminal work, [18] points out eight essential elements of NPM: (1) cost cutting; (2) disaggregating traditional bureaucratic organizations into separate agencies; (3) decentralization of management authority within each public agency; (4) separating the functions of public service provision from purchasing; (5) introducing market and quasi-market-type mechanisms; (6) requiring staff to work to performance targets, indicators, and output objectives; (7) establishing greater flexibility in public employment; and (8) increasing emphasis on service quality and customer responsiveness. Over the years, NPM has been subject to many criticisms [19,20]. According to some scholars, relevant principles of public services are neglected by NPM, namely citizenship, justice, equality, and dependence. In any case, the measures and frameworks introduced in the judiciary to foster performance management and results orientation clearly show the influence of NPM. A common feature of the improvement initiatives that have been implemented in judiciary systems worldwide is the introduction of feedback mechanisms, including performance measures [16].
Among the performance measurement initiatives implemented in the judiciary, it is worth mentioning the development of the Trial Court Performance Standards (TCPS) in the United States in 1987. Despite not being an organizational assessment model [3], it was, according to [21], an important landmark in the introduction of quality management principles in courts, especially by pointing out the importance of collecting and analyzing performance data and introducing a customer focus. In this regard, as stressed by [4] (p. 1873), standard 1.4 states that “judges and other trial court personnel are courteous and responsive to the public and accord respect to all with whom they come into contact”. Overall, the TCPS includes 68 measures and 22 standards, clustered around five areas of measurement: (1) access to justice; (2) expedition and timeliness; (3) equality, fairness, and integrity; (4) independence and accountability; and (5) public trust and confidence [21].
Southeast Asian countries have also developed their own instruments. The most well-known is the Singaporean Justice Scorecard, created in the late nineties. Inspired by the Balanced Scorecard principles, the Justice Scorecard analyses performance according to four perspectives: community; internal processes; learning and growth; and financial [21]. Similarly, in Europe, some performance indicators were introduced, including [13]: turn-around times (i.e., the time between initiation and close of a task), average number of drafts of documents, volume of customer complaints, etc. Later on, in 2004, the Committee on Civil Justice and Home Affairs recommended the creation of a quality charter to improve quality in courts. The system was supposed to include a set of criteria for evaluating judicial systems, benchmark information, a system for dissemination of best practices amongst the member states of the European Union, and an evaluation report on compliance with the quality charter [3]. Yet, no practical developments have occurred.
Perhaps the most sophisticated framework for quality management adoption in courts was proposed in 2008 by the International Consortium for Court Excellence, based in Singapore. The Framework for Court Excellence comprises a set of criteria specifically designed for courts, grouped into seven areas [22]: (1) court leadership and management; (2) court planning and policies; (3) court resources; (4) court proceedings and processes; (5) client needs and satisfaction; (6) affordable and accessible court services; and (7) public trust and confidence. Overall, it aims to raise quality awareness in district courts throughout the world and to establish a new Global Court of Excellence Award.
According to [23], sophisticated frameworks, such as the TCPS, require considerable data collection efforts and are difficult to sustain in the long run, leading to the development of simpler systems. In particular, the authors highlight the strengths of CourTools, an approach based on 10 measures: access and fairness, clearance rates, time to disposition, age of active pending caseload, trial date certainty, reliability and integrity of case files, collection of monetary penalties, effective use of jurors, cost per case, and court employee satisfaction.
Under the influence of the European Union, and particularly due to the efforts of the European Commission for the Efficiency of Justice (CEPEJ), court satisfaction surveys have been promoted to improve the measurement and functioning of the member states' judicial systems. According to the CEPEJ, such surveys should collect information from different court stakeholders, including litigants, legal professionals, judges, court staff, court users, and the general public. Questionnaires have been regarded as important tools in this regard, even if more qualitative approaches should also be used to complement such feedback. CEPEJ has even developed a questionnaire guideline that covers different dimensions, such as court premises, organization of work, availability of information, the level of service and expertise provided by judges and court staff, and the timeliness and legibility of judicial decisions.
Portugal has not been immune to the international trend of introducing performance-based management and customer-focused approaches in the public sector [24]. Over the last decade, the idea that courts had to be evaluated by their users was reinforced and, in 2013, a pilot study on the measurement of users' satisfaction at Campus da Justiça (Lisbon) was carried out. For that purpose, a questionnaire entitled the Courts Quality Barometer was applied. The proposed instrument measured user satisfaction regarding the following items: conditions of access to the court, signposting in the court building, waiting conditions, courtroom furnishing, clarity of summonses, time lapse between the summons and the hearing, punctuality of hearings, attitude and courtesy of court staff, level of competence of non-judicial court staff, attitude and courtesy of judges and prosecutors, language used by the judges and prosecutors, the time allowed to set out arguments at the hearing, timeframe for the delivery of judgments, clarity of judgements, and information provided by the court's information service (https://rm.coe.int/168074816f). Findings for Campus da Justiça revealed a mixed picture. Timeliness questions got the lowest scores (around 3.5 out of 10 points), while conditions of access to the court, signposting in the court building, and information provided emerged as main strengths (with scores between 7 and 8 points). Overall, the average satisfaction level was 6.2 (https://www.dn.pt/portugal/utentes-insatisfeitos-com-tribunais-do-campus-de-justica-3837036.html). As a result of this pilot study, in 2016, satisfaction surveys started to be administered on a regular basis to the users of Portuguese courts, as part of the Program Closer Justice (Programa Justiça + Próxima). Yet, the results of the application of these instruments are not publicly available. Moreover, the most recent CEPEJ report on the matter (https://rm.coe.int/rapport-avec-couv-18-09-2018-en/16808def9c) shows that Portugal is still not systematically assessing court satisfaction from the perspective of key stakeholders, namely judges and court staff.
In the justice field, the Portuguese government selected three priorities for the 2015–2018 period: to improve access to the judiciary system, to achieve timely resolution of cases, and to increase organizational efficiency (available at http://www.dgaj.mj.pt/). These priorities are linked to the purposes of guaranteeing that citizens have equal treatment regardless of where they live, enhancing transparency, and improving the quality of the interaction between users and service providers. External communication, training, resources upgrading, standardization of procedures, and simplification are the areas selected for improvement.
Besides the performance measurement initiatives already mentioned, it is possible to find, in the international literature, scattered evidence of the adoption of other quality management models and frameworks. For instance, ref. [25] report the implementation of ISO 9001 standards in district courts in Argentina, Colombia, and Spain, concluding that many problems arise when translating such standards to the courts, including budget constraints regarding the financial resources required for consultancy, training, implementation, certification, and re-certification; resistance from public servants, due to the belief that ISO 9001 is a standard designed exclusively for manufacturing environments; and difficulties in interpreting and implementing many of the standard's specific requirements. The legal sector has also benefited from the Lean Thinking approach, with the purpose of increasing efficiency, reliability, and timeliness [26]. One such study was conducted in the Portuguese court system, looking at waiting time, inventory, transportation, and overproduction indicators in order to understand how courts could become more responsive [27].
When reviewing the main performance measurement frameworks and tools applied to the judiciary, ref. [23] identified four major trends: (1) growing efforts to measure quality in objective terms, through statistics and numbers; (2) tensions and conflicts between what can be identified as the managerial and the traditional legal or judicial perspectives; (3) some attempts to include people and their expectations in the assessment and improvement of quality; and (4) a weak link between the evaluation mechanisms and efforts to improve the quality of justice.
Service quality models fall into the third category/trend, since they measure quality based on the views of people (mainly users), either by considering both expectations and perceptions (as it happens with the SERVQUAL instrument) or by taking perceptions only (as it occurs with the SERVPERF questionnaire).
The literature that reports the use of the SERVQUAL instrument in the public sector is vast and covers a wide range of services [28,29,30]. Applications of the SERVPERF instrument are fewer, but not rare (e.g., [31]). However, to the best of our knowledge, there are no studies reported in the literature in which the SERVQUAL or the SERVPERF instrument were used to assess the quality of the services in the judiciary.

3. Materials and Methods

3.1. Instrument Used to Assess Service Quality

With the specific purpose of assessing the quality of the services provided, a profusion of instruments has been developed, many of them (see, for instance, [32]) following the lines of the SERVQUAL instrument developed by Parasuraman, Zeithaml and Berry in the eighties [33]. The framework is based on the premise that customer satisfaction depends on the comparison between expectations and actual service performance [9,34,35]. This is in line with the confirmation/disconfirmation paradigm [36], meaning that quality is assessed according to the comparison the user makes between his/her expectation and what he/she actually gets.
According to the SERVQUAL model, service quality comprises a set of key dimensions. Although such dimensions have been presented differently over time, there is a large degree of consensus around the following description [10]:
  • tangibles (the appearance of physical facilities, equipment, personnel, and communication material);
  • reliability (the ability to perform the promised service dependably and accurately);
  • responsiveness (the willingness to help customers and provide prompt service);
  • empathy (the caring, individualized attention provided to the customer); and
  • assurance (the knowledge and courtesy of employees and their ability to convey trust and confidence).
Each dimension is represented by a set of measurement items in a questionnaire. Traditionally, the questionnaire has a total of 22 items, measured on a Likert scale.
Despite its popularity, the SERVQUAL instrument has some shortcomings. Amongst others, ref. [37] highlight the fact that, on the one hand, it confuses outcome, process, and expectation and, on the other hand, it is too broad and must be tailored to the service in question.
Several other methodologies for service quality assessment exist. The most relevant are SERVPERF (proposed by [11] and based on the same five dimensions as the SERVQUAL instrument (see, for instance, [38]), but focused exclusively on customer perceptions), Normed Quality (developed by [39] to explore the meaning of expectations), and QUALIMETRO (conceived for the evaluation and control of on-line services). In general, these alternative approaches can be grouped into what have been called perception models [32,40], which argue that it is preferable to measure service quality based only on perceptions of performance. This was the perspective adopted in the current paper.
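To make the distinction concrete, the sketch below (not part of the original study) illustrates in Python how SERVPERF-style dimension scores can be computed from perception ratings alone, in contrast with SERVQUAL, which would subtract expectation ratings to obtain gap scores. The dimension-to-item mapping and the example answers are hypothetical.

```python
# Minimal illustrative sketch (not the authors' code): SERVPERF dimension scores
# are simple averages of perception ratings; no expectation scores are subtracted.
# The item names, dimension mapping, and answers below are hypothetical.
from statistics import mean

DIMENSIONS = {
    "tangibles": ["modern_equipment", "attractive_premises", "signage"],
    "reliability": ["service_according_to_law", "error_free_service"],
    "responsiveness": ["plain_language", "quick_assistance"],
    "assurance": ["inspires_trust", "skills_and_competencies"],
    "empathy": ["individual_attention", "courtesy"],
}

def servperf_scores(answers: dict) -> dict:
    """Average the 1-5 perception ratings of the items in each dimension."""
    return {dim: round(mean(answers[item] for item in items), 2)
            for dim, items in DIMENSIONS.items()}

# One hypothetical court user's ratings on a 5-point Likert scale.
answers = {
    "modern_equipment": 3, "attractive_premises": 3, "signage": 4,
    "service_according_to_law": 4, "error_free_service": 4,
    "plain_language": 4, "quick_assistance": 3,
    "inspires_trust": 4, "skills_and_competencies": 4,
    "individual_attention": 4, "courtesy": 5,
}
print(servperf_scores(answers))
# A SERVQUAL-style gap score would instead compute perception minus expectation per item.
```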
For data collection, two versions of the SERVPERF questionnaire were designed: one administered to the court users and another applied to the magistrates and court officials. When designing the questionnaires, the researchers took advantage of prior contact with the court, previous meetings with judges and court officials, and on-the-spot observation of users' behavior. The two versions of the questionnaire were then pilot tested. As a result of the pilot tests, some items were added and the wording of several others was modified to improve their clarity. For instance, in the tangibles dimension, two items were added, one relating to the signage of the court premises and the other concerning accessibility for disabled users. In the responsiveness dimension, one item was included to assess perceptions regarding the use of plain language. The final questionnaire comprised three sections: the first presents the purpose of the study; the second collects personal data from the participants (gender, age, qualification, frequency of contact with judicial services, and role played); and the third contains the SERVPERF items, grouped into the five dimensions (tangibles, reliability, responsiveness, empathy, and assurance) and assessed by the respondents using a 5-point Likert scale.
Additionally, at a later stage, to discuss the findings and gain an inside perspective on the efforts undertaken by the court to improve service quality, an exploratory interview was conducted with the court administrator. By managing the daily operations and the budget of the courthouse and directing court employees, the court administrator has a relevant role as a decision maker and is thus able to explain some of the results achieved in more detail and to make the management adjustments necessary to overcome some of the problems found.

3.2. Data Collection and Sample Features

The questionnaires were self-administered in September and October 2017 to the users, magistrates, and officials of the selected court. Procedures were put in place to ensure respondents' anonymity and the confidentiality of their responses. The semi-structured interview with the court administrator took place in November 2017.
When data were collected, the court had 55 magistrates and 123 officials. Of the 173 questionnaires administered to magistrates and officials, 115 were successfully completed and returned, a high response rate (66%) for this kind of instrument (see Table 1). In addition, 43 users answered the questionnaire, which was administered to them face-to-face. The vast majority of the persons approached were keen to participate once informed about the study's purpose, and all of the questionnaires administered to users were successfully completed. Administering the questionnaire face-to-face has the advantage of allowing clarifications, but it is a very time-consuming process; therefore, within the time span of the project, it was not possible to obtain a larger sample of users.
As indicated in Table 1, the majority of the users who participated in the study were male, between forty and fifty years old, had completed secondary school, and were currently employed. For the large majority, this was not their first time in court, although for many, contact with judiciary services was occasional. Most of the users were there as witnesses, alone (with no lawyer). Concerning the participating magistrates, women are the majority, and most have worked in the judiciary system for at least ten years. Finally, among the officials, females are once again the majority, with most having worked in the system for over twenty years. As expected, their level of qualification is significantly lower than that of the magistrates, even if important exceptions exist.
Data collected through the questionnaires were statistically analyzed using SPSS. Descriptive statistics and non-parametric hypothesis tests (Kruskal–Wallis (KW), at a 0.05 significance level) were conducted on the answers given to the questionnaire by users, magistrates, and officials of the selected court. The KW test was used since, as a nonparametric statistic, it does not require assumptions of normal distribution and equal variance of the scores across groups (unlike ANOVA). Additionally, it can be used for both continuous and ordinal-level dependent variables.
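For readers who wish to reproduce this kind of comparison outside SPSS, the hedged sketch below shows an equivalent Kruskal–Wallis test in Python using scipy.stats.kruskal. The rating lists are hypothetical and do not correspond to the study data.

```python
# Illustrative only: Kruskal-Wallis H test comparing three groups' ratings on one
# questionnaire item, analogous to the SPSS analysis described above.
# The rating lists are hypothetical, not the study data.
from scipy.stats import kruskal

users       = [3, 4, 3, 2, 4, 3, 3]
magistrates = [2, 3, 2, 2, 3, 2]
officials   = [3, 2, 2, 3, 2, 3, 2, 3]

h_stat, p_value = kruskal(users, magistrates, officials)
print(f"H = {h_stat:.3f}, p = {p_value:.3f}")

# At the 0.05 significance level used in the paper, p < 0.05 would indicate
# that at least one group's distribution of ratings differs from the others.
if p_value < 0.05:
    print("Perceptions differ significantly across the groups.")
```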
As suggested in the literature, content validity was established by conducting a thorough examination of previous empirical and theoretical papers relevant to the current study, and by conducting a pilot study before starting the fieldwork. Furthermore, the quality of the scales was assessed using Cronbach's alpha. Table 2 shows the results. Since all of the dimensions have a coefficient well above the 0.7 cut-off, the scales used to measure the various dimensions can be regarded as highly consistent.
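As an illustration of how such a consistency check can be computed, the sketch below implements the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), on a hypothetical respondents-by-items matrix; it is not the study data or the SPSS procedure itself.

```python
# Minimal sketch: Cronbach's alpha for the items of one dimension.
# Rows are respondents, columns are items; the ratings are hypothetical.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

ratings = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 2],
    [5, 4, 4, 4],
    [2, 3, 2, 3],
    [4, 5, 4, 4],
])
# Values above the usual 0.7 cut-off suggest an internally consistent scale.
print(round(cronbach_alpha(ratings), 3))
```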
The interview conducted with the court administrator was analyzed by looking at the opinions conveyed regarding the results obtained for each dimension of the model and the general perception of the usefulness and relevance of the SERVPERF instrument applied.

4. Main Findings

This section presents the main findings obtained from the analysis of the questionnaires, looking at the perceptions of users, magistrates, and court officials in the various dimensions of the SERVPERF instrument. Such perceptions were later discussed with the court administrator. The views of the court administrator on the matter are also reported in this section.
Table 3 summarizes the key results.
As shown in Table 3, all of the dimensions are positively evaluated by the various stakeholders, except for the tangibles one. Clearly, this dimension gets the lowest scores (for all participants, but especially for those who work in the court). This might not come as a surprise, since equipment failures and drawbacks of old (and often inadequate) buildings are often reported in the media. Moreover, these aspects have a greater impact on those who work daily in the judiciary system, jeopardizing their efficiency and comfort. Furthermore, for the tangibles, the KW test shows that the perception of internal and external users is statistically distinct, indicating that the discomfort associated with the tangibles is likely not so visible to many users who only visit the courtroom.
This finding came as a surprise to the court administrator:
“Interestingly… I was expecting the opposite [that the perceptions of external users would be lower than those of people who work here]. Usually people get used to the working conditions they have, the facilities… even if they are bad, people tend to think ‘I am used to this place’”.
The remaining dimension scores typically lie between 3.7 and 3.9, with little heterogeneity. Yet, it is interesting to observe that the perceptions of the magistrates tend to be lower than those of users and officials, potentially showing that they are more demanding.
The detailed scores obtained for each questionnaire item are provided in Appendix A (see Table A1).
Table 4, Table 5 and Table 6 highlight the items that got the highest and lowest scores for each of the stakeholders analyzed.
Looking at the items with the highest and lowest scores for each group, it becomes evident that there is a high degree of consensus regarding the main weaknesses and strengths of the services provided. In fact, unsurprisingly, tangible items are among the weakest points for all participants, notably technological equipment and signage. Concerning the items with the highest scores, there is greater variety. Yet, the professionals' knowledge and ability to convey trust in the service provided is acknowledged by internal and external stakeholders alike. Additionally, participants agree that court officials show care for the users. The main point of disagreement apparently concerns the ability of court officials to provide a quick service (a major strength in the court officials' view and a main weakness for the users). This finding is in line with the prevalent idea that justice in Portugal is slow, and that perception might be reflected in the way users assess the service provided by court officials. On the other hand, internally, by taking different constraints into account, people tend to think that the service is provided in due time. The court administrator acknowledges this perception gap, stating that:
“magistrates do not have such a positive feeling of promptness… court officials have an idea that we do better than we actually do according to the users, isn’t it?... well… they think they are always willing to help…”.
In the court administrator’s view, some findings are somehow surprising. One such unexpected result concerns the perceptions regarding the language used. Considering that court language is often regarded as complex by ordinary citizens, the administrator said:
“it is curious that users have a positive perception… that judges and court officials have a more negative perception of the language used. I think it should be the other way round… we are working to use a plainer language”.
The fact that assurance ranks high in the users’ view (3.97) was also highlighted in the interview:
“It is good to see that users have a much better perception of the court professionals’ competencies and knowledge than we usually think… there has been an effort to give people more training opportunities…”.
Overall, the court administrator showed a very positive perception of the usefulness of the SERVPERF questionnaires:
“I think that these questionnaires are interesting and relevant… I am surprised that people’s perceptions about the court are not so bad… I am very pleased to know that information and the way we deal with the users got such a positive assessment…”.
On the other hand, the administrator is aware of the need for further improvement:
“in some cases, there is a lack of self-criticism… there is no perception that is possible to do even better”.
Overall, looking at the various dimensions and associated items, the considerable degree of agreement among the various stakeholders is particularly relevant, making it easier to identify the key improvement areas to be dealt with first.
As already mentioned, and to the best of our knowledge, there are no studies reported in the literature in which the SERVPERF instrument was used to assess the quality of services in the judiciary. Our study is a novelty in service quality research in this particular context, which reinforces its originality but makes it virtually impossible to discuss its findings in the light of previous research. Furthermore, and as previously argued, the few studies conducted in Portugal on the quality of the judiciary focus mainly on the quality of judicial decisions and/or procedural efficiency. The very few that include any attempt to measure service quality do not use any kind of SERVQUAL/SERVPERF instrument. The Courts Quality Barometer (mentioned earlier) is one such case, and its results are not publicly available, which again limits the possibilities of a more extensive discussion of our findings.

5. Conclusions

The judiciary system is often regarded by citizens as very complex and opaque. The prevalent idea that the pace of justice is too slow further contributes to increasing distrust. To overcome this problem, the implementation of frameworks that encourage judiciary actors to be more user-oriented is of great relevance. Measuring service quality and acting upon the results is, in this regard, very significant.
Regarding courts as organizations that deliver services to their users is still rare. Therefore, it is not surprising that very few studies focused on quality management have been carried out in this context. The current research was, to the best of the authors' knowledge, the first attempt to assess service quality in a court based on an adapted version of the SERVPERF instrument. By collecting data simultaneously from users, magistrates, and court officials, the study makes it possible to compare their views on the services provided, which is an additional strength of the approach used.
Findings have shown that the SERVPERF instrument is indeed a valid instrument in this context. Not only do the scales exhibit high consistency, but respondents also felt at ease in answering the questionnaires, expressing a view that the instrument indeed covers a set of dimensions relevant to assess the quality of the services delivered by the court (H1 was thus validated).
Moreover, it was possible to conclude that, despite the criticisms often raised about the functioning of the courts, the perceptions of the participants are rather positive. Users also seem more satisfied than internal stakeholders. Interestingly, tangible items have consensually emerged as the main weaknesses, stressing the importance of modernizing the infrastructures in terms of buildings, furniture, and technological equipment. On the other hand, it becomes evident that people in general trust in the magistrates’ and court officials’ skills and competencies to deliver good services. As such, H2 was only partially rejected. Although user views tend to be more positive than the ones from magistrates and court officials, both groups pinpoint the same dimensions as candidates for priority improvement actions.

6. Limitations of the Study and Future Directions for Research

The main limitation of this research derives from the fact that findings are based on a single case study; as such, general claims should be made with particular care. Furthermore, it was not possible to discuss the findings in light of the results of other scholars, because there is an evident lack of research on the topic of service quality assessment in the judiciary. Yet, this research has shed some light on the potentialities and difficulties of such assessments and can be used as a foundation for further studies. In fact, the comprehensive use of such a methodology can provide important insights for policies regarding the improvement of judiciary services, as it allows the comparison of different courts, the understanding of different stakeholders' perspectives, the identification of regional imbalances, and the definition of learning processes based on existing best practices. Furthermore, the fact that the court administrator acknowledged the relevance of SERVPERF in uncovering users' and professionals' views on the quality of the court highlights the potential of this type of instrument for acquiring significant information that can then be used to support managerial decisions aimed at improving service quality in the judiciary context.
This study opens some avenues that can be explored in future research. Firstly, it would be interesting to specifically consider the views of different categories of court users by comparing, for instance, the perceptions of victims, witnesses, claimants, and defendants. Lawyers could equally be regarded as a particular category of users since they have inside knowledge on how the courts operate. In the future, it would be interesting to incorporate some new (and more specific) variables (such as the perception of the length and cost of proceedings) and see how they correlate with the service quality dimensions analyzed. Finally, to further validate the SERVPERF model, it would be useful to compare its results with those obtained from the application to the courts of other instruments and frameworks.
Overall, this study can be considered a first step towards research on service quality in the judiciary, using well-established instruments that have been developed in rather different contexts but that, if properly adapted, have considerable potential to drive improvement in the judiciary. Indeed, so far, these types of studies seem to be almost nonexistent, even though they are particularly relevant to improving the quality perceived by court users and, consequently, to enhancing citizens' trust in the judicial system (essential to the quality of democracy as a whole). The importance of promoting the measurement of user satisfaction with the judiciary has been recognized by the European Commission for the Efficiency of Justice (https://www.coe.int/en/web/cepej). Our paper contributes to increasing knowledge on a subject that is, to date, underexplored, which is especially relevant since quality, independence, and efficiency are key components of an effective and sustainable justice system.

Author Contributions

Conceptualization, P.M.eS., M.J.R., G.S., C.V.; methodology, M.J.R., G.S., C.V.; data collection, C.V.; writing—original draft preparation, P.M.eS.; writing—review and editing, P.M.eS., M.J.R., G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

Maria João Rosa would like to acknowledge the contribution of the Centre for Research in Higher Education Policies, supported by the FCT—Portuguese Foundation for Science and Technology, I.P., under project UIDB/00757/2020. Patrícia Moura e Sá would like to acknowledge the contribution of the Research Centre in Political Science (UIDB/CPO/00758/2020), University of Minho, supported by the Portuguese Foundation for Science and Technology (FCT), and the Portuguese Ministry of Education and Science through national funds, to the current research. Gonçalo Santinha would like to acknowledge the contribution of the research unit on Governance, Competitiveness and Public Policy (UIDB/04058/2020), funded by national funds through FCT—Portuguese Foundation for Science and Technology.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Questionnaire items average scores for the three groups of respondents: users, magistrates, and court officials.

Service Quality Items (SERVPERF Instrument) | Users | Magistrates | Court Officials | KW Test (p)
TANGIBLES
1.1. The court has modern equipment | 3.19 | 2.40 | 2.55 | 0.006
1.2. The buildings and rooms are visually attractive | 3.42 | 2.10 | 2.26 | 0.000
1.3. Internal signalization is good | 3.42 | 2.45 | 2.48 | 0.000
1.4. The court is accessible to disabled people | 3.63 | 2.50 | 2.29 | 0.000
1.5. The materials (technological equipment and furnishing) are visually attractive | 3.00 | 2.00 | 2.01 | 0.000
1.6. Magistrates have a good and professional appearance | 4.20 | 3.80 | 3.96 | 0.075
1.6. Court officials have a good and professional appearance | 4.10 | 3.68 | 3.86 | 0.099
RELIABILITY
2.1. Services are provided according to the law | 4.13 | 3.80 | 3.92 | 0.323
2.2. Court officials are helpful | 4.19 | 3.95 | 4.10 | 0.476
2.3. Services are performed in a correct and error-free manner | 4.03 | 3.45 | 3.61 | 0.012
2.4. The court provides the service within the deadlines established in the law | 3.63 | 3.37 | 3.71 | 0.327
RESPONSIVENESS
3.1. The (oral and written) language used by magistrates when interacting with users is plain and clear | 3.82 | 3.30 | 3.54 | 0.065
3.1. The (oral and written) language used by court officials when interacting with users is plain and clear | 3.93 | 3.60 | 3.82 | 0.216
3.2. Magistrates explain the procedures to the users | 3.81 | 3.60 | 3.47 | 0.359
3.2. Court officials explain the procedures to the users | 3.66 | 3.65 | 3.98 | 0.132
3.3. Court officials provide quick assistance to users | 3.52 | 3.68 | 4.09 | 0.004
3.4. Court officials are always keen to help users | 3.67 | 3.50 | 4.00 | 0.026
3.5. Court officials are never too busy to answer users' requests | 3.60 | 3.31 | 3.49 | 0.420
3.6. The informatics system of the court is operational | * | 3.21 | 3.33 | 0.842
ASSURANCE
4.1. Magistrates behave in a way that inspires users' trust | 4.00 | 3.80 | 3.69 | 0.175
4.1. Court officials behave in a way that inspires users' trust | 3.95 | 3.80 | 3.89 | 0.694
4.2. Users have confidence in the services provided by the court | 3.88 | * | * | *
4.2. Magistrates transmit confidence to the users | * | 3.75 | 3.71 | 0.765
4.2. Court officials transmit confidence to the users | * | 3.75 | 3.84 | 0.606
4.3. Magistrates behave in a way that is friendly to users | 4.05 | 4.15 | 3.84 | 0.175
4.3. Court officials behave in a way that is friendly to users | 4.14 | 4.15 | 4.08 | 0.855
4.4. Magistrates have the necessary skills and competencies to answer users' requests | 3.97 | 4.30 | 4.22 | 0.454
4.4. Court officials have the necessary skills and competencies to answer users' requests | 3.90 | 4.10 | 3.97 | 0.545
EMPATHY
5.1. Magistrates give individual attention to users | 3.46 | 3.70 | 3.26 | 0.263
5.1. Court officials give individual attention to users | 3.60 | 3.85 | 3.91 | 0.390
5.2. The court opening hours are convenient to users | 3.83 | 3.80 | 3.32 | 0.035
5.3. Magistrates deal with users with courtesy | 3.94 | 3.94 | 3.85 | 0.341
5.3. Court officials deal with users with courtesy | 4.08 | 4.08 | 3.95 | 0.755
5.4. Court officials show a genuine interest in helping users | 3.87 | 3.95 | 3.97 | 0.842
5.5. Magistrates understand the specific needs of users | 3.61 | 3.95 | 3.55 | 0.346
5.5. Court officials understand the specific needs of users | 3.82 | 4.00 | 3.93 | 0.664
* Question was not asked to this group of respondents.

References

  1. Engdaw, B. The impact of Quality Public Service Delivery on Customer Satisfaction in Bahir Dar City Administration: The Case of Ginbot 20 Sub-city. Int. J. Public Adm. 2020, 43, 644–654.
  2. European Commission. Quality of Public Administration: A Toolbox for Practitioners; Publications Office of the European Union: Luxembourg, 2017.
  3. Moura e Sá, P.; Albuquerque, A. Translating the EFQM model into the courts. Int. J. Qual. Serv. Sci. 2015, 7, 230–244.
  4. Aikman, A. TQM in the courts: Maybe so, maybe not. Int. J. Public Adm. 1996, 19, 1865–1890.
  5. Capaldo, G.; Constantino, N.; Pellegrino, R.; Rippa, P. The Role of Risk in Improving Goal Setting in Performance Measurement Practices within Public Sector: An Explorative Research in Courts Offices in Italy. Int. J. Public Adm. 2018, 41, 986–997.
  6. Lopes, J.; Matos, J.; Mendes, L.; Coelho, N. Manual de Gestão Judicial; Edições Almedina: Coimbra, Portugal, 2015.
  7. Gronroos, C. A Service Quality Model and Its Marketing Implications. Eur. J. Mark. 1984, 18, 36–44.
  8. Al-Enezi, A. Assessment of the Service Quality Provided by the Kuwaiti Cultural Office in Cairo: An Empirical Investigation. Int. J. Public Adm. 2012, 35, 643–655.
  9. Parasuraman, A.; Zeithaml, V.A.; Berry, L.L. A Conceptual Model of Service Quality and its Implication for Future Research (SERVQUAL). J. Mark. 1985, 49, 41–50.
  10. Parasuraman, A.; Zeithaml, V.A.; Berry, L.L. SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality. J. Retail. 1988, 64, 12–40.
  11. Cronin, J.J.; Taylor, S. Measuring Service Quality—A Reexamination and Extension. J. Mark. 1992, 56, 55–68.
  12. Amaral, D.F. Curso de Direito Administrativo, 4th ed.; Livraria Almedina: Coimbra, Portugal, 2015; Volume I.
  13. Aikman, A. Total Quality Management in the Courts: A Handbook for Judicial Policy Makers and Administrators; National Center for State Courts: Williamsburg, VA, USA, 1994.
  14. Mintzberg, H. Structure in Fives: Designing Effective Organizations; Prentice Hall: London, UK, 1983.
  15. Clarke, J.; Newman, J. The Managerial State; Sage Publications: London, UK, 1997.
  16. Maayan, E.; Ronen, B.; Coman, A. Assessing the Performance of a Court System: A Comprehensive Performance Measures Approach. Int. J. Public Adm. 2012, 35, 729–738.
  17. Hood, C. A Public Management for All Seasons? Public Adm. 1991, 69, 3–20.
  18. Pollitt, C. Justification by Works or by Faith? Evaluating the New Public Management. Evaluation 1995, 1, 133–154.
  19. Aberbach, J.D.; Christensen, T. Citizens and Consumers. Public Manag. Rev. 2005, 7, 225–246.
  20. Vries, M.; Nemec, J. Public sector reform: An overview of recent literature and research on NPM and alternative paths. Int. J. Public Sect. Manag. 2013, 26, 4–16.
  21. Albers, P. The assessment of court quality: Hype or global trend? Hague J. Rule Law 2009, 20, 53–60.
  22. International Consortium for Court Excellence. The International Framework for Court Excellence, 2nd ed.; The National Centre for Court Excellence: Williamsburg, NY, USA, 2013.
  23. Contini, F.; Carnevali, D. The quality of justice in Europe: Conflicts, dialogue and politics. In Working Paper of the Research Institute on Judicial Systems of the Italian National Research Council; Italian National Research Council: Rome, Italy, 2010.
  24. Rocha, J.O.; Araújo, J. Administrative reform in Portugal: Problems and prospects. Int. Rev. Adm. Sci. 2007, 73, 583–596.
  25. Zuniga, R.; Murillo, R. Draining the judiciary bottleneck: A quasi-experiment in improving a government service. J. Bus. Res. 2014, 67, 1267–1276.
  26. Hines, P.; Martins, A.; Beale, J. Testing the Boundaries of Lean Thinking: Observations from the Legal Public Sector. Public Money Manag. 2008, 28, 35–40.
  27. Martins, A.L.; Carvalho, J.C. The court system supply chain and its dynamics. In Proceedings of the 9th International Symposium on Logistics, Bangalore, India, 11–14 July 2004; pp. 29–33.
  28. Wisniewski, M. Using SERVQUAL to assess customer satisfaction with public sector services. Manag. Serv. Qual. 2001, 11, 380–388.
  29. Orwig, R.A.; Pearson, J.; Cochran, D.A.N. An Empirical Investigation into the Validity of SERVQUAL in the Public Sector. Public Adm. Q. 2016, 21, 54–68.
  30. Mbassi, J.C.; Mbarga, A.D.; Ndeme, R.N. Public Service Quality and Citizen-Client’s Satisfaction in Local Municipalities. J. Mark. Dev. Compet. 2019, 13, 110–123.
  31. Abdullah, F. Measuring service quality in higher education: HEdPERF versus SERVPERF. Mark. Intell. Plan. 2006, 24, 31–47.
  32. Melo, A.I.; Santinha, G.I.; Lima, R.I. Measuring the Quality of Health Services Using SERVQUAL. In Handbook of Research on Modernization and Accountability in Public Sector Management; Azevedo, G., Oliveira, J., Marques, R., Ferreira, A., Eds.; IGI Global: Hershey, PA, USA, 2018; pp. 300–318.
  33. Ladhari, R. A review of twenty years of SERVQUAL research. Int. J. Qual. Serv. Sci. 2009, 1, 172–198.
  34. Zeithaml, V.A.; Berry, L.L.; Parasuraman, A. Communication and Control Processes in the Delivery of Service Quality. J. Mark. 1988, 52, 35–48.
  35. Robledo, M.A. Measuring and managing service quality: Integrating customer expectations. Manag. Serv. Qual. Int. J. 2001, 11, 22–31.
  36. Oliver, R. A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions. J. Mark. Res. 1980, 17, 460–469.
  37. Bennington, L.; Cummane, J. Measuring service quality: A hybrid methodology. Total Qual. Manag. 1998, 9, 395–405.
  38. Akdere, M.; Top, M.; Tekingündüz, S. Examining patient perceptions of service quality in Turkish hospitals: The SERVPERF model. Total Qual. Manag. Bus. Excell. 2018, 31, 342–352.
  39. Teas, R.K. Consumer Expectations and the Measurement of Perceived Service Quality. J. Prof. Serv. Mark. 1993, 8, 33–54.
  40. Khalaf, M.A.; Khourshed, N. Performance-based service quality model in postgraduate education. Int. J. Qual. Reliab. Manag. 2017, 34, 626–648.
Table 1. Sample features and response rate.

 | Magistrates | Court Officials | Users
Administered questionnaires | 55 | 123 | n.a.
Successfully returned | 23 (response rate: 42%) | 92 (response rate: 75%) | 43
Gender
Females | 16 (70%) | 60 (66%) | 17 (40%)
Males | 7 (30%) | 31 (34%) | 26 (60%)
Education level
Basic | - | - | 5 (12%)
Secondary | - | 63 (68%) | 19 (44%)
Licentiate degree | 21 (91%) | 23 (25%) | 14 (33%)
Master degree or higher | 2 (9%) | 3 (3%) | 5 (11%)
Age
<30 years old | - | - | 7 (16%)
Between 30 and 60 years old | 23 (100%) | 63 (68%) | 28 (67%)
>60 years old | - | 5 (5%) | 3 (7%)
Employed | n.a. | n.a. | 28 (65%)
First contact | n.a. | n.a. | 15 (35%)
Role
Litigant | n.a. | n.a. | 19 (44%)
Witness | n.a. | n.a. | 7 (16%)
Other | n.a. | n.a. | 16 (37%)
Table 2. Scales consistency.

Service Quality Dimension | No. of Items | Cronbach's Alpha
Tangibles | 7 | 0.867
Reliability | 4 | 0.790
Responsiveness | 8 | 0.865
Assurance | 9 | 0.953
Empathy | 8 | 0.929
Total (questionnaire) | 36 | 0.964
Table 3. Service quality measurement results.

Service Quality Dimensions | Users | Magistrates | Officials | Global (internal) | KW Test (p)
Tangibles | 3.47 | 2.53 | 2.58 | 2.57 | 0.000
Reliability | 3.99 | 3.64 | 3.85 | 3.81 | 0.243
Responsiveness | 3.68 | 3.45 | 3.72 | 3.67 | 0.171
Assurance | 3.97 | 3.98 | 3.91 | 3.92 | 0.820
Empathy | 3.80 | 3.88 | 3.70 | 3.73 | 0.663
Table 4. Top five best and worst perceived items (users' assessment).

Top Five Item | Score | Bottom Five Item | Score
The court magistrates have a good and professional appearance (tangibles) | 4.20 | The court technological equipment and furniture are visually attractive (tangibles) | 3.00
The court officials are helpful (reliability) | 4.19 | The court has modern equipment (tangibles) | 3.19
The court officials are friendly when dealing with users (assurance) | 4.14 | The court premises are attractive (tangibles) | 3.42
Services are provided according to the law (reliability) | 4.13 | Signage in the court is good (tangibles) | 3.42
The court officials have a good and professional appearance (tangibles) | 4.10 | The court officials provide a quick service to the users (responsiveness) | 3.52
Table 5. Top five best and worst perceived items (magistrates' assessment).

Top Five Item | Score | Bottom Five Item | Score
The court magistrates have the necessary skills and competencies to answer users' demands (assurance) | 4.30 | The court technological equipment and furniture are visually attractive (tangibles) | 2.00
The court magistrates are friendly when dealing with users (assurance) | 4.15 | The court premises are attractive (tangibles) | 2.10
The court officials are friendly when dealing with users (assurance) | 4.15 | The court has modern equipment (tangibles) | 2.40
The court officials have the necessary skills and competencies to answer users' demands (assurance) | 4.10 | Signage in the court is good (tangibles) | 2.45
The court officials understand the individual needs of the users (empathy) | 4.00 | The court premises are accessible to disabled citizens (tangibles) | 2.50
Table 6. Top five best and worst perceived items (court officials' assessment).

Top Five Item | Score | Bottom Five Item | Score
The court magistrates have the necessary skills and competencies to answer users' demands (assurance) | 4.22 | The court technological equipment and furniture are visually attractive (tangibles) | 2.01
The court officials are helpful (reliability) | 4.10 | The court premises are attractive (tangibles) | 2.26
The court officials provide a quick service to the users (responsiveness) | 4.09 | The court premises are accessible to disabled citizens (tangibles) | 2.29
The court officials are friendly when dealing with users (assurance) | 4.08 | Signage in the court is good (tangibles) | 2.48
The court officials are caring and attentive to the users (empathy) | 4.02 | The court has modern equipment (tangibles) | 2.55
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
