Article

Sustainable Cities and Quality of Life: A Multi-Criteria Approach for Evaluating Perceived Satisfaction with Public Administration

1 Faculty of Computer Science, Bialystok University of Technology, Wiejska 45A, 15-351 Bialystok, Poland
2 Department of Operations Research, University of Economics in Katowice, 1 Maja 50, 40-287 Katowice, Poland
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(22), 10106; https://doi.org/10.3390/su172210106
Submission received: 21 September 2025 / Revised: 28 October 2025 / Accepted: 3 November 2025 / Published: 12 November 2025
(This article belongs to the Special Issue Quality of Life in the Context of Sustainable Development)

Abstract

This study assesses the quality of local public administration in European cities using an analytical algorithm based on the B-TOPSIS approach. It draws on the Quality of Life in European Cities survey, which includes five questions on citizens’ satisfaction with local administration, rated on a simplified four-point verbal scale with an option to skip. To process this type of group data, the study extends B-TOPSIS to handle ordinal scales, uncertainty, and missing responses. The method is applied to data from 2023 and compared with 2019 to detect temporal changes in satisfaction. The framework compensates for incomplete information, integrates a Monte Carlo-based protocol for robust results, enhances the ranking through almost first-order stochastic dominance, and supports cross-survey comparison. The results show that Zurich, Luxembourg, and Antalya rank highest in satisfaction, while Rome and Palermo rank lowest. Residents of medium-sized and very large cities report higher satisfaction, with EU and EFTA cities outperforming those in the Western Balkans. Overall, satisfaction levels have remained stable since 2019. These findings offer both methodological contributions and practical insights into governance quality and sustainability, constructing a unified performance index from dispersed survey responses.

1. Introduction

The United Nations Sustainable Development Goals (SDGs) aim to create a more inclusive, resilient, and sustainable future [1,2]. Attaining these goals is expected to bring lasting improvements in the quality of life (QoL) for both present and future generations [3,4]. Among them, Goal 11: Sustainable Cities and Communities focuses on improving urban livability through enhanced infrastructure, transportation, environmental sustainability, and governance [5,6]. Achieving these objectives requires effective governance and high-quality public administration, as they directly influence economic development, social cohesion, and public trust in institutions.
One of the most important components of urban sustainability is subjective quality of life [7], which reflects residents’ perceptions of well-being, satisfaction with public services, and trust in local governance. Monitoring SDG 11 in the EU context involves tracking progress in urban quality of life, sustainable mobility, and the reduction in negative environmental impacts [8]. Well-functioning local administration—characterized by transparency, accountability, adherence to the rule of law, and sound governance structures—plays a crucial role in shaping economic development and guiding public investments, including cohesion policy.
The concept of smart cities further underscores the growing importance of local governance. While it centers on integrating technology and data-driven solutions to improve urban life [9,10,11], technological innovation alone is not sufficient. The success of smart cities depends equally on the effectiveness of local public administration. Administrative efficiency, transparency, accessibility, and fairness shape everyday citizen experiences and foster trust in local institutions [12]. Thus, sustainable governance requires not only technological progress but also well-functioning public administration that facilitates citizen engagement, minimizes bureaucratic burdens, and promotes inclusive decision-making.
Given this context, the assessment of local public administration quality becomes an essential component in monitoring progress toward the SDGs. However, evaluating administrative performance presents a methodological challenge, as governance quality can be measured through both objective and subjective indicators. Objective indicators often rely on economic or institutional performance metrics—such as efficiency in resource allocation, fiscal stability, or the Quality of Government (QoG) indices capturing corruption control, rule of law, and government effectiveness. From the perspective of sustainable development, subjective evaluations—reflecting residents’ satisfaction with local services and their trust in public institutions—are equally important, as they capture the social dimension of governance performance. Balancing these two perspectives provides a more comprehensive understanding of administrative quality and its contribution to urban sustainability.
The relationship between governance quality and socio-economic outcomes has been widely discussed in the literature. Acemoglu et al. [13] provide both empirical and theoretical evidence that differences in economic development stem largely from institutional disparities, while Rodríguez-Pose [14] highlights the importance of institutions in regional development and their integration into policy strategies.
Within the European policy framework, the survey on the Quality of Life in European Cities offers a valuable empirical basis for assessing governance performance from a citizen-centered perspective. The two latest editions, the sixth (2023) and the fifth (2019), conducted by IPSOS for the European Commission [12,15], include five questions assessing satisfaction with local public administration, measured on a four-point verbal scale (from “very satisfied” to “very unsatisfied”), with an additional “Don’t know/No answer/Refused” option. The official Report on the Quality of Life in European Cities [12,16] simplifies responses by merging satisfaction categories and excluding uncertain responses, which, although convenient, introduces methodological limitations. Such simplifications reduce the accuracy of distribution assessments and disregard the uncertainty inherently present in survey data.
To address these challenges, composite indicators based on multi-criteria decision-making (MCDM) methods are particularly suitable, as they aggregate diverse information and capture the complexity of social phenomena such as sustainability [17,18,19]. Previous studies have applied a range of MCDM methods to evaluate SDG 11 in the EU context, including Hellwig’s method [20], DARIA-TOPSIS (Data Variability Assessment Technique for Order of Preference by Similarity to Ideal Solution) [21], Temporal VIKOR [22], modified TOPSIS and BIPOLAR methods [23], the MIDIA method [24], and the European Development Scoreboard [25]. However, despite their robustness, these methods are primarily designed for continuous, quantitative data and, in our analytical context, tend to rely on objective indicators that emphasize measurable environmental, infrastructural, and economic aspects. They overlook residents’ subjective perceptions, which are crucial for understanding urban livability and governance performance but are ordinal in nature and often include uncertain or missing responses. While effective for statistical datasets, these approaches (or any classic MCDM techniques) are unsuitable for survey-based Quality of Life (QoL) data, since they make it impossible to construct the standard decision matrix on which those methods operate. Additionally, the necessity of incorporating a wide range of opinions from numerous respondents in the context of ordinal and incomplete data poses further challenges for applying classical MCDM methods. Their hybridization with consensus-seeking procedures typical of group decision-making becomes problematic, as traditional consensus cost analysis cannot be directly applied to individual ordinal data. This limitation complicates efforts to reconcile diverse subjective evaluations and to achieve coherent aggregation of group preferences within the survey-based analytical framework.
To accommodate these properties, researchers, such as Jefmański [26], Jefmański et al. [27], Kusterka-Jefmańska [28,29], and Roszkowska [30], have developed methods based on intuitionistic fuzzy sets, including the Intuitionistic Fuzzy Synthetic Measure (IFSM) and Intuitionistic Fuzzy TOPSIS [31]. These techniques transform ordinal responses into intuitionistic fuzzy sets, allowing for more nuanced assessments than traditional crisp measures. While they account for uncertainty as a distinct category, they do not differentiate between varying levels of satisfaction, such as “very satisfied” versus “rather satisfied” or “very unsatisfied” versus “rather unsatisfied”. Nevertheless, these methods rely on subjective decisions concerning the choice of distance measures (e.g., Euclidean or Hamming), parameter settings (two- or three-parameter models), and the construction of reference objects, which may negatively affect reproducibility and comparability.
More recently, Roszkowska and Wachowicz [32] developed the B-TOPSIS (Belief Structure Technique for Order of Preference by Similarity to Ideal Solution) method, based on Belief Structure, to analyze survey data. This method, grounded in Dempster–Shafer theory, improves the distinction between linguistic-scale evaluations, assesses response distributions, addresses missing values, and allows for sensitivity analysis to validate the stability of rankings. It integrates uncertain responses directly into the analysis, making it particularly useful for capturing response distributions and addressing the inherent uncertainties in survey data. This framework has been applied to the 2023 Quality of Life in European Cities to evaluate residents’ satisfaction with various urban factors, including public transport, healthcare, cultural facilities, green spaces, education, air quality, noise levels, and cleanliness. Despite successful implementation, B-TOPSIS revealed some shortcomings, particularly concerning the diverse options for handling missing data, issues with analyzing various utility functions used to evaluate survey answers, difficulties in deriving consistent and easily interpretable results using classical sensitivity analysis, and a lack of solutions for longitudinal analysis (or temporal comparison of results across different survey waves).
Summarizing the above discussion, we can clearly state that existing practices for analyzing Quality of Life in European Cities survey data—and their inherent limitations—reveal a distinct methodological gap: the need for robust approaches that produce interpretable synthetic measures capable of fully exploiting survey data, preserving the ordinal structure of responses, integrating uncertainty and missing data, and enabling meaningful temporal comparisons across cities and over time. Therefore, this study aims to extend the B-TOPSIS approach to assess subjective satisfaction with local administrations in selected European cities, using data from the most recent 2023 Quality of Life in European Cities survey. The goal of this work is, hence, twofold: methodological and empirical. The methodological goal is to propose an original analytical approach, stemming from the answers to questions addressing the limitations of the original B-TOPSIS approach raised in [32]:
  • MQ1: How can missing survey data be incorporated into B-TOPSIS to better handle the resulting uncertainty?
  • MQ2: How can the B-TOPSIS framework be extended to ensure robust and reliable comparison of simulated results?
  • MQ3: How can the B-TOPSIS result be organized to enable cross-survey and temporal comparisons of subjective assessments while maintaining methodological consistency across different data collections?
In this approach, we incorporate the proportional recalculation of missing answers within a Monte Carlo–based robustness protocol to evaluate the stability of ratings and the resulting rankings and enrich the evaluation stage through Almost First-Order Stochastic Dominance (AFSD), which enables a more accurate estimation of the performance distributions of the analyzed cities while preserving the ability to identify broader dominance relationships among them. Although the use of fixed ideal and anti-ideal solutions, independent of survey data, is a feature of the original B-TOPSIS that makes the ratings it produces easily comparable across different time points, the present study extends the analytical capacity of B-TOPSIS to longitudinal data analysis by introducing mechanisms for classifying alternatives and assessing changes in their performance not solely based on updated rankings and associated scores, but rather through their relationships to externally defined quality category boundaries. The proposed framework is then applied to compare changes in citizens’ evaluations of local administration between the two most recent survey editions, conducted in 2019 and 2023, which use the same set of questions.
Against this backdrop, the empirical part of the study focuses on applying this enhanced framework to survey data on citizens’ perceptions of local administration performance and formulating the descriptive conclusions. The aim is to reveal cross-city and temporal differences in satisfaction and to identify patterns that may reflect broader trends in urban governance quality. Accordingly, the following research questions have been formulated.
  • Q1: How do European cities rank in terms of perceived satisfaction with local public administration in 2023?
  • Q2: What are the key differences in perceived satisfaction with public administration across European cities in 2023?
  • Q3: Does city size influence perceived satisfaction with public administration in European cities in 2023?
  • Q4: Are there inter-regional variations in perceived satisfaction with public administration across European cities in 2023?
  • Q5: Are there intra-country differences in perceived satisfaction with public administration across European cities in 2023?
  • Q6: How has the level of satisfaction with local administration changed in the surveyed cities between 2019 and 2023?
By proposing an extension of the classic B-TOPSIS method, this paper makes the following contributions:
  • It introduces a novel robustness protocol to the B-TOPSIS method based on Monte Carlo simulation and enriches the rating and ranking stage through the use of Almost First-Order Stochastic Dominance (AFSD), which summarizes the results obtained under an assumed set of reliable ordinal value functions.
  • It offers an additional analytical phase that allows for the verification of relationships between ratings obtained from different surveys, enabling longitudinal (temporal) analyses of changes in satisfaction with local administration.
  • It applies the extended procedure to evaluate the quality of local public administration in European cities by aggregating indicators of efficiency, transparency, cost, digital accessibility, and corruption.
  • It empirically tests the procedure by ranking European cities, analyzing differences in satisfaction across city sizes, regions, and countries, and comparing changes between two time periods.
The remainder of this paper is structured as follows. Section 2 presents the data sources related to the Quality of Life in European Cities survey and the methodological framework, including a detailed description of the adopted and extended B-TOPSIS method. Section 3 reports and discusses the empirical results and addresses the research questions. Finally, Section 4 concludes with a summary of key findings and directions for future research.

2. Materials and Methods

2.1. Materials

The Quality of Life in European Cities survey, prepared by IPSOS for the European Commission’s Directorate-General for Regional and Urban Policy, is designed to assess various aspects of urban life across Europe, and, to date, six editions have been conducted. The most recent two editions, the fifth (2019) and sixth (2023), specifically included five questions focused on the quality of local public administration, enabling comparative analyses of citizens’ satisfaction with municipal services over time.
The sixth edition of the Report on the Quality of Life in European Cities [12] was conducted between January and April 2023. The study encompassed 83 cities across the European Union (EU) and European Free Trade Association (EFTA), the United Kingdom, the Western Balkans, and Turkey. The accompanying Survey technical report [16] provides an in-depth overview of the applied methodology, covering questionnaire design, sampling, data collection and processing, weighting, and quality assurance procedures. According to this technical documentation, the sampling framework was designed to achieve both geographical balance and comparability across Europe. All national capitals of the participating countries were included (except Switzerland’s), and in larger countries, one to six additional cities were selected to reflect regional diversity. The final selection balanced territorial coverage, city size and function, and continuity with previous survey waves (2015 and 2019). Although not every European country participated, the chosen sample provides a coherent and representative portrait of urban life across Europe.
Respondents were asked to evaluate their satisfaction with different aspects of urban life, such as inclusivity, social isolation, employment opportunities, safety, housing, environmental conditions, transportation, cultural offerings, city services, levels of corruption, and local administration. In each city, at least 839 residents were interviewed, resulting in a total of 71,153 completed interviews [12,16].
The sociodemographic characteristics of respondents are presented in Table 1.
The group includes slightly more females (52.88%) than males (47.12%). The largest age groups are 25–34 (17.55%) and 35–44 (16.63%), while the smallest are 15–19 (5.04%) and 75+ (7.96%). Most respondents have completed upper secondary education (37.97%), followed by lower secondary (15.22%) and a bachelor’s degree (14.75%), with few having less than primary (0.30%) or doctoral education (1.65%).
In the area of satisfaction with local administration, respondents were asked to share their views on five aspects. The question posed was: “To what extent are you satisfied or dissatisfied with each of the following in your city?” Participants could choose from the following response options: 1—Strongly Disagree; 2—Somewhat Disagree; 3—Somewhat Agree; 4—Strongly Agree; 99—Don’t Know/No Answer/Refused. The five criteria (C1–C5) corresponding to questions Q1–Q5 are shown in Table 2. They reflect all aspects of local public administration assessed in the survey and serve to measure citizens’ satisfaction across the various dimensions included in the Quality of Life in European Cities survey.
Tirana was excluded from further analysis due to missing data for criterion C5.
The criteria C1–C5 are crucial factors in understanding the quality of local governance and its impact on citizens’ lives. Efficiency refers to the time required to resolve administrative requests, with the research highlighting its importance in ensuring timely and effective government services. Efficient processes help reduce waiting times, improve citizen satisfaction, and enhance the overall image of local government. Technology plays a key role in improving efficiency, with digital platforms allowing citizens to submit requests and track their status online [33].
Transparency relates to the clarity and simplicity of administrative procedures, as well as the openness of local institutions in their operations. Transparency is a fundamental principle of good governance and plays a key role in building trust in local administration. Its absence can lead to diminished citizens’ trust in the administration, as shown by Kim and Lee [35], who examine how electronic participation (e-participation) can influence trust in local government. Their study reveals that satisfaction with e-participation applications and government responsiveness positively impacts perceptions of decision-making influence, government transparency, and overall trust in local government.
Del Sol [36] examines the economic, social, and institutional determinants of local government transparency in Spain. The study expands the traditional fiscal focus by including corporate, social, contracting, and planning transparency indexes. Results show that large municipalities and left-wing mayors report better transparency, while the worst results are seen in provincial capitals, tourist cities, and mayors with an absolute majority.
Costs refer to the assessment of fees charged by local authorities for services. This factor is crucial in evaluating the affordability of public services such as waste collection, parking permits, public transportation, or administrative processing of permits and licenses. The assessment of these fees is essential for determining the affordability of public services and understanding their impact on citizens’ satisfaction [37]. High fees or unexpected costs can lead to dissatisfaction among residents, especially in communities with lower-income populations. If public services are perceived as too expensive, it can undermine trust in local authorities and create a sense of unfairness, particularly if citizens feel they are paying for services that do not meet their expectations or needs. On the other hand, reasonable and transparent pricing for services contributes to higher satisfaction, as residents feel they are receiving value for their money.
Digital accessibility emphasizes the ease of accessing public services online, which improves the quality of life for city residents by simplifying administrative procedures and reducing bureaucratic burdens. The implementation of e-Administration facilitates easy access to documents, eliminates the need for in-person visits, and generates cost reductions through more efficient technologies and streamlined processes [46]. Frías-Aceituno et al. [38], examining e-government development across 102 Spanish municipalities with a focus on transparency, online services, and citizen participation, show that municipality size and political ideology are key drivers in advancing digital accessibility. The COVID-19 pandemic has accelerated the transition to digital interactions, proving that ICT-based communication with public administration is not only effective but also crucial for modern governance [39].
Trust in local administration significantly impacts life satisfaction. Citizens who trust local institutions, such as municipal authorities and social organizations, are more likely to engage in political and community activities, view the government as legitimate, and contribute to social cohesion [34,45,47]. Dinesen and Sønderskov [45] review the literature on the relationship between the quality of government and generalized social trust, finding strong support for a positive link between institutional quality and social trust, particularly at the individual level. Habibov’s [40] study further emphasizes that institutional trust, along with economic perceptions, is a key predictor of life satisfaction, remaining stable even during the 2008 Global Financial Crisis. His findings suggest that policymakers should focus on reforms to strengthen trust in institutions. Additionally, the study by de Vries and Sobis [41] explores the hypothesis that residents of capital cities have less trust in local administration than those in non-capital cities, with their analysis confirming this relationship even after controlling for factors such as public issue satisfaction, region, and poverty. Ziller and Andréß [34] examine the link between the quality, efficiency, and fairness of local public services and social trust, using multilevel models on data from the Quality of Life in European Cities project. Their study reveals that improvements in local public service quality are strongly associated with higher social trust, especially in areas like sports and leisure facilities and the condition of public spaces, streets, and buildings.
Corruption poses a significant threat to institutional trust, fostering skepticism toward public authorities and reducing civic engagement. Moreover, it obstructs the effective functioning of local governments by hindering fair and efficient resource allocation and limiting economic growth [42,43,44]. When corruption is prevalent, citizens may perceive public institutions as self-serving rather than working for the common good. This perception weakens confidence in local authorities and discourages participation in political and community activities. Additionally, corruption can lead to inefficiencies in public service delivery, reinforcing negative public sentiment and further eroding institutional trust.
Figure 1 presents a part of the Quality of Life in European Cities aggregated survey data assignment sheet available on the website Inforegio-Quality of Life in European Cities [16] for question Q1.
It is worth noting that for questions Q1–Q4, the response options can be interpreted as the “satisfaction scale” in the following way: 1—Strongly Disagree corresponds to “very unsatisfied”, 2—Somewhat Disagree to “rather unsatisfied”, 3—Somewhat Agree to “rather satisfied”, and 4—Strongly Agree to “very satisfied”. However, for question Q5, the interpretation is reversed: 1—Strongly Disagree corresponds to “very satisfied”, 2—Somewhat Disagree to “rather satisfied”, 3—Somewhat Agree to “rather unsatisfied”, and 4—Strongly Agree to “very unsatisfied”. An example of transformation response options for the “satisfaction scale” for Zürich is presented in Table 3.
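To illustrate this recoding, the small helper below (a Python sketch with hypothetical function and label names, not the survey’s official codebook) maps the agree/disagree codes onto the satisfaction scale and applies the reversal for Q5:

```python
# Hypothetical recoding helper: map the 1-4 agree/disagree codes onto the
# satisfaction scale described above, reversing the scale for question Q5.
SCALE = {1: "very unsatisfied", 2: "rather unsatisfied",
         3: "rather satisfied", 4: "very satisfied"}

def to_satisfaction(question, code):
    """Return the satisfaction label for a response code (99 stays 'unknown')."""
    if code == 99:
        return "unknown"
    code = 5 - code if question == "Q5" else code   # reverse only for Q5
    return SCALE[code]

print(to_satisfaction("Q1", 4), "|", to_satisfaction("Q5", 4))
# -> very satisfied | very unsatisfied
```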
According to Table 3, in Zürich, 2.83% of respondents reported being “very unsatisfied” with the efficiency of local administration (C1), 11.75% were “rather unsatisfied,” 42.79% were “rather satisfied,” 27.74% were “very satisfied,” and 14.89% either responded “don’t know,” provided no answer, or refused to respond.
Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 present the distribution of responses for the questions Q1–Q5 using boxplots. These visualizations offer a comprehensive summary of important statistics, including means, quartiles, and any potential outliers.
On average, most respondents rated local administration efficiency as “rather satisfied” (31.6%), followed by “rather unsatisfied” (22.7%), with similar shares for “very satisfied” (17.3%) and “very unsatisfied” (18.4%). About 10% chose “Don’t know/No Answer/Refused” (category 99). Satisfaction was highest in Antalya (35.9%) and lowest in Skopje (52.2% dissatisfied), while the share of category 99 responses ranged from 1.4% in Lefkosia to 26.1% in Groningen.
The distribution of responses for the clarity and simplicity of administrative procedures shows notable differences from the previous question on administrative efficiency. Most respondents were “rather satisfied” (34.21%), followed by “rather unsatisfied” (24.65%), while “very satisfied” (19.51%) and “very unsatisfied” (15.12%) were similar. Only 6.52% selected “Don’t know/No Answer/Refused,” slightly lower than before. Satisfaction varied by location, highest in Antalya (47.22%) and lowest in Belgrade (34.91% dissatisfied), with category 99 ranging from 0.46% in Brussels to 17.68% in Tallinn.
The distribution of responses regarding local administration costs mirrors that of previous questions on administrative efficiency and procedure transparency. Most respondents were “rather satisfied” (36.14%), followed by “rather unsatisfied” (22.79%), while “very satisfied” (17.12%) and “very unsatisfied” (17.09%) were nearly equal. Category 99 (“Don’t know/No Answer/Refused”) accounted for 6.86%. Satisfaction varied by location, highest in Cluj-Napoca (37.67% “very satisfied”) and lowest in Heraklion (47.79% “very dissatisfied”), with category 99 ranging from 1.34% in Antwerp to 20.54% in Tallinn.
Responses on digital accessibility show notable differences compared to previous aspects of local administration. Most respondents were “rather satisfied” (38.18%), with a relatively high share of “very satisfied” (29.43%). Dissatisfaction was lower, with 14.86% “rather unsatisfied” and 8.52% “very unsatisfied.” Category 99 accounted for 9.01%. Satisfaction varied by location, highest in Ankara (49.14% “very satisfied”) and lowest in Diyarbakır (20.75% “very dissatisfied”), with category 99 ranging from 2.30% in Luxembourg to 25.48% in Piatra Neamţ.
Responses on trust in local administration differ from previous questions on efficiency, clarity, and costs. Most respondents were “rather unsatisfied” (22.92%) or “rather satisfied” (20.08%), with “very unsatisfied” (18.81%) slightly exceeding “very satisfied” (17.12%). Category 99 (“Don’t know/No Answer/Refused”) was notably higher at 21.06%. Satisfaction was highest in Copenhagen (51.75% “very satisfied”) and lowest in Skopje (58.73% dissatisfied), with category 99 ranging from 4.14% in Lefkosia to 39.16% in Leipzig, reflecting significant variation in perceptions of trust and corruption.

2.2. Methods

The B-TOPSIS procedure, grounded in the Belief Structure (BS) and adapted for the evaluation of survey data, involves several key steps for assessing alternatives in a multi-criteria decision-making context [32]. In this study, B-TOPSIS is employed as part of a broader analytical protocol specifically designed to address our research questions. This protocol introduces additional steps 8–10 to the standard B-TOPSIS framework. These steps extend the original framework by incorporating an additional stage aimed at enhancing robustness through the replication of analyses for various value functions within a Monte Carlo simulation framework, and using the simulated data to rank and cluster alternatives. This protocol also modifies or extends certain existing steps. Specifically, step 2 uses an approach based on proportions of potentials to suggest the treatment of missing data. Furthermore, step 4 provides recommendations regarding how weights should be assigned to items when calculating the multi-criteria distances.
The proposed protocol consists of the following ten steps:
  • Step 1. Formulate the problem and construct the Belief Decision Matrix: Define the alternatives and criteria, then create a decision matrix using the BS model to represent respondent evaluations.
For the problem under consideration, define a set of objects $O = \{O_1, O_2, \ldots, O_m\}$ evaluated by respondents against criteria $C = \{C_1, C_2, \ldots, C_n\}$. Assume the respondents use an ordinal scale with $N$ categories $H_1, H_2, \ldots, H_N$, where $H_k$ is preferred to $H_{k+1}$, with an additional option for “Don’t know/No Answer/Refused”. The evaluation of object $O_i$ for criterion $C_j$ follows a Belief Structure (BS) model:
$BS(S_{ij}) = \{(H_k;\ \beta_{ij}^{k});\ k = 1, \ldots, N\}.$   (1)
The BS model, consisting of evaluation grades, can be simplified to a vector:
$S_{ij} = (\beta_{ij}^{1}, \beta_{ij}^{2}, \ldots, \beta_{ij}^{N}).$   (2)
The belief degree $\beta_{ij}^{k} = n_{ij}^{k} / n_{ij}$ expresses the proportion of respondents selecting the category $H_k$, with $n_{ij}^{k}$ being the number of such respondents and $n_{ij}$ the total number who answered for $O_i$ under $C_j$. The sum of belief degrees satisfies $\sum_{k=1}^{N} \beta_{ij}^{k} \le 1$, where $\beta_{ij}^{H} = 1 - \sum_{k=1}^{N} \beta_{ij}^{k}$ represents the degree of ignorance, capturing the proportion of respondents who selected “Don’t know/No Answer/Refused”.
The Belief Decision Matrix is defined as follows:
$BM = [S_{ij}],$   (3)
where each entry $S_{ij}$ represents the belief structure values for the $j$-th criterion pertaining to the $i$-th object, $i = 1, \ldots, m$, $j = 1, 2, \ldots, n$.
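To make this construction concrete, the short sketch below (written in Python, with purely hypothetical response counts rather than actual survey figures) derives one belief vector and its degree of ignorance from raw counts:

```python
import numpy as np

def belief_structure(counts, n_unknown):
    """Return the belief vector (beta^1, ..., beta^N) and the degree of
    ignorance beta^H for one object-criterion pair (Formulas (1)-(2)).

    counts    -- number of respondents choosing each grade H_1..H_N
    n_unknown -- number of "Don't know/No Answer/Refused" responses
    """
    counts = np.asarray(counts, dtype=float)
    n_total = counts.sum() + n_unknown      # all respondents for O_i under C_j
    beta = counts / n_total                 # belief degrees beta_ij^k
    beta_H = 1.0 - beta.sum()               # degree of ignorance
    return beta, beta_H

# Hypothetical counts for one city and one criterion (H1..H4, then code 99).
beta, beta_H = belief_structure([277, 428, 118, 28], 149)
print(np.round(beta, 4), round(beta_H, 4))  # -> [0.277 0.428 0.118 0.028] 0.149
```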
  • Step 2. Normalize the BS model and build the Normalized Belief Decision Matrix.
If the BS model contains missing responses (i.e., is incomplete), one common approach is to normalize it using the center of gravity method, defined by the following formula:
$BSC(S_{ij}) = \{(H_k,\ \beta_{ij}^{k} + \beta_{ij}^{H}/N):\ k = 1, \ldots, N\},$   (4)
where $H_k$ is the evaluation grade, $\beta_{ij}^{k}$ is the belief degree for the $k$-th evaluation grade, and $\beta_{ij}^{H} = 1 - \sum_{k=1}^{N} \beta_{ij}^{k}$ is the degree of ignorance, representing the uncertainty or incomplete data in the BS model, $i = 1, \ldots, m$, $j = 1, 2, \ldots, n$, $k = 1, \ldots, N$.
However, the center of gravity method distributes the level of ignorance in a very specific way—equally among all possible answers—according to a primitive fairness principle prescribing that any potential surplus value should be shared in equal proportions (see [48]). This approach completely ignores the information contained in the distribution of actual responses forming the current BS model (in our case, those provided by respondents using the four categorizing grades H1–H4). The presence of these responses suggests how the missing ones might be formed, which corresponds to the concept in multi-person decision theory of distributing surplus according to the proportion of potential (POP) rather than equally [49]. Therefore, in our approach, we apply the POP method to construct the normalized BS model, which will take the following form:
$\bar{S}_{ij} = \left(\beta_{ij}^{1} + \dfrac{\beta_{ij}^{1}\cdot\beta_{ij}^{H}}{\sum_{k=1}^{N}\beta_{ij}^{k}},\ \beta_{ij}^{2} + \dfrac{\beta_{ij}^{2}\cdot\beta_{ij}^{H}}{\sum_{k=1}^{N}\beta_{ij}^{k}},\ \ldots,\ \beta_{ij}^{N} + \dfrac{\beta_{ij}^{N}\cdot\beta_{ij}^{H}}{\sum_{k=1}^{N}\beta_{ij}^{k}}\right).$   (5)
The Normalized Belief Decision Matrix is defined as follows:
$NBM = [\bar{S}_{ij}].$   (6)
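A minimal sketch of the POP redistribution in Formula (5), assuming a belief vector and degree of ignorance produced as in the previous sketch:

```python
import numpy as np

def pop_normalize(beta, beta_H):
    """Redistribute the degree of ignorance proportionally to the observed
    belief degrees (proportion-of-potential rule, Formula (5))."""
    beta = np.asarray(beta, dtype=float)
    if beta.sum() == 0.0:                    # no informative answers at all
        return np.full_like(beta, 1.0 / len(beta))
    return beta + beta * beta_H / beta.sum()

# Continuing the hypothetical example from the previous sketch.
print(np.round(pop_normalize([0.277, 0.428, 0.118, 0.028], 0.149), 4))
# -> [0.3255 0.5029 0.1387 0.0329]
```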
  • Step 3. Assign utility to evaluation grades and calculate the similarity between grades to form a Similarity Matrix.
Let $U(H_k)$ represent the utility assigned to the evaluation grade $H_k$, where $0 \le U(H_k) \le 1$ for $k = 1, \ldots, N$. This function quantifies the satisfaction level associated with each evaluation grade. Generally, the highest grade is given a utility of $U(H_1) = 1$, while the lowest is set to $U(H_N) = 0$, with utilities decreasing such that $U(H_{k+1}) < U(H_k)$ for $k = 1, 2, \ldots, N-1$.
The Similarity Matrix $\tilde{S}$ is expressed as:
$\tilde{S} = [\tilde{s}_{ij}],\quad i, j = 1, \ldots, N,$   (7)
where $\tilde{s}_{ij}$ denotes the similarity between grades $H_i$ and $H_j$. It is calculated using the formula:
$\tilde{s}_{ij}(H_i, H_j) = 1 - \left| U(H_i) - U(H_j) \right|.$   (8)
In our protocol, we start the analysis by defining a single reference utility function and then check the sensitivity of the results with respect to small changes in its form, according to the procedure described in Step 8.
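As an illustration of this step, the sketch below derives the grade similarity matrix of Formula (8) from a chosen utility vector (an assumption-free computation once the utilities are fixed):

```python
import numpy as np

def similarity_matrix(utilities):
    """Grade similarity s(H_i, H_j) = 1 - |U(H_i) - U(H_j)| (Formula (8))."""
    u = np.asarray(utilities, dtype=float)
    return 1.0 - np.abs(u[:, None] - u[None, :])

# Reference utilities used as the starting point in this study (see Step 8).
S_tilde = similarity_matrix([1.0, 0.7, 0.4, 0.0])
print(S_tilde)
```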
  • Step 4. Assign relative importance (weight) to each criterion using subjective or objective methods, ensuring the weights sum to one.
In socio-economic multi-criteria analysis, the assignment of weights is usually guided by factors like economic significance, data reliability, and availability. Nevertheless, no universally recognized method exists for this process [18,50,51]. If the data are processed according to some alternative approaches, for instance Exploratory Factor Analysis (EFA), then the weights can be obtained from the statistical description provided by the analytic engine, often in the form of factor loadings [52]. In the case of survey data, however, using such an approach is not recommended, as the limited rating scale (or poor scale granularity) does not permit the use of the typical mathematical operations required by this technique. What is more, surveys—and the Quality of Life in European Cities survey is no exception—usually do not provide an empirical basis, such as expert assessments or respondent prioritization, for differentiating the importance of the criteria. When there is no consensus on weighting methods, when decision-makers hold differing views, or when statistical and empirical data are insufficient to justify differentiated weights, equal weighting is often applied [50,51]. Thus, in our protocol, we recommend using equal weights, though differences in weights and their impact on the final results may be analyzed within step 8 as an additional element of the robustness analysis, similarly to the procedure described for various utility functions.
  • Step 5. Determine the best and worst possible outcomes for each criterion, representing the Positive Ideal Belief Solution (PIBS) and Negative Ideal Belief Solution (NIBS).
The Positive Ideal Belief Solution (PIBS) $A^{+}$ is defined as:
$A^{+} = \{S_1^{+}, \ldots, S_n^{+}\},$   (9)
where $S_j^{+}$ is the best belief structure model for the $j$-th criterion, $j = 1, 2, \ldots, n$.
The Negative Ideal Belief Solution (NIBS) $A^{-}$ is defined as:
$A^{-} = \{S_1^{-}, \ldots, S_n^{-}\},$   (10)
where $S_j^{-}$ is the worst belief structure model for the $j$-th criterion, $j = 1, 2, \ldots, n$.
These solutions are obtained by choosing the belief structure models that represent the most and least favorable outcomes for each criterion: $S_j^{+}$ gives the best possible result for the $j$-th criterion, while $S_j^{-}$ gives the worst.
The best belief structure model $S_j^{+}$ is represented by the vector:
$S_j^{+} = (1, 0, \ldots, 0),\quad j = 1, 2, \ldots, n.$   (11)
The worst belief structure model $S_j^{-}$ is represented by the vector:
$S_j^{-} = (0, \ldots, 0, 1),\quad j = 1, 2, \ldots, n.$   (12)
  • Step 6. Calculate Separation Measures for each alternative represented by its distance from the PIBS and NIBS using belief distance measures.
For each object $O_i$, determine the separation measures $D_i^{+}$ and $D_i^{-}$, which represent how far the object is from the Positive Ideal Belief Solution (PIBS) $A^{+}$ and the Negative Ideal Belief Solution (NIBS) $A^{-}$, respectively.
The separation measure from PIBS, $D_i^{+}$, is given by:
$D_i^{+} = D^{+}(O_i, A^{+}) = \sqrt{\sum_{j=1}^{n} w_j \left[ d_{BS}\!\left(\bar{S}_{ij}, S_j^{+}\right) \right]^{2}}.$   (13)
The separation measure from NIBS, $D_i^{-}$, is given by:
$D_i^{-} = D^{-}(O_i, A^{-}) = \sqrt{\sum_{j=1}^{n} w_j \left[ d_{BS}\!\left(\bar{S}_{ij}, S_j^{-}\right) \right]^{2}}.$   (14)
In Equations (13) and (14), $w_j$ represents the weight of criterion $C_j$, $\tilde{S} = [\tilde{s}_{ij}]$ is the similarity matrix that captures the differences among the evaluations, and $d_{BS}(\bar{S}_{ij}, S_j^{+})$ ($d_{BS}(\bar{S}_{ij}, S_j^{-})$) measures the belief-based distance between the normalized belief structure model $\bar{S}_{ij}$ (Formula (5)) and the best (worst) belief structure model $S_j^{+}$ ($S_j^{-}$) (Formulas (11) and (12)), respectively:
$d_{BS}\!\left(\bar{S}_{ij}, S_j^{+}\right) = \left[ \tfrac{1}{2}\left(\bar{S}_{ij} - S_j^{+}\right)\tilde{S}\left(\bar{S}_{ij} - S_j^{+}\right)^{T} \right]^{1/2},$   (15)
$d_{BS}\!\left(\bar{S}_{ij}, S_j^{-}\right) = \left[ \tfrac{1}{2}\left(\bar{S}_{ij} - S_j^{-}\right)\tilde{S}\left(\bar{S}_{ij} - S_j^{-}\right)^{T} \right]^{1/2}.$   (16)
  • Step 7. Calculate Relative Closeness R i of each alternative to the ideal solution.
The calculation of $R_i$ is expressed by the following formula:
$R_i = \dfrac{D_i^{-}}{D_i^{-} + D_i^{+}},$   (17)
where $D_i^{-}$ represents the separation measure of the alternative $O_i$ from the NIBS, and $D_i^{+}$ represents the separation measure of the alternative $O_i$ from the PIBS, with $i = 1, 2, \ldots, m$.
The higher the value, the closer the alternative is to the best outcome.
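The sketch below combines Steps 5 through 7 under the assumptions stated above (fixed ideal and anti-ideal profiles, the belief distance of Formulas (15) and (16), and equal criterion weights); it is an illustrative implementation with hypothetical input values, not the authors’ code:

```python
import numpy as np

def belief_distance(a, b, S_tilde):
    """Belief distance of Formulas (15)-(16): sqrt(0.5 * (a-b) S (a-b)^T)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return np.sqrt(0.5 * d @ S_tilde @ d)

def relative_closeness(nbm_row, S_tilde, weights):
    """B-TOPSIS rating R_i of one object (Steps 5-7).

    nbm_row -- normalized belief vectors of the object, one per criterion
    weights -- criterion weights summing to one (equal weights recommended)
    """
    N = len(nbm_row[0])
    pibs = np.eye(N)[0]                     # S_j^+ = (1, 0, ..., 0)
    nibs = np.eye(N)[-1]                    # S_j^- = (0, ..., 0, 1)
    d_plus = np.sqrt(sum(w * belief_distance(s, pibs, S_tilde) ** 2
                         for w, s in zip(weights, nbm_row)))
    d_minus = np.sqrt(sum(w * belief_distance(s, nibs, S_tilde) ** 2
                          for w, s in zip(weights, nbm_row)))
    return d_minus / (d_minus + d_plus)

# Hypothetical usage: one city evaluated on five criteria with equal weights.
u = [1.0, 0.7, 0.4, 0.0]
S_tilde = 1.0 - np.abs(np.subtract.outer(u, u))
row = [np.array([0.33, 0.50, 0.14, 0.03])] * 5
print(round(relative_closeness(row, S_tilde, [0.2] * 5), 3))
```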
  • Step 8. Determine robust results using Monte-Carlo simulation.
In most applications of the B-TOPSIS method to survey analysis, certain model components cannot be unambiguously specified. Therefore, a reliable analysis of the problem requires examining how small changes in the adopted model parameters affect the resulting outcomes. To this end, Steps 3 through 7 should be repeated while incorporating different factors of variation or uncertainty in the B-TOPSIS parameters, simulating their changes across successive replications.
In our protocol, the utility function adopted in Step 3 assumes a typical linear form for evaluating satisfaction levels associated with the four labels of the linguistic rating scale (specifically, according to general recommendations [53], it assigns the following utilities: $U(H_1) = 1$, $U(H_2) = 0.7$, $U(H_3) = 0.4$, $U(H_4) = 0$). This assumption is inherently arbitrary, and the stability of the results obtained under it should be subject to a standard sensitivity analysis. Accordingly, in the replication series, we repeat Steps 3 through 7, simulating small variations in the utility function, allowing the utilities for $H_2$ and $H_3$ to vary from the assumed reference levels.
It should be noted that similar robustness analyses may also be carried out for other parameters, such as item weights, to account for differences in their influence on the overall score. As a result, we obtain sets of scores for each alternative:
$\mathcal{R}_i = \{R_i^{r}\},$   (18)
where $R_i^{r}$ is the B-TOPSIS rating determined for the alternative $O_i$ in the $r$-th replication.
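A sketch of the Monte Carlo replication loop of Step 8, assuming the similarity_matrix and relative_closeness helpers from the earlier sketches are in scope and using the ±0.1 utility ranges applied later in Section 3.1; the function names and data layout are illustrative:

```python
import numpy as np

rng = np.random.default_rng(20231)   # seed chosen only for reproducibility

def simulate_ratings(nbm, weights, n_reps=5000):
    """Repeat Steps 3-7 of the protocol for randomly perturbed utilities
    (Step 8); assumes similarity_matrix and relative_closeness from the
    earlier sketches.

    nbm -- dict {city: list of normalized belief vectors, one per criterion}
    Returns {city: array of n_reps simulated B-TOPSIS ratings}.
    """
    ratings = {city: np.empty(n_reps) for city in nbm}
    for r in range(n_reps):
        u2 = rng.uniform(0.6, 0.8)   # U(H2) varied around the reference 0.7
        u3 = rng.uniform(0.3, 0.5)   # U(H3) varied around the reference 0.4
        S_tilde = similarity_matrix([1.0, u2, u3, 0.0])
        for city, row in nbm.items():
            ratings[city][r] = relative_closeness(row, S_tilde, weights)
    return ratings
```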
  • Step 9. Rank the objects using simulated data.
The B-TOPSIS routine proposed in [32], as an element of optional sensitivity analysis, relies on the notion of first-order stochastic dominance (FSD) to compare the sets of performances $\mathcal{R}_i$ of all objects. However, FSD is a highly restrictive criterion, and even a few outcomes reflecting extreme parameter values may prevent the verification of FSD between particular objects, leading to them being considered indifferent. Therefore, in this protocol, we propose to enrich the analysis of the results obtained in Step 7 by incorporating the concept of almost first-order stochastic dominance (AFSD), as defined by Levy [54], to determine the final rank order of objects using the distributions of B-TOPSIS values from $\mathcal{R}_i$.
AFSD may be verified for various levels of the violation area it permits, starting from 0 (where it is equivalent to FSD) and ending with a level slightly below 0.5. The latter signifies that alternative A may be considered as dominating B despite only a minimal number of observations, or even a single observation, showing A’s superiority over B. An AFSD level equal to 0 is extremely restrictive, while a level close to 0.5 is extremely liberal. In rare situations, the cognitive capabilities of decision-makers may allow them to express directly the required ratio to be used in the AFSD analysis. Yet, in the vast majority of situations, this level will be set by the analysts themselves. Given the lack of a declared ratio from the decision-maker, we suggest using a ratio halfway between the two extreme viewpoints, i.e., equal to 0.25. However, dynamic analyses may be conducted to observe how the outperformance relations among alternatives change as the ratio increases or decreases. This may be included as part of an extended robustness analysis.
With Levy’s notion of AFSD, it still remains to establish a procedure for deriving the ranking from the dominance matrix. There exist various approaches for deriving a final ranking from relational data. One such approach is the concept introduced by French [55], who proposed the agreeing ordinal value function $h$. One of its variants can be employed to determine ranking positions based on the identified relations of dominance and indifference [56], i.e.,
$h(O_i) = \sum_{O_j \in O \setminus \{O_i\}} \mathbf{1}\left(O_i \succ O_j \ \vee\ O_i \sim O_j\right),$   (19)
where
$\mathbf{1}(\cdot) = \begin{cases} 1 & \text{if the condition } (\cdot) \text{ holds} \\ 0 & \text{otherwise.} \end{cases}$   (20)
The values of the function h account for the number of situations in which a given alternative dominates others, or, if not, the number of situations in which it is not dominated by others. By ordering the values of the function h in descending order, we obtain the ranking of objects.
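One possible way to operationalize this step (a sketch under our own assumptions, not the authors’ exact implementation) is to estimate the violation area of first-order dominance from the empirical CDFs of the simulated ratings and then score each city with the agreeing ordinal value function h:

```python
import numpy as np

def violation_ratio(a, b, grid_size=512):
    """Share of the area between the empirical CDFs of two rating samples
    in which first-order dominance of `a` over `b` is violated (F_a > F_b)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    F_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    F_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    dx = grid[1] - grid[0]
    diff = F_a - F_b
    total = np.abs(diff).sum() * dx
    if total == 0.0:                         # identical distributions
        return 0.0
    return np.clip(diff, 0.0, None).sum() * dx / total

def afsd_ranking(ratings, eps=0.25):
    """Order cities by French's agreeing ordinal value function h, counting the
    rivals a city AFSD(eps)-dominates or, failing that, is not dominated by."""
    cities = list(ratings)
    h = {}
    for i in cities:
        score = 0
        for j in cities:
            if i == j:
                continue
            i_dom_j = violation_ratio(ratings[i], ratings[j]) <= eps
            j_dom_i = violation_ratio(ratings[j], ratings[i]) <= eps
            if i_dom_j or not j_dom_i:       # dominates, or is not dominated
                score += 1
        h[i] = score
    return sorted(cities, key=h.get, reverse=True)
```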
  • Step 10. Verify the relationships between ratings obtained from different surveys.
From the perspective of temporal comparisons of results (e.g., results from two successive surveys conducted in different years), it may also be useful to consider the outcomes at a lower level of evaluation granularity. Therefore, we propose a straightforward classification of objects into five predefined quality classes, specified by threshold intervals of rating scores: [0.800, 1]—Very High; [0.600, 0.800)—High; [0.400, 0.600)—Medium; [0.200, 0.400)—Low, and [0, 0.200)—Very Low.
The division into classes of equal ranges of rating values is a commonly used approach in the literature, with five or ten classes often adopted [57]. Alternatively, unequal intervals may be applied, based on the mean and standard deviation, as used in Hellwig’s approach [58] or other procedures [59]. In the present study, the division into five equal classes was adopted arbitrarily, yet with clear justification. Equal classes are intuitive and easy to interpret, allowing consistent comparison of results across years. The choice of five classes represents a compromise between measurement precision and simplicity of interpretation.
The verification of class membership is carried out by applying the same principle of testing whether AFSD (with an assumed threshold ratio of the violation area) holds between the distributions of object evaluations derived from the sets $\mathcal{R}_i$ and the degenerate distributions concentrated at the threshold values defining the class boundaries. For example, the limiting profile describing the boundary between the Very High and High classes can be described by a degenerate distribution such as P(X = 0.8) = 1. Other limiting profiles are defined accordingly.
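The class-membership test can be sketched by comparing each city’s simulated rating distribution against degenerate distributions placed at the class thresholds; the helper below reuses violation_ratio from the previous sketch and is illustrative only:

```python
import numpy as np

# Lower limiting profiles of the quality classes used in Step 10.
CLASS_BOUNDS = [("Very High", 0.8), ("High", 0.6), ("Medium", 0.4), ("Low", 0.2)]

def quality_class(city_ratings, eps=0.25, n_profile=1000):
    """Assign a city to the highest class whose limiting profile (a degenerate
    distribution at the class boundary) it AFSD(eps)-dominates."""
    for label, bound in CLASS_BOUNDS:
        profile = np.full(n_profile, bound)      # P(X = bound) = 1
        if violation_ratio(city_ratings, profile) <= eps:
            return label
    return "Very Low"
```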
The methodological framework described above forms the basis for the empirical analysis presented in Section 3. The B-TOPSIS procedure, combined with Monte Carlo simulation, is applied to compute aggregated satisfaction scores for each city. These scores are then used to construct comparative rankings and to analyze variations across city size, region, and country (Section 3.2). The AFSD method is subsequently employed to assess changes in city classifications between 2019 and 2023 (Section 3.3). This structure ensures a clear link between the methodological steps and the results reported.

3. Results and Discussion

In this section, we applied the B-TOPSIS method to rank cities based on residents’ satisfaction with local administration, using data collected through the 2023 Quality of Life in European Cities survey [16].

3.1. B-TOPSIS Standard Routine for Alternatives Evaluation and Cross-Survey Comparisons

We applied our extended B-TOPSIS protocol to analyze the survey data from the sixth edition of the Report on the Quality of Life in European Cities, covering the five questions that assess satisfaction with local administration (see Section 2.1, Table 2). To construct the Belief Decision Matrix (BM), we used four distinct grades: H1 (very satisfied), H2 (rather satisfied), H3 (rather unsatisfied), and H4 (very unsatisfied). Additionally, we account for the degree of ignorance ($\beta^H$), which reflects responses in the “Don’t know/No Answer/Refused” category, as shown in Formula (5).
For example, for Zürich and criterion C1, the belief structure is as follows: BS = {(very satisfied, 0.2774), (rather satisfied, 0.4279), (rather unsatisfied, 0.1175), (very unsatisfied, 0.0284)} (see Table 3). The degree of ignorance is calculated as $\beta^H = 1 - (0.2774 + 0.4279 + 0.1175 + 0.0284) = 0.1489$. Therefore, using Formula (5), which implements the POP idea, we redistribute this degree of ignorance proportionally across all other evaluations, obtaining the belief structure for Zürich represented as the belief vector $\bar{S}_{Zürich,1} = (0.3259, 0.5027, 0.1380, 0.0334)$. This process is repeated for all cities and results in the Normalized Belief Decision Matrix (NBM).
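As a quick numerical check (a sketch reusing the POP redistribution from Section 2.2, with the rounded shares reported in Table 3), the belief vector above can be reproduced as follows:

```python
import numpy as np

# Rounded belief degrees for Zürich on C1 (H1..H4), as reported in Table 3.
beta_zurich_c1 = np.array([0.2774, 0.4279, 0.1175, 0.0284])
beta_H = 1.0 - beta_zurich_c1.sum()                      # degree of ignorance
pop = beta_zurich_c1 + beta_zurich_c1 * beta_H / beta_zurich_c1.sum()
print(np.round(pop, 4))          # matches (0.3259, 0.5027, 0.1380, 0.0334)
```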
We defined the utility values for each satisfaction level as resembling the typical linear utility function:
$U(H_1) = 1,\ U(H_2) = 0.7,\ U(H_3) = 0.4,\ U(H_4) = 0.$
Using these utility values, the similarity matrix is generated according to Formulas (7) and (8). The resulting similarity matrix takes the following form:
$\tilde{S} = \begin{pmatrix} 1 & 0.7 & 0.4 & 0 \\ 0.7 & 1 & 0.7 & 0.3 \\ 0.4 & 0.7 & 1 & 0.6 \\ 0 & 0.3 & 0.6 & 1 \end{pmatrix}.$
We determined the Positive Ideal Belief Solution (PIBS) and the Negative Ideal Belief Solution (NIBS) according to Formulas (11) and (12). They were represented as $S_j^{+} = (1, 0, \ldots, 0)$ and $S_j^{-} = (0, \ldots, 0, 1)$ for $j = 1, 2, \ldots, n$, respectively. Then, using the recommended equal weights for items, we calculated the distances from both the PIBS and NIBS using belief distance measures, as defined by Formulas (13)–(16). These calculations indicate how close each city’s satisfaction levels are to the ideal solutions, providing a quantitative assessment of their overall performance (see columns 6–7, Table 4). The relative closeness of each city to the PIBS was then derived from these two distance measures. To account for the uncertainty in the shape of the scoring function, the calculations were repeated in a series of 5000 replications, each assuming feasible variations of ±0.1 in the initial utilities of the two intermediate grades $H_2$ and $H_3$, i.e., $U(H_2) \in [0.6, 0.8]$ and $U(H_3) \in [0.3, 0.5]$. In this way, we analyzed, with an adequate level of precision, the entire space of scoring functions that respondents might employ when interpreting the linguistic scales. From the simulations, we obtain the sets $\mathcal{R}_i$. The average B-TOPSIS scores for each city, calculated from the corresponding $\mathcal{R}_i$, together with their standard deviations, are presented in Table 4 (columns 8 and 9).
Finally, we determined the rank order of alternatives based on the distributions of their $R_i^{r}$ values. To construct the ranking, we employed the notion of AFSD(0) = FSD and applied French’s aggregation procedure (see Table 4, column 10). This rank order is also illustrated using a Hasse diagram in Figure 7a (where city numbers are used to enhance readability). As shown, numerous ties occurred, which prevented us from clearly distinguishing some alternatives as superior to others. For example, alternatives 4 and 45 (Antalya and Luxembourg) both occupy the second position in the ranking, while alternatives 3, 31, and 76 (Ankara, Groningen, and Valletta) share the third position. Importantly, thanks to the simulation results, we recognize that small differences in average ratings—differences that could otherwise be used to establish a complete ordering of cities—are not meaningful within the framework of FSD. For instance, Antalya achieved an average score of 0.650, while Luxembourg scored 0.658, but the difference is insignificant from the perspective of stochastic dominance.
To address the aforementioned ties, we employed the concept of AFSD(0.25). We calculated the violation area ratio for each alternative (The data file containing the values of the dominance violation areas for cities in this survey, as well as those from the 2019 study (compared in Section 3.2), can be downloaded from the following link: https://docs.google.com/spreadsheets/d/1ui5bqV8OIPZLpLQzQ4341JkDGsUBj8MP/edit?usp=sharing&ouid=113670550800015859092&rtpof=true&sd=true (accessed on 20 October 2025)) and constructed the AFSD-based ranking (Table 4, column 11; Figure 7b).
The rankings presented in Table 4 provide a direct answer to research question Q1.

3.2. Cross-City and Cross-Country Comparisons (Q2–Q6)

Below, we will focus on answering the remaining research questions. To make interpretation more transparent, some comparisons between defined groups of cities or regions are performed using the average B-TOPSIS ratings obtained from the simulation. Yet, it must be noted that the entire analysis can also be performed using the FSD- and AFSD-based analysis, analogously to what we present in Section 3.1.
Now, we focus on Q2, exploring the key differences, and their scale, in perceived satisfaction with public administration across European cities. From Table 4, it is evident that the level of satisfaction with local administration, as measured by the B-TOPSIS method, shows significant variation across cities. The B-TOPSIS scores ranged from 0.396 for Roma to 0.678 for Zürich; at the same time, no city performed consistently better or worse than all others across every criterion.
Among the 82 cities surveyed, Zürich occupies the last (82nd) place for the percentage of respondents who selected “very unsatisfied” in relation to Efficiency (2.84%), Transparency (3.65%), Digital Accessibility (0.73%), and Trust (3.56%), and the 81st place for Costs (2.36%). In addition, Zürich placed 4th for the percentage of citizens who were “very satisfied” with Digital Accessibility (45.12%). Luxembourg closely follows, ranking second with a B-TOPSIS rating of 0.662. Among the surveyed cities, Luxembourg ranked 82nd in terms of the percentage of respondents who selected “very unsatisfied” regarding Costs (2.26%), 81st for Transparency (4.14%), and 80th for Efficiency (6.06%) and Digital Accessibility (2.74%). It secured 2nd place for the percentage of citizens “very satisfied” with Costs (36.7%) and 3rd for Efficiency (32.06%). Antalya ranks third with a B-TOPSIS score of 0.659. Among the surveyed cities, Antalya had the highest proportion of citizens who were “very satisfied” with Efficiency (35.87%) and Transparency (47.22%), and it secured 2nd place for the percentage of citizens “very satisfied” with Digital Accessibility (48.21%).
At the bottom of the ranking is Roma, with a B-TOPSIS score of 0.396. Roma placed 2nd for the highest percentage of respondents who were “very unsatisfied” with Efficiency (49.43%) and Transparency (33.24%). Roma also ranked 80th for the percentage of citizens “very satisfied” with Transparency (5.48%), Costs (4.26%), and Digital Accessibility (8.21%). Palermo ranks 81st, with a B-TOPSIS score of 0.402. Among the surveyed cities, it holds the first position for the highest percentage of respondents who were “rather unsatisfied” with Transparency (44.28%), Costs (44.63%), and Digital Accessibility (31.24%), and the second position for Efficiency (37.87%) and Trust (43.21%).
We will address research question Q3 by exploring whether city size influences perceived satisfaction with public administration in European cities. This will be performed by comparing the descriptive statistics for B-TOPSIS across cities grouped by population size. The results, presented in Figure 8, confirm the variation in satisfaction with local administration based on city size.
Cities with populations between 250,000 and 500,000 (0.578) and those with more than 5,000,000 inhabitants (0.572) report, on average, slightly higher levels of satisfaction with local administration. The lowest satisfaction levels are observed in cities with populations between 500,000 and 1,000,000 (0.541) and between 1,000,000 and 5,000,000 inhabitants (0.546). Moreover, outliers can be observed for Skopje from class 2. Medium-sized cities may achieve higher satisfaction due to sufficient administrative resources and lower bureaucratic complexity, supporting responsive, citizen-centered services. Megacities face coordination and scale challenges, which may reduce perceived efficiency and trust [60].
We will address question Q4 by presenting the descriptive statistics for B-TOPSIS across different regions in Figure 9. This comparison highlights the differences between regions in terms of the level of perceived satisfaction with local administration as reported by citizens in those regions.
Among the regions, Northern MS cities report the highest satisfaction with local administration (0.605), followed by Western MS (0.591) and Eastern MS (0.543); Southern MS cities (0.501) show the lowest satisfaction among MS regions. Cities outside the EU display a more distinct gap: EFTA cities report an average satisfaction level of 0.592, cities in the Western Balkans average 0.429, while cities in the United Kingdom and Türkiye report satisfaction levels of 0.586. The results show significant differences in citizens’ satisfaction with local public administration across European regions. For example, cities in Northern MS countries such as Stockholm and Helsinki demonstrate higher satisfaction levels compared to Southern European cities like Rome or Palermo. Moreover, outliers can be observed for cities from Eastern MS (Zagreb, Riga).
This pattern indicates that regions with higher citizen satisfaction tend to be more economically developed, highlighting a strong link between perceived administrative quality and regional wealth. These findings are consistent with the study [61], which demonstrates that stronger regional governance, captured through citizen perceptions of corruption, impartiality, and public service quality, significantly enhances GDP per capita growth.
These findings also align with previous research [43,44], emphasizing that regional variations stem from institutional structures and historical trajectories. Cultural, historical, and political contexts shape public perceptions: regions with decentralized governance or strong civic engagement often show higher trust and satisfaction. Disparities cannot be explained solely by current administrative performance; long-standing formal and informal frameworks are also crucial [62].
We now address Q5, examining whether perceived satisfaction with public administration differs among cities within the same country.
The most significant intra-country disparities are observed in Turkey, where Antalya (3rd) and Diyarbakir (52nd) show a considerable gap in average B-TOPSIS ratings of 0.147. Italy follows closely, with a difference of 0.144 between Bologna (45th) and Rome (67th). Romania also exhibits notable variation, as Cluj-Napoca (18th) outperforms Bucharest (54th) by 0.096 points. In France, the contrast between Rennes (9th) and Marseille (48th) results in a difference of 0.086. Germany also demonstrates a substantial gap, with Munich (21st) and Berlin (51st) showing a score difference of 0.074, while in Switzerland, Zürich (1st) and Geneva (10th) differ by 0.058. In the United Kingdom, Cardiff (12th) ranks significantly higher than London (35th), with a gap of 0.057. Smaller yet still relevant differences are found in the Netherlands, where Groningen (4th) leads Rotterdam (20th) by 0.052 points, and in Spain, where Málaga (37th) and Oviedo (53rd) show a 0.041-point gap.
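The gaps reported above follow from taking, for each country, the difference between its best- and worst-rated cities. The sketch below (Python with pandas; hypothetical variable names and a reduced city sample) illustrates the computation using the average B-TOPSIS scores from Table 4.

```python
import pandas as pd

# Reduced, illustrative extract of Table 4: city, country code, average B-TOPSIS score.
df = pd.DataFrame({
    "city": ["Antalya", "Diyarbakir", "Bologna", "Roma", "Cluj-Napoca", "Bucharest"],
    "country": ["TR", "TR", "IT", "IT", "RO", "RO"],
    "btopsis": [0.659, 0.512, 0.540, 0.396, 0.596, 0.500],
})

# Intra-country disparity: best-rated minus worst-rated city per country,
# sorted from the largest gap down (TR: 0.147, IT: 0.144, RO: 0.096).
gaps = (df.groupby("country")["btopsis"]
          .agg(lambda s: s.max() - s.min())
          .sort_values(ascending=False))
print(gaps.round(3))
```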
Such intra-country differences can often be explained by local governance, administrative capacity, resource allocation, and service efficiency [63,64]. Cities with strong institutions, effective management, and proactive policies generate higher citizen satisfaction, even within the same country. Conversely, cities with inefficiencies, corruption, or uneven service delivery rank lower. These results illustrate how subjective satisfaction reflects deeper structural and institutional differences across Europe.
Finally, the study examined how satisfaction with local administration has changed in the surveyed cities between 2019 and 2023 (Q6). This analysis provides insights into shifts in public perception over time and helps identify cities where governance performance has improved or declined. The fifth edition of the survey [15], using data from 2019, was the first to include a dedicated set of five questions on the quality of local public administration. As a result, a direct comparison is currently possible only between 2019 and 2023.
Table 5 below presents the results of the AFSD(0.25) analysis conducted on the simulated sets of B-TOPSIS scores $R_i^{2019}$ and $R_i^{2023}$, obtained from participants’ responses to the surveys conducted in 2019 and 2023, respectively. These scores are compared to the limiting profiles of all five quality classes. Surprisingly, all cities were classified into only three of the five possible quality classes. Changes in class assignment are indicated in the table using green and blue colors, which represent improvements and deteriorations in evaluations between the 2019 and 2023 surveys, respectively.
Most of the cities did not change their position with respect to the limiting profiles of the quality classes, remaining assigned to the same class. Interestingly, only two cities improved their classification—moving from the medium to the high class—namely Ankara and Strasbourg. For example, Ankara improved its position, likely reflecting recent local administrative reforms, digitalization initiatives, and efforts to enhance efficiency and transparency [65]. Similarly, Strasbourg advanced in the rankings, possibly due to targeted investments in e-governance and public service modernization.
In contrast, ten cities experienced a decline in their classification. Notably, the majority of them are located in Western Europe and include British and Dutch cities such as London, Manchester, Glasgow, the Tyneside conurbation, Rotterdam, and Amsterdam. The declines in perceived satisfaction in UK cities may be related to the administrative and regulatory adjustments following Brexit [66] and disruptions caused by the COVID-19 pandemic [67]. Conversely, cities like Zürich, Luxembourg, and Copenhagen maintained consistently high satisfaction levels. Among the downgraded cities were also Miskolc (Hungary), Munich (Germany), Bordeaux (France), and Dublin (Ireland). Two Italian cities, Palermo and Rome, which had been classified as “poor” in terms of local administration quality in 2019, did not improve their position. The decline in the ratings of cities such as Miskolc, Munich, Bordeaux, and Dublin may result from a decrease in administrative efficiency and a limited pace of progress in the digitalization of public services, which aligns with broader observations on the uneven adoption of e-governance solutions across Europe [68]. In some cases, budgetary constraints and citizens’ growing expectations regarding service quality in the aftermath of the COVID-19 pandemic may also have contributed to the decline in ratings [69]. Palermo and Rome, in turn, have long struggled with structural problems such as bureaucracy, insufficient transparency of procedures, and low organizational efficiency, issues repeatedly highlighted in the literature on Italian local governance [70]. Enduring institutional deficits, combined with a slow pace of administrative reform and limited coordination between levels of government, continue to hinder improvements in their performance.
This classification makes it possible to capture significant structural changes between the two time points; analyzing average B-TOPSIS values and rank positions alone would not provide results at the required level of granularity. When comparing average scores between the surveys, the data show that 60 cities experienced a decline in their mean B-TOPSIS rating, while 22 cities improved. Paradoxically, both Palermo and Rome improved their average scores, by 0.0004 and 0.017 rating points, respectively, yet did not change their rank positions (the last two) or their class assignment. Strasbourg and Ankara also improved their average scores, by 0.069 and 0.009 points, respectively; in their case, the improvement was substantial enough both to advance them in the ranking (by 38 and 12 positions, respectively) and to promote them to a higher quality class. Conversely, all ten cities that dropped in class also recorded a decline in their average B-TOPSIS score, as expected. However, two of them, Amsterdam and Bordeaux, nevertheless rose in the rankings, each by two positions: from 24th to 22nd and from 21st to 19th, respectively. These two examples illustrate the interpretational issues that may arise when relying solely on average ratings and rankings rather than on classification into a predefined number of quality classes. It is worth noting that, if a higher level of granularity is desired, the number of classes can be increased accordingly.
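The divergence between rank movement and class movement can be flagged programmatically. The sketch below (Python with pandas; the table layout and the coding 1 = high, 2 = medium, 3 = poor are illustrative, with the rank and class values for Amsterdam and Bordeaux taken from the discussion above) lists cities whose rank and class changed in opposite directions.

```python
import pandas as pd

# Illustrative comparison table for two surveys: rank and quality class per city
# (class coding: 1 = high, 2 = medium, 3 = poor; values follow the text above).
cmp = pd.DataFrame({
    "city": ["Amsterdam", "Bordeaux"],
    "rank_2019": [24, 21], "rank_2023": [22, 19],
    "class_2019": [1, 1], "class_2023": [2, 2],
})

# A lower rank number and a lower class number both mean a better position.
cmp["rank_improved"] = cmp["rank_2023"] < cmp["rank_2019"]
cmp["class_improved"] = cmp["class_2023"] < cmp["class_2019"]

# Cities whose rank and class moved in opposite directions, i.e., the
# interpretational pitfall discussed above.
print(cmp.loc[cmp["rank_improved"] != cmp["class_improved"], "city"].tolist())
```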
Overall, changes in citizen satisfaction are influenced by both short-term interventions (digitalization, reforms) and long-term institutional quality. The findings highlight the importance of contextual factors in shaping perceptions. Top-performing cities (Zürich, Luxembourg) demonstrate best practices in efficiency, transparency, digital accessibility, and trust, providing benchmarks for other municipalities [43]. Linking satisfaction changes to institutional and socio-political contexts underscores the practical relevance of the results for evidence-based policy.

3.3. Links to Quality of Life

In our previous study [32], residents’ life satisfaction across 82 European cities was assessed using data from the Quality of Life in European Cities 2023 survey. The evaluation employed the classic B-TOPSIS method, integrating ten core dimensions of urban well-being, including public transport, healthcare, cultural facilities, green spaces, education, air quality, noise levels, and cleanliness.
A simple comparison between satisfaction with local administration (see Table 4) and satisfaction with living in the city (see [32], Table 5), both measured using the Belief Structure TOPSIS approach, revealed a strong positive correlation (r = 0.75). This indicates that higher satisfaction with local administration generally coincides with greater overall life satisfaction in the city. In the context of our research, survey results capturing subjective opinions within the social sciences, this is a notably high correlation value. The coefficient of determination for the linear regression model linking these two variables equals $R^2 = 0.562$, which means that more than 56% of the variability in perceived quality of life can be explained by satisfaction ratings with local administration. An increase of one rating point in the evaluation of local administration is associated with an increase of 0.698 rating points in the evaluation of quality of life. Nevertheless, a considerable share of unexplained variance remains, suggesting the presence of other determinants that should be explored to develop a more comprehensive model of these dependencies.
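The reported quantities follow from an ordinary Pearson correlation and a simple linear regression of the two city-level B-TOPSIS indices. The sketch below (Python with NumPy; the x values echo a few administration scores from Table 4, while the y values are placeholders rather than the study’s life-satisfaction data) shows how r, R², and the slope are obtained.

```python
import numpy as np

# x: satisfaction with local administration; y: overall life satisfaction.
# Both lie on the B-TOPSIS [0, 1] scale; the y values here are placeholders.
x = np.array([0.678, 0.645, 0.644, 0.594, 0.402, 0.396])
y = np.array([0.700, 0.690, 0.520, 0.630, 0.460, 0.480])

r = np.corrcoef(x, y)[0, 1]               # Pearson correlation coefficient
slope, intercept = np.polyfit(x, y, 1)    # least-squares fit y = slope * x + intercept
r_squared = r ** 2                        # coefficient of determination

print(f"r = {r:.3f}, R^2 = {r_squared:.3f}, slope = {slope:.3f}")
```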
Cities such as Zürich, Groningen, and Aalborg achieved top positions in both dimensions, while Palermo, Napoli, and Skopje ranked lowest, demonstrating overall consistency across the two measures. However, several exceptions illustrate the complexity of urban satisfaction (see Figure 10). Among cities with high administrative satisfaction (e.g., Copenhagen, Valletta, Groningen, Antalya, Luxembourg), life satisfaction varied considerably—from 0.530 in Valletta to 0.694 in Groningen—showing that effective governance does not automatically ensure a higher quality of life. Conversely, Aalborg and Białystok displayed only moderate satisfaction with local administration but relatively high overall life satisfaction, suggesting that other factors may significantly influence residents’ well-being. Similarly, cities with low administrative satisfaction (e.g., Athens, Heraklion, Podgorica, Skopje, Roma, Palermo) exhibited diverse life satisfaction levels, ranging from 0.438 in Skopje to 0.536 in Podgorica.
When the analysis was reversed, cities with high life satisfaction (e.g., Zürich, Genève, Aalborg, Copenhagen, Graz, Cardiff) showed substantial variation in administrative ratings, from 0.615 in Cardiff to 0.678 in Zürich. Likewise, cities with lower life satisfaction (e.g., Palermo, Napoli, Skopje, Heraklion, Ankara) presented differing perceptions of governance.
Overall, these findings indicate that while good governance contributes to higher urban well-being, it is only one of several interrelated factors [71,72,73]. This conclusion aligns with earlier evidence from Węziak-Białowolska [71], who demonstrated that institutional aspects of city life, such as the trustworthiness and efficiency of public administration, are determinants of life satisfaction in European cities. More recent studies have expanded on this relationship, showing that the effects of governance are mediated and context-dependent. For instance, Nicolás-Martínez et al. [72] found a positive effect of perceived urban quality (PUQ) and social trust and security (STS) on the life satisfaction (LS) of European citizens, with STS mediating the relationship between PUQ and LS, while Pazos-García et al. [73] demonstrated that a city’s performance, as reflected by its employment ratio, is largely determined by good governance resources such as e-government, transparency, and reputation, and by the overall quality of life (QoL).

4. Conclusions and Future Research

This study presented a novel, enhanced B-TOPSIS-based protocol for the evaluation of survey data and demonstrated its application to the subjectively assessed quality of local public administration in European cities. This approach allowed us to focus on residents’ perceptions using data from the 2019 and 2023 Quality of Life in European Cities surveys, enabling a comparative analysis. To handle ordinal survey data and response uncertainties effectively, we adapted the classic B-TOPSIS method, redesigning it into a protocol that produces robust results in the presence of missing or uncertain data and allows for cross-survey comparisons. We refined the B-TOPSIS procedure by specifying concrete solutions for each step, tailored to survey data. In particular, we introduced a proportional adjustment for missing responses based on the current distribution of answers and provided justification for the use of fixed weights for survey questions. The protocol also incorporates steps to ensure robust results when the evaluation function assigning utility to responses is uncertain. This was achieved through Monte Carlo simulations with replications, exploring different plausible forms of the utility function within decision-maker-defined bounds. Next, we implemented the AFSD approach to analyze the distributions of cities’ ratings, enabling cross-city comparisons and ranking construction. We further introduced a clustering mechanism to group cities into fixed clusters, improving control over performance changes across surveys. This methodological framework directly addresses the research questions MQ1–MQ3 outlined in the Introduction, which arose from earlier experiences with classic B-TOPSIS in quality-of-life analyses and its observed limitations [33].
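To make the missing-response step concrete, the sketch below (Python with NumPy; a simplified reading of the proportional adjustment described above, using the Zürich percentages for criterion C1 from Table 3; the exact formula is specified in Section 2.2) redistributes the “Don’t know/No answer” share across the four substantive categories in proportion to their observed frequencies.

```python
import numpy as np

# Observed shares for C1 in Zürich (Table 3): very unsatisfied, rather unsatisfied,
# rather satisfied, very satisfied, plus the missing-response share (in %).
observed = np.array([2.83, 11.75, 42.79, 27.74])
missing = 14.89

# Proportional redistribution: each category receives a part of the missing share
# proportional to its observed frequency, so the adjusted shares sum to ~100%.
adjusted = observed + missing * observed / observed.sum()
print(adjusted.round(2), round(adjusted.sum(), 2))
```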
From the above discussion, the methodological advancements introduced by our enhanced protocol become evident when compared to existing solutions. Relative to the original B-TOPSIS [32], the proposed framework introduces additional, explicit mechanisms for (1) addressing uncertainty associated with the scoring functions used to evaluate survey responses, and (2) performing longitudinal analyses of temporal changes. In contrast to other approaches applied to survey-based QoL data, our method allows the use of multi-item survey information without the substantial information loss that often results from the preprocessing procedures required by alternative techniques. Unlike previous analyses [12,27,31], which typically concentrated on individual survey items or simplified data by collapsing response categories, the enhanced framework integrates multiple dimensions into a single composite indicator while preserving ordinal-scale information and explicitly accounting for uncertainty. Moreover, unlike multi-criteria decision-making approaches based on intuitionistic fuzzy sets [27,31], it does not require selecting among multiple distance measures or constructing a reference pattern, which can be challenging for researchers. Consequently, our approach enables a more holistic evaluation of QoL performance, facilitates both temporal and cross-city comparisons, and mitigates the information loss inherent in traditional methodologies.
Applying this enhanced B-TOPSIS protocol to the survey data allowed us to answer empirical questions Q1–Q6. The results indicate significant disparities in governance quality across Europe. Zurich, Luxembourg, and Antalya emerged as the cities with the highest levels of resident satisfaction, whereas Roma and Palermo ranked lowest. City size also appears to influence public perception—residents in mid-sized cities (250,000–500,000) and large metropolitan areas (over 5 million) report higher satisfaction levels than those in other urban categories. Additionally, the study highlights a regional divide, with EU and EFTA cities generally exhibiting higher governance satisfaction compared to Western Balkan cities, where administrative challenges remain more pronounced.
The B-TOPSIS method demonstrated how a more detailed comparison of cities can be conducted using the full set of simulated scores rather than a single utility function, which would flatten the nuanced between-city differences arising from the assumed uncertainty in defining the utility function. This was most evident when analyzing changes in perceived satisfaction with local administration between 2019 and 2023. The analysis revealed significant improvements in cities such as Ankara and Strasbourg, while substantial declines were observed in UK cities like Manchester, Glasgow, and London, highlighting broader regional trends in urban governance across Europe. However, relying solely on average B-TOPSIS rankings can be misleading; for example, Amsterdam and Bordeaux improved their rankings but did not retain their previous quality class. These observations underscore the importance of applying our enhanced protocol in studies addressing temporal changes.
The practical significance of these findings lies in their direct applicability for urban policymakers and local authorities. By providing a multidimensional and uncertainty-aware assessment of citizens’ satisfaction, the extended B-TOPSIS framework supports targeted improvements in administrative services, evidence-based decision-making, and monitoring of changes over time. It highlights cities that consistently rank highest in satisfaction, offering benchmarks for effective administrative models, citizen-centered service design, and participatory governance [74]. These insights can inform strategies to enhance efficiency, transparency, digital accessibility, cost fairness, and trust, directly supporting sustainable urban development and SDG 11. Moreover, the framework contributes to the broader discussion on measuring and improving urban governance quality through data-driven tools and can be integrated into urban observatories or sustainability dashboards.
As a technical solution, our enhanced analytical protocol requires explicit recommendations for practitioners regarding its setup and application. The complete procedure, described step-by-step in Section 2.2, provides detailed justification for the use of proportional imputation of missing responses and equal weighting of survey items. However, some tuning flexibility remains in Steps 8–10.
Step 8 requires the definition of an evaluation function. If the decision-maker (DM) possesses approximate knowledge of this function, we recommend using it explicitly while allowing for small feasible deviations in the assessments of verbal labels, of approximately ±10%. If the DM’s uncertainty about the specific form of the function is greater, these deviations should be proportionally increased. In cases where no prior knowledge about the function’s shape is available, a simple linear form can be applied as a default (this was the setting used in our analyses).
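The sketch below (Python with NumPy) illustrates one way such uncertainty can be operationalized, assuming a linear base function for the four verbal labels and ±10% perturbations of the label values; the sampling scheme is a simplified stand-in for the Monte Carlo step of the protocol, not its exact implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Default linear evaluation of the four verbal labels, from "very unsatisfied"
# to "very satisfied", on the [0, 1] scale.
base_utilities = np.array([0.0, 1/3, 2/3, 1.0])

def sample_utility(deviation=0.10):
    """Draw one plausible evaluation function: label values are perturbed by up to
    +/- deviation, the endpoints stay anchored, and monotonicity is enforced."""
    perturbed = base_utilities * (1 + rng.uniform(-deviation, deviation, size=4))
    perturbed[0], perturbed[-1] = 0.0, 1.0
    return np.sort(perturbed)

# Monte Carlo replications: each draw defines one scoring of the verbal labels,
# from which B-TOPSIS scores would be recomputed to obtain a distribution per city.
simulated = np.array([sample_utility() for _ in range(1000)])
print(simulated.mean(axis=0).round(3))
```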
Particular attention should be paid to Step 9, where AFSD is employed to compare results. A range of dominance degrees can be used within AFSD to confirm superiority between alternatives. We recommend using the midpoint between the two extremes, 0 and 0.5, i.e., 0.25, as it provides a reasonable balance between minimizing the number of incomparabilities (ties in the ranking) and maintaining sufficient dominance strength (i.e., A has an outperforming area at least three times greater than B: ¾ vs. ¼). However, if the DM is more conservative, a stricter dominance threshold, such as 10%, may be adopted, implying that A’s outperforming area is at least nine times greater than B’s (0.9 vs. 0.1).
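For completeness, the sketch below (Python with NumPy) shows one common formulation of almost first-order stochastic dominance (cf. [54]): A almost-FSD dominates B at level eps if the area where A’s empirical CDF lies above B’s does not exceed eps of the total area between the two CDFs. Function and variable names are hypothetical, and the exact implementation used in the protocol may differ in detail.

```python
import numpy as np

def afsd_dominates(a, b, eps=0.25, grid_size=1000):
    """Return True if sample a almost-FSD dominates sample b at level eps."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / a.size
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / b.size
    diff = cdf_a - cdf_b                                  # > 0 where a violates FSD over b
    violation = np.trapz(np.clip(diff, 0.0, None), grid)  # FSD-violation area
    total = np.trapz(np.abs(diff), grid)                  # total area between the CDFs
    return total > 0 and violation / total <= eps

# Illustrative use on two simulated B-TOPSIS score distributions (placeholder data).
rng = np.random.default_rng(0)
city_a = rng.normal(0.66, 0.02, 1000)
city_b = rng.normal(0.60, 0.02, 1000)
print(afsd_dominates(city_a, city_b, eps=0.25))  # expected: True
```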
Finally, Step 10 concerns the formation of quality clusters (classes) for longitudinal comparisons. We strongly recommend constructing these clusters using external fixed reference points, since limiting profiles derived from internal value distributions (e.g., percentiles) are sample-dependent and therefore unsuitable for cross-survey or intertemporal comparisons. The number of clusters should correspond to the desired precision of evaluation. In our study, we used five clusters to parallel the conventional five-point linguistic evaluation scale (from “very good” to “very bad”). Nonetheless, if higher granularity is desired, the number of clusters may be extended to seven, nine, or any configuration consistent with the decision-maker’s expected level of evaluative precision.
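As a minimal illustration of fixed, external class boundaries, the sketch below (Python with NumPy) assigns a single B-TOPSIS score to one of five classes; the boundary values are hypothetical, and in the protocol itself the comparison is made between a city’s whole simulated score distribution and the limiting profiles via AFSD rather than on a single average.

```python
import numpy as np

# Hypothetical fixed limiting profiles on the [0, 1] B-TOPSIS scale, defining
# five quality classes from "very bad" to "very good".
limiting_profiles = np.array([0.2, 0.4, 0.6, 0.8])
class_labels = ["very bad", "bad", "medium", "good", "very good"]

def assign_class(score):
    """Assign a score to a quality class using the external, sample-independent boundaries."""
    return class_labels[int(np.searchsorted(limiting_profiles, score, side="right"))]

print(assign_class(0.678))  # e.g., Zürich's average score -> "good" under these boundaries
print(assign_class(0.396))  # e.g., Roma's average score -> "bad" under these boundaries
```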
While this study presents valuable insights, several limitations should be acknowledged. The research focuses on governance-related aspects of quality of life and does not incorporate broader economic, social, and environmental factors that may shape public perceptions. Future studies should integrate additional indicators to explore the complex interactions between governance quality and urban well-being. Advanced structural models of the relationships among quality-of-life dimensions, described by subsets of survey questions, could also yield deeper descriptive conclusions.
Additionally, future research could explore different weighting schemes to assess the relative importance of various governance criteria. Comparing subjective perceptions (based on survey data) with objective indicators (such as administrative efficiency, public service performance, or digital governance metrics) could provide a more comprehensive evaluation of local administration quality. Examining alternative multi-criteria decision-making (MCDM) methods could also help validate the robustness of the B-TOPSIS approach in assessing urban governance effectiveness.

Author Contributions

Conceptualization, E.R., E.M. and T.W.; methodology, E.R., E.M. and T.W.; software, T.W.; validation, E.R., E.M. and T.W.; formal analysis, E.R., E.M. and T.W.; investigation, E.R., E.M. and T.W.; resources, E.R.; data curation, E.R., E.M. and T.W.; writing—original draft preparation, E.R. and T.W.; writing—review and editing, E.R., E.M. and T.W.; visualization, T.W.; supervision, E.R.; project administration, E.R.; funding acquisition, E.R., E.M. and T.W. All authors have read and agreed to the published version of the manuscript.

Funding

The contribution was supported by the grant WZ/WI-IIT/2/25 from Bialystok University of Technology and funded by the Ministry of Science and Higher Education, and by the ‘Regional Initiative of Excellence’ program financed by the Polish Ministry of Science and Higher Education.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data supporting the reported results can be found in [16]: Inforegio, Quality of Life in European Cities. Available online: https://ec.europa.eu/regional_policy/information-sources/maps/quality-of-life_en (accessed on 19 October 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hajian, M.; Kashani, S.J. Evolution of the Concept of Sustainability. From Brundtland Report to Sustainable Development Goals. Sustain. Resour. Manag. 2021, 1–24. [Google Scholar] [CrossRef]
  2. Purvis, B.; Mao, Y.; Robinson, D. Three Pillars of Sustainability: In Search of Conceptual Origins. Sustain. Sci. 2019, 14, 681–695. [Google Scholar] [CrossRef]
  3. Moser, G. Quality of Life and Sustainability: Toward Person–Environment Congruity. J. Environ. Psychol. 2009, 29, 351–357. [Google Scholar] [CrossRef]
  4. Gazzola, P.; Querci, E. The Connection between the Quality of Life and Sustainable Ecological Development. Eur. Sci. J. 2017, 13, 361–375. [Google Scholar]
  5. Berishvili, N. Agenda 2030 and the EU on Sustainable Cities and Communities. In Implementing Sustainable Development Goals in Europe; Edward Elgar Publishing: Cheltenham, UK, 2020; pp. 150–161. ISBN 978-1-78990-997-5. [Google Scholar]
  6. Klopp, J.M.; Petretta, D.L. The Urban Sustainable Development Goal: Indicators, Complexity and the Politics of Measuring Cities. Cities 2017, 63, 92–97. [Google Scholar] [CrossRef]
  7. Ronael, M.; Oruç Ertekin, G.D. Public Spaces for Future Cities: Mapping Urban Resilience Dimensions in Place-Based Solutions. Sustain. Cities Soc. 2025, 133, 106870. [Google Scholar] [CrossRef]
  8. SDG 11-Sustainable Cities and Communities. Available online: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=SDG_11_-_Sustainable_cities_and_communities (accessed on 22 February 2025).
  9. Mishra, P.; Singh, G. Sustainable Smart Cities: Enabling Technologies, Energy Trends and Potential Applications; Springer International Publishing: Cham, Switzerland, 2023; ISBN 978-3-031-33353-8. [Google Scholar]
  10. Trindade, E.P.; Hinnig, M.P.F.; da Costa, E.M.; Marques, J.S.; Bastos, R.C.; Yigitcanlar, T. Sustainable Development of Smart Cities: A Systematic Review of the Literature. J. Open Innov. Technol. Mark. Complex. 2017, 3, 1–14. [Google Scholar] [CrossRef]
  11. Ismagiloiva, E.; Hughes, L.; Rana, N.; Dwivedi, Y. Role of Smart Cities in Creating Sustainable Cities and Communities: A Systematic Literature Review. In ICT Unbounded, Social Impact of Bright ICT Adoption; Dwivedi, Y., Ayaburi, E., Boateng, R., Effah, J., Eds.; IFIP Advances in Information and Communication Technology; Springer International Publishing: Cham, Switzerland, 2019; Volume 558, pp. 311–324. ISBN 978-3-030-20670-3. [Google Scholar]
  12. De Dominicis, L.; Berlingieri, F.; d’Hombres, B.; Gentile, C.; Mauri, C.; Stepanova, E.; Pontarollo, N.; European Commission (Eds.) Report on the Quality of Life in European Cities, 2023; Publications Office of the European Union: Luxembourg, 2023; ISBN 978-92-68-07783-2. [Google Scholar]
  13. Acemoglu, D.; Johnson, S.; Robinson, J.A. Institutions as a Fundamental Cause of Long-Run Growth. In Handbook of Economic Growth; Aghion, P., Durlauf, S.N., Eds.; Elsevier: Amsterdam, The Netherlands, 2005; Volume 1, pp. 385–472. [Google Scholar]
  14. Rodríguez-Pose, A. Do Institutions Matter for Regional Development? Reg. Stud. 2013, 47, 1034–1047. [Google Scholar] [CrossRef]
  15. Bolsi, P.; de Dominicis, L.; Castelli, C.; d’Hombres, B.; Montalt, V.; Pontarollo, N. Report on the Quality of Life in European Cities, 2020; European Union: Brussels, Belgium, 2020. [Google Scholar]
  16. Inforegio—Quality of Life in European Cities. Available online: https://ec.europa.eu/regional_policy/information-sources/maps/quality-of-life_en (accessed on 19 October 2024).
  17. El Gibari, S.; Gómez, T.; Ruiz, F. Building Composite Indicators Using Multicriteria Methods: A Review. J. Bus. Econ. 2019, 89, 1–24. [Google Scholar] [CrossRef]
  18. Greco, S.; Ishizaka, A.; Tasiou, M.; Torrisi, G. On the Methodological Framework of Composite Indices: A Review of the Issues of Weighting, Aggregation, and Robustness. Soc. Indic. Res. 2019, 141, 61–94. [Google Scholar] [CrossRef]
  19. Lindén, D.; Cinelli, M.; Spada, M.; Becker, W.; Gasser, P.; Burgherr, P. A Framework Based on Statistical Analysis and Stakeholders’ Preferences to Inform Weighting in Composite Indicators. Environ. Model. Softw. 2021, 145, 105208. [Google Scholar] [CrossRef]
  20. Bartniczak, B.; Raszkowski, A. Implementation of the Sustainable Cities and Communities Sustainable Development Goal (SDG) in the European Union. Sustainability 2022, 14, 16808. [Google Scholar] [CrossRef]
  21. Wątróbski, J.; Bączkiewicz, A.; Ziemba, E.; Sałabun, W. Sustainable Cities and Communities Assessment Using the DARIA-TOPSIS Method. Sustain. Cities Soc. 2022, 83, 103926. [Google Scholar] [CrossRef]
  22. Wątróbski, J.; Bączkiewicz, A.; Ziemba, E.; Sałabun, W. Temporal VIKOR—A New MCDA Method Supporting Sustainability Assessment. In Advances in Information Systems Development; Silaghi, G.C., Buchmann, R.A., Niculescu, V., Czibula, G., Barry, C., Lang, M., Linger, H., Schneider, C., Eds.; Lecture Notes in Information Systems and Organisation; Springer International Publishing: Cham, Switzerland, 2023; Volume 63, pp. 187–206. ISBN 978-3-031-32417-8. [Google Scholar]
  23. Górecka, D.; Roszkowska, E. Enhancing Spatial Analysis through Reference Multi-Criteria Methods: A Study Evaluating EU Countries in Terms of Sustainable Cities and Communities. Netw. Spat. Econ. 2025, 25, 1–42. [Google Scholar] [CrossRef]
  24. Roszkowska, E.; Filipowicz-Chomko, M.; Górecka, D.; Majewska, E. Sustainable Cities and Communities in EU Member States: A Multi-Criteria Analysis. Sustainability 2025, 17, 22. [Google Scholar] [CrossRef]
  25. Pike, A.; Rodríguez-Pose, A.; Tomaney, J. What Kind of Local and Regional Development and for Whom? Reg. Stud. 2007, 41, 1253–1269. [Google Scholar] [CrossRef]
  26. Jefmański, B. Intuitionistic Fuzzy Synthetic Measure for Ordinal Data. In Proceedings of the Conference of the Section on Classification and Data Analysis of the Polish Statistical Association, Szczecin, Poland, 18–20 September 2019; pp. 53–72. [Google Scholar]
  27. Jefmański, B.; Roszkowska, E.; Kusterka-Jefmańska, M. Intuitionistic Fuzzy Synthetic Measure on the Basis of Survey Responses and Aggregated Ordinal Data. Entropy 2021, 23, 1636. [Google Scholar] [CrossRef]
  28. Kusterka-Jefmańska, M.; Roszkowska, E.; Jefmański, B. The Intuitionistic Fuzzy Synthetic Measure in a Dynamic Analysis of the Subjective Quality of Life of Citizens of European Cities. Econ. Environ. 2024, 88, 708. [Google Scholar] [CrossRef]
  29. Kusterka-Jefmańska, M.; Jefmański, B.; Roszkowska, E. Application of the Intuitionistic Fuzzy Synthetic Measure in the Subjective Quality of Life Measurement Based on Survey Data. In Modern Classification and Data Analysis, Proceedings of the SKAD 2021, Poland, Online, 8–10 September 2021; Jajuga, K., Dehnel, G., Walesiak, M., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 243–261. [Google Scholar]
  30. Roszkowska, E.; Filipowicz-Chomko, M.; Kusterka-Jefmańska, M.; Jefmański, B. The Impact of the Intuitionistic Fuzzy Entropy-Based Weights on the Results of Subjective Quality of Life Measurement Using Intuitionistic Fuzzy Synthetic Measure. Entropy 2023, 25, 961. [Google Scholar] [CrossRef] [PubMed]
  31. Roszkowska, E.; Kusterka-Jefmańska, M.; Jefmański, B. Intuitionistic Fuzzy TOPSIS as a Method for Assessing Socioeconomic Phenomena on the Basis of Survey Data. Entropy 2021, 23, 563. [Google Scholar] [CrossRef] [PubMed]
  32. Roszkowska, E.; Wachowicz, T. Smart Cities and Resident Well-Being: Using the BTOPSIS Method to Assess Citizen Life Satisfaction in European Cities. Appl. Sci. 2024, 14, 11051. [Google Scholar] [CrossRef]
  33. Nowicka, K. Smart City Logistics on Cloud Computing Model. Procedia Soc. Behav. Sci. 2014, 151, 266–281. [Google Scholar] [CrossRef]
  34. Ziller, C.; Andreß, H.-J. Quality of Local Government and Social Trust in European Cities. Urban Stud. 2022, 59, 1909–1925. [Google Scholar] [CrossRef]
  35. Kim, S.; Lee, J. E-Participation, Transparency, and Trust in Local Government. Public Adm. Rev. 2012, 72, 819–828. [Google Scholar] [CrossRef]
  36. del Sol, D.A. The Institutional, Economic and Social Determinants of Local Government Transparency. J. Econ. Policy Reform. 2013, 16, 90–107. [Google Scholar] [CrossRef]
  37. Boyle, R. Using Fees and Charges-Cost Recovery in Local Government. Local Gov. Res. Ser. Rep. 2012, 3, 6–32. [Google Scholar]
  38. Frías-Aceituno, J.V.; García-Sánchez, I.M.; Rodríguez-Domínguez, L. Electronic Administration Styles and Their Determinants. Evidence from Spanish Local Governments. Transylv. Rev. Adm. Sci. 2014, 10, 90–108. [Google Scholar]
  39. Nowak, P.A.; Czekaj, M.; Salachna, T. Digital Accessibility of Social Welfare Institutions. Rozpr. Społeczne Soc. Diss. 2024, 18, 299–314. [Google Scholar] [CrossRef]
  40. Habibov, N.; Auchynnikava, A.; Luo, R.; Fan, L. The Importance of Institutional Trust in Explaining Life-Satisfaction: Lessons From the 2008 Global Financial Crisis. Probl. Econ. Transit. 2022, 63, 401–443. [Google Scholar] [CrossRef]
  41. de Vries, M.S.; Sobis, I. Trust in the Local Administration: A Comparative Study between Capitals and Non-Capital Cities in Europe. NISPAcee J. Public. Adm. Policy 2018, 11, 209–228. [Google Scholar] [CrossRef]
  42. Gründler, K.; Potrafke, N. Corruption and Economic Growth: New Empirical Evidence. Eur. J. Political Econ. 2019, 60, 101810. [Google Scholar] [CrossRef]
  43. Charron, N.; Dijkstra, L.; Lapuente, V. Regional Governance Matters: Quality of Government within European Union Member States. Reg. Stud. 2023, 48, 68–90. [Google Scholar] [CrossRef]
  44. Charron, N.; Lapuente, V.; Annoni, P. Measuring Quality of Government in EU Regions across Space and Time. Pap. Reg. Sci. 2019, 98, 1925–1954. [Google Scholar] [CrossRef]
  45. Dinesen, P.T.; Sønderskov, K.M. Quality of Government and Social Trust. In The Oxford Handbook of the Quality of Government; Bågenholm, A., Bauhr, M., Grimes, M., Rothstein, B., Eds.; Oxford University Press: Oxford, UK, 2021; ISBN 978-0-19-885821-8. [Google Scholar]
  46. García-Sánchez, I.-M.; Rodríguez-Domínguez, L.; Gallego-Álvarez, I. The Relationship between Political Factors and the Development of E–Participatory Government. Inf. Soc. 2011, 27, 233–251. [Google Scholar] [CrossRef]
  47. Abdelzadeh, A.; Lundberg, E. The Longitudinal Link between Institutional and Community Trust in a Local Context—Findings from a Swedish Panel Study. Local Gov. Stud. 2024, 51, 747–767. [Google Scholar] [CrossRef]
  48. Shapley, L.S.; Shubik, M. Pure Competition, Coalitional Power, and Fair Division. Int. Econ. Rev. 1969, 10, 337–362. [Google Scholar] [CrossRef]
  49. Raiffa, H.; Richardson, J.; Metcalfe, D. Negotiation Analysis: The Science and Art of Collaborative Decision Making; Harvard University Press: Cambridge, MA, USA, 2002; ISBN 978-0-674-00890-8. [Google Scholar]
  50. Gan, X.; Fernandez, I.C.; Guo, J.; Wilson, M.; Zhao, Y.; Zhou, B.; Wu, J. When to Use What: Methods for Weighting and Aggregating Sustainability Indicators. Ecol. Indic. 2017, 81, 491–502. [Google Scholar] [CrossRef]
  51. Decancq, K.; Lugo, M.A. Weights in Multidimensional Indices of Wellbeing: An Overview: Econometric Reviews. Econom. Rev. 2013, 32, 7–34. [Google Scholar] [CrossRef]
  52. Howard, M.C. A Review of Exploratory Factor Analysis Decisions and Overview of Current Practices: What We Are Doing and How Can We Improve? Int. J. Hum. Comput. Interact. 2016, 32, 51–62. [Google Scholar] [CrossRef]
  53. Jiang, J.; Chen, Y.-W.; Tang, D.-W.; Chen, Y.-W. TOPSIS with Belief Structure for Group Belief Multiple Criteria Decision Making. Int. J. Autom. Comput. 2010, 7, 359–364. [Google Scholar] [CrossRef]
  54. Levy, M. Almost Stochastic Dominance and Efficient Investment Sets. Am. J. Oper. Res. 2012, 2, 313–321. [Google Scholar] [CrossRef]
  55. French, S. Decision Theory: An Introduction to the Mathematics of Rationality; Halsted Press: Sydney, Australia, 1986. [Google Scholar]
  56. Michalska, E.; Dudzińska-Baryła, R. Comparison of the Valuations of Alternatives Based on Cumulative Prospect Theory and Almost Stochastic Dominance. Oper. Res. Decis. 2012, 22, 23–36. [Google Scholar]
  57. Łuczak, A.; Kalinowski, S. A Fuzzy Hybrid MCDM Approach to the Evaluation of Subjective Household Poverty. Stat. Transit. New Ser. 2025, 26, 69–91. [Google Scholar] [CrossRef]
  58. Roszkowska, E. A Comprehensive Exploration of Hellwig’s Taxonomic Measure of Development and Its Modifications—A Systematic Review of Algorithms and Applications. Appl. Sci. 2024, 14, 10029. [Google Scholar] [CrossRef]
  59. Łuczak, A.; Just, M. A Complex MCDM Procedure for the Assessment of Economic Development of Units at Different Government Levels. Mathematics 2020, 8, 1067. [Google Scholar] [CrossRef]
  60. van Raan, A.F.J. Urban Scaling in Denmark, Germany, and the Netherlands: Relation with Governance Structures. arXiv 2019, arXiv:1903.03004. [Google Scholar] [CrossRef]
  61. Filip, D.; Setzer, R. The Impact of Regional Institutional Quality on Economic Growth and Resilience in the EU; The European Central Bank: Frankfurt am Main, Germany, 2025. [Google Scholar]
  62. Ketterer, T.D.; Rodríguez-Pose, A. Institutions vs. ‘first-nature’ Geography: What Drives Economic Growth in Europe’s Regions? Pap. Reg. Sci. 2018, 97, S25–S63. [Google Scholar] [CrossRef]
  63. Charron, N.; Lapuente, V. Why Do Some Regions in Europe Have a Higher Quality of Government? J. Politics 2013, 75, 567–582. [Google Scholar] [CrossRef]
  64. Rodríguez-Pose, A.; Di Cataldo, M. Quality of Government and Innovative Performance in the Regions of Europe. J. Econ. Geogr. 2015, 15, 673–706. [Google Scholar] [CrossRef]
  65. Yıldırım, S.; Bostancı, S.H. The Efficiency of E-Government Portal Management from a Citizen Perspective: Evidences from Turkey. World J. Sci. Technol. Sustain. Dev. 2021, 18, 259–273. [Google Scholar] [CrossRef]
  66. Thissen, M.; Van Oort, F.; McCann, P.; Ortega-Argilés, R.; Husby, T. The Implications of Brexit for UK and EU Regional Competitiveness. Econ. Geogr. 2020, 96, 397–421. [Google Scholar] [CrossRef]
  67. Hirsch, B.; Schäfer, F.-S.; Aristovnik, A.; Kovač, P.; Ravšelj, D. The Impact of Digitalized Communication on the Effectiveness of Local Administrative Authorities — Findings from Central European Countries in the COVID-19 Crisis. J. Bus. Econ. 2023, 93, 173–192. [Google Scholar] [CrossRef] [PubMed]
  68. European Commission. Digital Decade—Policy Programme. The Yearly Editions of the Digital Decade Report. 2025. Available online: https://digital-strategy.ec.europa.eu/en/library/digital-decade-2025-country-reports (accessed on 15 October 2025).
  69. Kuhlmann, S.; Wollmann, H.; Reiter, R. Introduction to Comparative Public Administration: Administrative Systems and Reforms in Europe, 3rd ed.; Edward Elgar Publishing: Cheltenham, UK, 2025; ISBN 978-1-0353-0247-5. [Google Scholar]
  70. Bisogno, M.; Cuadrado-Ballesteros, B.; Abate, F. The Role of Institutional and Operational Factors in the Digitalization of Large Local Governments: Insights from Italy. Int. J. Public Sect. Manag. 2024, 38, 238–258. [Google Scholar] [CrossRef]
  71. Węziak-Białowolska, D. Quality of Life in Cities–Empirical Evidence in Comparative European Perspective. Cities 2016, 58, 87–96. [Google Scholar] [CrossRef]
  72. Nicolás-Martínez, C.; Pérez-Cárceles, M.C.; Riquelme-Perea, P.J.; Verde-Martín, C.M. Are Cities Decisive for Life Satisfaction? A Structural Equation Model for the European Population. Soc. Indic. Res. 2024, 174, 1025–1051. [Google Scholar] [CrossRef]
  73. Pazos-García, M.J.; López-López, V.; Vila-Vázquez, G.; González, X.P. Governance, Quality of Life and City Performance: A Study Based on Artificial Intelligence. J. Comput. Soc. Sc. 2025, 8, 82. [Google Scholar] [CrossRef]
  74. Meuleman, L. Metagovernance for Sustainability: A Framework for Implementing the Sustainable Development Goals; Routledge: London, UK, 2018; ISBN 978-1-351-25060-3. [Google Scholar]
Figure 1. A part of the Quality of Life in European Cities aggregated survey assignment sheet [16] for Q1.
Figure 2. Boxplots illustrating the distribution of responses for Q1: Efficiency. Source: [16].
Figure 3. Boxplots illustrating the distribution of responses for Q2: Transparency. Source: [16].
Figure 4. Boxplots illustrating the distribution of responses for Q3: Cost. Source: [16].
Figure 5. Boxplots illustrating the distribution of responses for Q4: Digital accessibility. Source: [16].
Figure 6. Boxplots illustrating the distribution of responses for Q5: Trust. Source: [16].
Figure 7. Hasse diagrams for preference relations obtained from FSD (a) and AFSD(0.25) (b) for European towns (numbered).
Figure 8. Boxplots illustrating the distribution of B-TOPSIS average scores categorized by city population.
Figure 9. Boxplots illustrating the distribution of B-TOPSIS average scores categorized by groups of countries.
Figure 10. Relationship between life satisfaction and satisfaction from local administration.
Table 1. Respondent characteristics by gender, age, and education level.
Characteristics of Respondents | Category | Percentage
Gender | Male | 47.12%
Gender | Female | 52.88%
Age | 15–19 | 5.04%
Age | 20–24 | 10.54%
Age | 25–34 | 17.55%
Age | 35–44 | 16.63%
Age | 45–54 | 14.64%
Age | 55–64 | 13.94%
Age | 65–74 | 13.69%
Age | 75+ | 7.96%
Age | Don’t know/No Answer/Refuses | 0.00%
Level of Education | Less than Primary education | 0.30%
Level of Education | Primary education | 1.89%
Level of Education | Lower secondary education | 15.22%
Level of Education | Upper secondary education | 37.97%
Level of Education | Post-secondary non-tertiary education | 8.18%
Level of Education | Short-cycle tertiary education | 8.92%
Level of Education | Bachelor or equivalent | 14.75%
Level of Education | Master or equivalent | 10.53%
Level of Education | Doctoral or equivalent | 1.65%
Level of Education | Don’t know/No Answer/Refuses | 0.60%
Source: [16].
Table 2. Survey questions (Q1–Q5) and corresponding evaluation criteria (C1–C5).
Question | Criterion | References
Q1: I am satisfied with the amount of time it takes to get a request solved by my local public administration. | C1: Efficiency – the time required to resolve administrative requests. | [33,34]
Q2: The procedures used by my local public administration are straightforward and easy to understand. | C2: Transparency – clarity and simplicity of administrative procedures. | [35,36,37]
Q3: The fees charged by my local public administration are reasonable. | C3: Costs – fairness and reasonableness of fees charged by local authorities. | [37]
Q4: Information and services of my local public administration can be easily accessed online. | C4: Digital accessibility – ease of accessing public services online. | [38,39]
Q5: There is corruption in my local public administration. | C5: Trust – perception of corruption within local institutions. | [34,40,41,42,43,44,45]
Source: [16].
Table 3. Transformation of response options for the satisfaction scale in Zürich.
Response Options | Q1 | Q2 | Q3 | Q4 | Q5
Strongly Disagree | 2.83% | 3.65% | 2.36% | 0.73% | 31.20%
Somewhat Disagree | 11.75% | 19.71% | 17.06% | 8.46% | 13.46%
Somewhat Agree | 42.79% | 43.81% | 51.44% | 36.02% | 13.46%
Strongly Agree | 27.74% | 26.77% | 25.92% | 45.12% | 3.56%
99. Don’t know/No Answer/Refuses | 14.89% | 6.06% | 3.22% | 9.67% | 15.47%
Satisfaction scale | C1 | C2 | C3 | C4 | C5
Very unsatisfied | 2.83% | 3.65% | 2.36% | 0.73% | 3.56%
Rather unsatisfied | 11.75% | 19.71% | 17.06% | 8.46% | 13.46%
Rather satisfied | 42.79% | 43.81% | 51.44% | 36.02% | 36.31%
Very satisfied | 27.74% | 26.77% | 25.92% | 45.12% | 31.20%
99. Don’t know/No Answer/Refuses | 14.89% | 6.06% | 3.22% | 9.67% | 15.47%
Source: [16].
Table 4. Evaluation of satisfaction from local administration in European cities based on the B-TOPSIS method in 2023 year.
No. | City | Country | City Group | Population | D+ | D− | Average B-TOPSIS Score (Ri) | SD | Rank FSD | Rank AFSD (0.25)
1. | Aalborg | DK | Northern MS | 1 | 0.414 | 0.712 | 0.643 | 0.022 | 4 | 7
2. | Amsterdam | NL | Western MS | 4 | 0.458 | 0.652 | 0.594 | 0.019 | 14 | 19
3. | Ankara | TR | Other | 5 | 0.388 | 0.684 | 0.644 | 0.017 | 3 | 5
4. | Antalya | TR | Other | 4 | 0.379 | 0.703 | 0.659 | 0.018 | 2 | 3
5. | Antwerpen | BE | Western MS | 3 | 0.469 | 0.642 | 0.584 | 0.021 | 18 | 23
6. | Athens | EL | Southern MS | 4 | 0.607 | 0.505 | 0.446 | 0.018 | 49 | 61
7. | Barcelona | ES | Southern MS | 4 | 0.513 | 0.571 | 0.527 | 0.017 | 41 | 49
8. | Belfast | UK | Other | 2 | 0.482 | 0.640 | 0.577 | 0.021 | 22 | 27
9. | Belgrade | RS | Western Balkans | 4 | 0.635 | 0.464 | 0.410 | 0.015 | 52 | 65
10. | Berlin | DE | Western MS | 4 | 0.557 | 0.595 | 0.517 | 0.021 | 42 | 51
11. | Białystok | PL | Eastern MS | 2 | 0.496 | 0.683 | 0.584 | 0.025 | 15 | 23
12. | Bologna | IT | Southern MS | 2 | 0.539 | 0.630 | 0.540 | 0.023 | 36 | 45
13. | Bordeaux | FR | Western MS | 3 | 0.456 | 0.661 | 0.598 | 0.020 | 13 | 17
14. | Braga | PT | Southern MS | 1 | 0.576 | 0.638 | 0.524 | 0.023 | 40 | 49
15. | Bratislava | SK | Eastern MS | 2 | 0.522 | 0.631 | 0.547 | 0.022 | 34 | 43
16. | Bruxelles | BE | Western MS | 4 | 0.425 | 0.671 | 0.617 | 0.019 | 6 | 11
17. | Bucharest | RO | Eastern MS | 4 | 0.550 | 0.571 | 0.500 | 0.018 | 44 | 54
18. | Budapest | HU | Eastern MS | 4 | 0.464 | 0.637 | 0.589 | 0.019 | 16 | 22
19. | Burgas | BG | Eastern MS | 1 | 0.500 | 0.605 | 0.545 | 0.017 | 33 | 43
20. | Cardiff | UK | Other | 2 | 0.450 | 0.686 | 0.615 | 0.022 | 8 | 12
21. | Cluj-Napoca | RO | Eastern MS | 2 | 0.457 | 0.672 | 0.596 | 0.019 | 13 | 18
22. | Copenhagen | DK | Northern MS | 4 | 0.410 | 0.711 | 0.644 | 0.021 | 4 | 6
23. | Diyarbakir | TR | Other | 4 | 0.523 | 0.553 | 0.512 | 0.017 | 42 | 52
24. | Dortmund | DE | Western MS | 3 | 0.538 | 0.627 | 0.540 | 0.022 | 36 | 45
25. | Dublin | IE | Western MS | 4 | 0.465 | 0.670 | 0.597 | 0.022 | 13 | 18
26. | Essen | DE | Western MS | 3 | 0.499 | 0.647 | 0.571 | 0.021 | 26 | 32
27. | Gdańsk | PL | Eastern MS | 2 | 0.487 | 0.674 | 0.584 | 0.023 | 17 | 23
28. | Genève | CH | EFTA | 2 | 0.441 | 0.698 | 0.620 | 0.022 | 6 | 10
29. | Glasgow | UK | Other | 4 | 0.486 | 0.641 | 0.575 | 0.020 | 23 | 28
30. | Graz | AT | Western MS | 2 | 0.430 | 0.703 | 0.629 | 0.022 | 5 | 8
31. | Groningen | NL | Western MS | 1 | 0.416 | 0.696 | 0.645 | 0.020 | 3 | 4
32. | Hamburg | DE | Western MS | 4 | 0.503 | 0.668 | 0.574 | 0.023 | 20 | 29
33. | Helsinki | FI | Northern MS | 4 | 0.488 | 0.642 | 0.572 | 0.021 | 24 | 31
34. | Heraklion | EL | Southern MS | 1 | 0.598 | 0.502 | 0.456 | 0.015 | 48 | 60
35. | Istanbul | TR | Other | 5 | 0.503 | 0.580 | 0.536 | 0.017 | 37 | 47
36. | Košice | SK | Eastern MS | 1 | 0.504 | 0.644 | 0.561 | 0.021 | 28 | 34
37. | Kraków | PL | Eastern MS | 3 | 0.520 | 0.635 | 0.553 | 0.022 | 30 | 37
38. | Lefkosia | CY | Southern MS | 1 | 0.494 | 0.622 | 0.561 | 0.018 | 27 | 34
39. | Leipzig | DE | Western MS | 3 | 0.525 | 0.653 | 0.561 | 0.024 | 28 | 34
40. | Liège | BE | Western MS | 2 | 0.433 | 0.671 | 0.614 | 0.019 | 10 | 13
41. | Lille | FR | Western MS | 3 | 0.481 | 0.637 | 0.576 | 0.021 | 23 | 28
42. | Lisboa | PT | Southern MS | 4 | 0.597 | 0.588 | 0.493 | 0.022 | 44 | 56
43. | Ljubljana | SI | Eastern MS | 2 | 0.519 | 0.630 | 0.549 | 0.021 | 34 | 42
44. | London | UK | Other | 5 | 0.503 | 0.626 | 0.558 | 0.021 | 29 | 35
45. | Luxembourg | LU | Western MS | 1 | 0.388 | 0.745 | 0.662 | 0.022 | 2 | 2
46. | Madrid | ES | Southern MS | 5 | 0.504 | 0.596 | 0.542 | 0.019 | 35 | 44
47. | Málaga | ES | Southern MS | 3 | 0.494 | 0.603 | 0.552 | 0.018 | 31 | 37
48. | Malmö | SE | Northern MS | 2 | 0.439 | 0.665 | 0.616 | 0.020 | 7 | 12
49. | Manchester | UK | Other | 4 | 0.460 | 0.665 | 0.601 | 0.021 | 12 | 16
50. | Marseille | FR | Western MS | 3 | 0.512 | 0.595 | 0.534 | 0.018 | 38 | 48
51. | Miskolc | HU | Eastern MS | 1 | 0.474 | 0.637 | 0.584 | 0.019 | 18 | 23
52. | München | DE | Western MS | 4 | 0.477 | 0.668 | 0.591 | 0.022 | 14 | 21
53. | Napoli | IT | Southern MS | 4 | 0.645 | 0.481 | 0.422 | 0.019 | 51 | 63
54. | Oslo | NO | EFTA | 3 | 0.510 | 0.614 | 0.549 | 0.020 | 33 | 41
55. | Ostrava | CZ | Eastern MS | 2 | 0.512 | 0.634 | 0.551 | 0.020 | 32 | 39
56. | Oulu | FI | Northern MS | 1 | 0.493 | 0.648 | 0.573 | 0.022 | 23 | 30
57. | Oviedo | ES | Southern MS | 1 | 0.533 | 0.560 | 0.511 | 0.018 | 43 | 53
58. | Palermo | IT | Southern MS | 3 | 0.683 | 0.467 | 0.402 | 0.021 | 52 | 66
59. | Paris | FR | Western MS | 5 | 0.470 | 0.639 | 0.582 | 0.019 | 19 | 24
60. | Piatra Neamţ | RO | Eastern MS | 1 | 0.504 | 0.623 | 0.554 | 0.020 | 30 | 36
61. | Podgorica | ME | Western Balkans | 1 | 0.581 | 0.525 | 0.464 | 0.016 | 47 | 58
62. | Praha | CZ | Eastern MS | 4 | 0.535 | 0.626 | 0.537 | 0.022 | 39 | 47
63. | Rennes | FR | Western MS | 2 | 0.440 | 0.692 | 0.620 | 0.022 | 6 | 9
64. | Reykjavík | IS | EFTA | 1 | 0.543 | 0.588 | 0.520 | 0.020 | 41 | 50
65. | Riga | LV | Eastern MS | 3 | 0.593 | 0.525 | 0.463 | 0.018 | 47 | 59
66. | Roma | IT | Southern MS | 4 | 0.682 | 0.453 | 0.396 | 0.018 | 53 | 67
67. | Rostock | DE | Western MS | 1 | 0.518 | 0.664 | 0.568 | 0.024 | 25 | 33
68. | Rotterdam | NL | Western MS | 4 | 0.459 | 0.647 | 0.593 | 0.020 | 14 | 20
69. | Skopje | MK | Western Balkans | 2 | 0.629 | 0.463 | 0.414 | 0.014 | 51 | 64
70. | Sofia | BG | Eastern MS | 4 | 0.547 | 0.555 | 0.496 | 0.017 | 45 | 55
71. | Stockholm | SE | Northern MS | 4 | 0.490 | 0.646 | 0.579 | 0.021 | 21 | 25
72. | Strasbourg | FR | Western MS | 2 | 0.447 | 0.681 | 0.611 | 0.022 | 11 | 15
73. | Tallinn | EE | Eastern MS | 2 | 0.519 | 0.631 | 0.550 | 0.021 | 32 | 40
74. | Torino | IT | Southern MS | 3 | 0.606 | 0.551 | 0.475 | 0.021 | 46 | 57
75. | Tyneside conurbation | UK | Other | 3 | 0.485 | 0.641 | 0.578 | 0.021 | 21 | 26
76. | Valletta | MT | Southern MS | 2 | 0.413 | 0.696 | 0.645 | 0.021 | 3 | 4
77. | Verona | IT | Southern MS | 2 | 0.570 | 0.616 | 0.519 | 0.023 | 42 | 50
78. | Vilnius | LT | Eastern MS | 3 | 0.519 | 0.636 | 0.552 | 0.021 | 30 | 38
79. | Warszawa | PL | Eastern MS | 4 | 0.537 | 0.627 | 0.539 | 0.023 | 36 | 46
80. | Wien | AT | Western MS | 4 | 0.448 | 0.696 | 0.614 | 0.023 | 9 | 14
81. | Zagreb | HR | Eastern MS | 3 | 0.632 | 0.486 | 0.425 | 0.018 | 50 | 62
82. | Zürich | CH | EFTA | 3 | 0.386 | 0.762 | 0.678 | 0.024 | 1 | 1
Notes: Country codes: BE: Belgium; BG: Bulgaria; CZ: Czechia; DK: Denmark; DE: Germany; EE: Estonia; IE: Ireland; EL: Greece; ES: Spain; FR: France; HR: Croatia; IT: Italy; CY: Cyprus; LV: Latvia; LT: Lithuania; LU: Luxembourg; HU: Hungary; MT: Malta; NL: Netherlands; AT: Austria; PL: Poland; PT: Portugal; RO: Romania; SI: Slovenia; SK: Slovakia; FI: Finland; SE: Sweden; ME: Montenegro; AL: Albania; CH: Switzerland; IS: Iceland; MK: North Macedonia; NO: Norway; RS: Serbia; TR: Türkiye; UK: United Kingdom; MS: Member States. Population: 1: Less than 250,000 inhabitants; 2: Between 250,000 and 500,000 inhabitants; 3: Between 500,000 and 1,000,000 inhabitants; 4: Between 1,000,000 and 5,000,000 inhabitants; 5: More than 5,000,000 inhabitants.
Table 5. Classification of cities into the categories of quality and their changes between the 2019 and 2023 surveys.
High (2019 survey): Aalborg, Amsterdam, Antalya, Bordeaux, Bruxelles, Cardiff, Copenhagen, Dublin, Genève, Glasgow, Graz, Groningen, Liège, London, Luxembourg, Malmö, Manchester, Miskolc, München, Rennes, Rotterdam, Tyneside conurbation, Valletta, Wien, Zürich
High (2023 survey): Aalborg, Ankara, Antalya, Bruxelles, Cardiff, Copenhagen, Genève, Graz, Groningen, Liège, Luxembourg, Malmö, Rennes, Strasbourg, Valletta, Wien, Zürich
Medium (2019 survey): Ankara, Antwerpen, Athens, Barcelona, Belfast, Belgrade, Berlin, Białystok, Bologna, Braga, Bratislava, Bucharest, Budapest, Burgas, Cluj-Napoca, Diyarbakir, Dortmund, Essen, Gdańsk, Hamburg, Helsinki, Heraklion, Istanbul, Košice, Kraków, Lefkosia, Leipzig, Lille, Lisboa, Ljubljana, Madrid, Málaga, Marseille, Napoli, Oslo, Ostrava, Oulu, Oviedo, Paris, Piatra Neamţ, Podgorica, Praha, Reykjavík, Riga, Rostock, Skopje, Sofia, Stockholm, Strasbourg, Tallinn, Torino, Verona, Vilnius, Warszawa, Zagreb
Medium (2023 survey): Amsterdam, Antwerpen, Athens, Barcelona, Belfast, Belgrade, Berlin, Białystok, Bologna, Bordeaux, Braga, Bratislava, Bucharest, Budapest, Burgas, Cluj-Napoca, Diyarbakir, Dortmund, Dublin, Essen, Gdańsk, Glasgow, Hamburg, Helsinki, Heraklion, Istanbul, Košice, Kraków, Lefkosia, Leipzig, Lille, Lisboa, Ljubljana, London, Madrid, Málaga, Manchester, Marseille, Miskolc, München, Napoli, Oslo, Ostrava, Oulu, Oviedo, Paris, Piatra Neamţ, Podgorica, Praha, Reykjavík, Riga, Rostock, Rotterdam, Skopje, Sofia, Stockholm, Tallinn, Torino, Tyneside conurbation, Verona, Vilnius, Warszawa, Zagreb
Poor (2019 survey): Palermo, Roma
Poor (2023 survey): Palermo, Roma
Legend: Green: improvement (shift to a higher class); Blue: deterioration (decline to a lower class) in satisfaction with public administration in 2023 compared to 2019.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
