Article

Conceptualizing and Validating a Model for Benchlearning Capability: Results from the Greek Public Sector

by Eftychia Kessopoulou *, Katerina Gotzamani, Styliani Xanthopoulou and George Tsiotras

Business Excellence Laboratory, Department of Business Administration, School of Business Administration, University of Macedonia, 54636 Thessaloniki, Greece

* Author to whom correspondence should be addressed.
Sustainability 2023, 15(2), 1383; https://doi.org/10.3390/su15021383
Submission received: 8 November 2022 / Revised: 17 December 2022 / Accepted: 6 January 2023 / Published: 11 January 2023

Abstract
This paper aims to report on the development and assessment of a conceptual model for benchlearning capability, which facilitates sustainable performance improvement. Following an in-depth literature review, two main dimensions of benchlearning capability were identified. A focus group approach was used to establish the connection between these dimensions and the main construct (benchlearning capability). A questionnaire was designed and administered to 502 individuals from 74 organizations that used the Common Assessment Framework, and a total of 163 respondents replied. For the structural model assessment, the PLS-SEM technique was employed. The literature reveals that benchlearning encompasses both comparative evaluation and organizational learning mechanisms. Moreover, the focus group concluded that Organizational Learning Capability (OLC) and Benchmarking Capability (BMKC) are positively related to Benchlearning Capability (BLNC). The quantitative analysis showed that the factors OLC and BMKC are positively and significantly related to BLNC. This paper is the first attempt to approach the benchlearning capability construct and to validate its model. It is also a first attempt towards providing empirical evidence that could help public managers understand the BLNC concept and formulate the appropriate strategy for improving benchlearning capability and therefore achieving sustainable performance in their organizations.

1. Introduction

Public organizations, which are knowledge-based and knowledge-intensive, explore efficient management methods and sustainable development tools to perform successfully in a dynamic organizational environment [1]. The introduction of quality in the public sector came as a response to the problems of inefficiency, wastefulness, and citizen-detached services that characterized public organizations [2], as well as to increasing public pressure [3,4] and the requirements of the globalized environment [5]. Excellence models, customer satisfaction measurement and/or self-assessment tools that were used in the private sector, as well as quality management concepts and principles, were regarded as appealing for adoption in the public sector in order to help resolve its malfunctions. Their implementation, though, brought inconsistent findings and therefore raised concerns about their alleged benefits in the public sector [6,7,8]. As a result, criticisms calling for a more critical adaptation of these tools to the public sector context increased [9,10,11], possibly due to the differences between the public and private sectors [12,13].
Addressing this call, the European Institute of Public Administration developed a total quality management (TQM) tool for the public sector, known as the Common Assessment Framework (CAF), which draws on the European Foundation of Quality Management (EFQM) model [14]. In this case, the use of CAF by public organizations could be claimed to transcend the level of imitation of private sector tools since it is a tool adapted to the public sector. The CAF employs a structured framework for self-assessing the ability of public organizations to embrace TQM and undertake benchlearning activities in order to achieve leapfrog improvements in quality and performance [15]. Specifically, the latest version, CAF 2020, focuses on digitization, agility, sustainability, innovation, collaboration and diversity. Since organizational learning capability is an important element for achieving sustainable organizational performance in a rapidly changing organizational environment [1], the CAF could also be regarded as an organizational learning tool that provides a structured and systematic learning framework through which an organization introspects on its functions, learns from the experiences of other public entities, and prepares action plans for sustainable performance improvement.
Benchlearning is an extension of the already established business concept of benchmarking, which means “…the search for industry best practices that lead to superior performance” [16] (p. 12). Benchlearning refers to the sharing and comparing of experiences and knowledge inside and/or outside organizations through a process of collaborative learning for the purpose of identifying and establishing best practices [17]. It is a methodology that includes social aspects of organizational learning, since organizations or parts of them are called to cooperate with others after undertaking self-assessment activities of their performance.
Up until now, there has been a gap in the analysis of this theoretical construct, limiting the ability of public managers to exploit its full potential. Extant research in the public sector approaches the concept of benchlearning from different viewpoints, drawing mainly on existing benchmarking implementations. According to Ammons and Roenigk [18], distinct types of these implementations are the comparison of performance statistics, visioning initiatives and best practice benchmarking. Best practice benchmarking seems to be closest to the concept of benchlearning. It can be found informally, when public organizations adopt an approach similar to that of best performers in hopes of achieving improved results, or formally, when they follow a structured benchmarking approach to learn from others [18]. The informal route highlights a behavioural benchlearning approach of mimetic learning from others, whereas the formal route reflects a more cognitive benchlearning viewpoint. Furthermore, other researchers consider benchlearning from a social approach [19,20,21], where benchlearning is positioned as a process of learning with others that share mutual problems.
Although research on existing public sector applications of best practice benchmarking has started to shed light on the organizational learning mechanism of benchlearning [18], certain aspects still need further analysis. The added value of this research is to identify the factors of the benchlearning construct and address its multidimensionality, as well as to transcend the limits of overemphasizing the "marking" side of this methodology, which mainly aims to compare performance data and information rather than address knowledge issues. The idea that organizational learning takes place when undertaking best practice benchmarking in the public sector comes up against the lack of a conceptual model that could help public managers and researchers identify the factors that synthesize the organizational benchlearning construct and develop potential interventions. Moreover, there is a need to consider the multidimensionality of the benchlearning construct, since the organizational learning aspect is discussed in the literature in behavioral, cognitive and social terms [22]. Therefore, the purpose of this research is to develop and validate a conceptual model that respects the multidimensional nature of benchlearning.
The paper proceeds as follows. We first discuss the concepts of organizational learning and learning organization by emphasizing the theory and practice of the process of learning. Then we describe the context through which public organizations in Greece get acquainted with benchlearning. We continue with approaching the benchlearning construct, highlighting its different dimensions, and with the help of a focus group, mapping its underlying relationships with learning and benchmarking. Then, the conceptual model is developed and assessed and finally, we discuss the results and implications of the study.

2. Reviewing the Literature

2.1. The Learning Organization and Organizational Learning Approaches

The idea of the learning organization (LO) is relatively new, attracting the attention primarily of practitioners, with Garratt [23] being one of the earliest contributors. The notion, though, was practically established with the work of Senge [24] (p. 3), who defines it as "an organization where people continually expand their capacity to create results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning how to learn together”. Many authors [25,26,27,28,29] evaluated the learning differences between various LOs in an attempt to model a learning organization by assigning characteristics (skills and abilities) that it should possess. Rebelo and Gomes [30] discuss the limited practical importance of such normative attempts, whereas Grieves [31] (p. 470) debates whether searching for a specific agent’s attributes is of any worth, since different individuals share different interests. However, they all agree that declaring an organization a learning organization or not implies that it has either reached or not reached an ideal. Therefore, there is a paradigmatic shift in the literature from the content of the learning organization to the process of organizational learning that could lead organizations to that ideal.
Cyert and March [32] first claimed that an organization could learn independently from its members. Since then, considerable research has been devoted to the study of learning processes and capabilities in an organizational context [33,34,35,36]. The literature regards organizational learning and its outcome, organizational knowledge, as process and content, respectively, encompassing cognitive, behavioral, and social aspects [22,37,38]. These three perspectives explain the complexity of the learning process and content. The behavioral dimension emphasizes the importance of experiencing the consequences of a specific behavior in order to learn [39]. The cognitive perspective describes an entity’s ability to articulate and codify knowledge, to "find and fix errors" by reflecting on its own "thinking, reasoning and memory" [40]. The social approach, in turn, enriches organizational learning by integrating both cognitive and behavioral aspects while interpreting the relativistic (non-individual) character of the learning process [41]. This means that, through the interaction between organizations, there is a potential advancement to collective learning (sharing better practices). Hawkins [42] refers to the complexity of shifting the learning interest from individuals to the organization, and Hedberg [43] describes their synergistic nature by pointing out the fallacy of equating organizational learning with the cumulative learning of organizational members. Therefore, organizations seem to differ in their potential for learning, which is variously considered a complex and dynamic concept, a simple part of organizational processes, or their core element.

2.2. The Common Assessment Framework, Self-Assessment, Organizational Learning and “Benchlearning”

The CAF aims to provide a "common language" for assessment and convergent improvements among European public administrations [44]. By undertaking a self-assessment exercise, an organization can acquire insights into its strengths and weaknesses and set the background for further improvements [45]. One way to proceed with organizational improvements after a self-assessment exercise might be through the process of learning. Balbastre and Luzón [46] studied the links between self-assessment and learning and underlined the positive effects of the former on the latter under specific conditions. Thus, self-assessment quality tools could facilitate organizational change and improvement through organizational learning.
The 2013 update of the CAF model [47] underlines the importance of organizational learning by regarding benchlearning as one of its core purposes. The CAF examines the organizational environment by focusing on nine criteria forming two broader categories: enablers and results. Its basic idea draws on the logic of means-ends relationships, achieving excellence (for people, citizens, society and performance) through a set of facilitators (leadership, people, processes, strategy and planning, partnerships and resources). Usually, an organization’s employees undertake the assessment and get involved in the decision-making process of change through the development, implementation and review of action plans. In this case, employees are exposed to learning "how to improve through sharing knowledge, information and sometimes resources" [14].

2.3. Benchmarking, Organizational Learning and Benchlearning

Literature underlines a rational approach taking place in benchmarking practice, which is used in order to control the activity [48,49], as well as a rather technical (engineering) perspective that overemphasizes measurement [50,51,52,53,54]. According to Karlöf et al. [17], the first step in the benchmarking process is a survey, followed by a comparison between organizations, and then organizations could proceed with understanding the reasons for performance gaps and develop improvements. This "understanding" process step includes the "lessons learnt", which means that the knowledge acquired is distinct from simple data and information comparison. However, not all organizations undertaking benchmarking exercises intend to learn. Especially in the public sector, according to Bowerman et al. [55], benchmarking could be implemented defensively in the sense of avoiding criticisms rather than learning from or with others.
However, benchmarking and benchlearning could be linked to organizational learning activities [15] (p. 83). Benchmarking literature, especially the one focusing on the public sector, provides examples of behavioral, cognitive and social aspects of organizational learning. Early publications embrace the reasons that private sector organizations had to "search for industry’s best practices that lead to superior performance" [16] (p. 10) in order to imitate the successful behavior of others [56]. Therefore, benchmarking was considered a process of identifying strengths, weaknesses and standards through the measurement and comparison of products, services and processes with best performers [16,57]. Under the public sector benchmarking umbrella, this was translated into an efficiency rigor mandating the use of metrics, yardsticks and league tables as a response to government requirements or just for defensive reasons [55]. This approach could imply a behavioral learning thesis of best practice acknowledgment and adoption, where public organizations conform to isomorphic pressures rather than search for improvement and excellence.
This trend soon declined, since no organization is similar to another [58], and the results of benchmarking exercises were highly debatable, especially when the complexities of the public sector were considered [21,59,60]. Therefore, the need to codify the information and knowledge of best performers soon became evident, which actually means putting learning into a more structured framework and managing knowledge so as to adapt best practices to each context. Publications started to shed more light on the cognitive aspects of learning that emerge during a benchmarking process. The American Productivity and Quality Center [61] regarded benchmarking as "the process of continuously comparing and measuring …. to gain information that will help the organization take action to improve its performance" and Boxwell [62] (p. 17) as a goal-setting activity based on "objective, external standards and learning from others". In the public sector, researchers pointed out the identification, sharing and adaptation of "best practices" [38,55,63,64,65] among public organizations, as well as an approach that favors collaboration [65,66] and peer-to-peer learning [67]. Therefore, the acknowledgment of social elements, such as dialogue, knowledge sharing, collaboration and interaction in the field of benchmarking research, is highlighted.
Karlöf and Östblom [56] (p. 182) attempted to emphasize the learning and improvement aspects in the measurement tool of benchmarking and devised the term “benchlearning”. Benchlearning is a more human-centric management approach that corresponds to complex economic conditions [17]. Such conditions, as well as great heterogeneity, characterize the public sector, which needs a solution for the difficulties of managing best practices. Benchlearning is a process of comparative evaluation as well as comparative learning, a tool “designed to develop the ability of companies and individuals to use stored knowledge…” as well as a process to reach the ideal of the learning organization [17] (p. 96), which is meanwhile an ideal for public services [60]. Therefore, benchlearning institutionalizes the theory and practice of the "learning" process through the development of competencies and skills [19] so as to accumulate the experiences of other organizations. This implies that benchlearning incorporates both the content of the ideal model (the best performer/benchlearner) as well as that of the process (benchlearning) towards achieving it.

2.4. The Capability of Organizations for Benchlearning

Like every organizational phenomenon, benchlearning needs some form of conceptualization. The traditional way in the literature to approach the benchlearning construct focuses on the process itself or on the process outcomes, without providing a validated framework of the facilitating factors for benchlearning, or else one that models an organization’s propensity to benchlearn. For example, Johnstad and Berger [68] describe how a benchlearning project was organized, and Karlöf et al. [17] describe it as a combination of business development and organizational learning. Batlle-Montserrat et al. [69] report evidence that benchlearning enabled smart cities to identify good practices, learn from them and improve some of their e-services, and Torbjorn [70] reports that an international benchlearning program between two countries provided new perspectives on competencies and industrial development capabilities. Benchlearning, though, could be studied beyond its observable outcomes, provided the capability of organizations to benchlearn is not neglected [17]. By neglecting the facilitating factors, we limit our understanding of the impact that an organization’s potential could have on the benchlearning process. Therefore, the study of the facilitating factors for benchlearning becomes an important aspect of research and could start by approaching the two major dimensions of this construct: organizational learning and benchmarking.
With reference to organizational learning, Garvin [25] challenges the idea of building a learning organization in the short term and proposes the development of an environment that facilitates learning. Moreover, DiBella et al. [71] stress the importance of preferences, attitudes and values to explore the organizational capability to retrieve, transfer and use knowledge. At the same time, the authors emphasize the impact of an organization’s potential on the learning process and various researchers [25,72,73,74,75] use the term organizational learning capability (OLC) as a concept that reflects all those organizational characteristics, namely culture, structure, processes and practices that either foster or do not hinder learning. Therefore, OLC focuses on an organization’s potential “abilities” to learn in order to become a learning organization.
Among the significant studies concerning the factors of OLC [38,76,77,78], the work of Jerez-Gomez et al. [79] has gained wide acceptance. They critically reviewed the literature on organizational learning and developed a new model that communicates the cognitive, behavioral and social characteristics of organizational learning and comprises the following capabilities:
  • The capability of management to foster a learning culture (managerial commitment—MC)—includes long-term organizational learning and managerial efforts to instill the importance of learning among its members as well as to discard perceptions that delay organizational success.
  • The capability of the organization’s members to influence each other under a “shared vision, mission and practices” (systems perspective—SP)—each member understands the collective goals and his/her position and acts respectively in the organizational network of implicit and explicit relationships, enabling the transition from individual to collective learning.
  • The capability of the organization to promote “generative learning” (openness and experimentation—OE)—addresses an interactive experience with the environment and encompasses acceptance of new values, perceptions, attitudes and practices (openness) as well as risk tolerance and creativity.
  • The capability of the organization to create enabling processes and structures of “knowledge transfer and integration” (KTI)—this means that by incorporating social elements (communication, dialogue and debate) and technological facilitators (information systems), the organizational members interact in order to develop "organizational memory" [36] and collective knowledge.
With respect to benchmarking, the CAF tool that is widely used in the European public sector is a framework that assesses important organizational aspects in order to diagnose the potential of an organization to compare its performance with others [47]. Moreover, the Benchmarking Capability Tool, developed by the Infrastructure and Projects Authority [80] in the UK, provides a useful way for organizations to identify and score their capability and maturity for benchmarking by assessing benchmarking strategy, people-culture-process, data and systems, and insights and analysis. Various researchers underlined the importance of an organizational environment that facilitates benchmarking exercises [16,17,56,57], and the term benchmarking capability is implied in various studies [55,81,82,83] showing the potential of an organization for benchmarking. Kurnia et al. [84] (p. 6) approach benchmarking capability in the context of sustainability and define it as “the ability of an organisation to compare the sustainability performance across various units (internal) and supply chain members (external)”.
Following the literature review, the aspects of benchmarking capability that are of great importance when undertaking benchmarking activities are: (a) the capability of the organization’s members to pursue the satisfaction of (internal and external) customers’ needs and requirements [16,56,57], (b) the capability of the organization to foster continuous improvement [16,55,56,57,61,62,64,65], (c) the capability of the organization to promote internal best practice comparison [16,57] and (d) the capability of the organization to promote best practice comparison with the external environment [16,55,56,57,61,62,64,65]. Therefore, the capability of the organization for the benchmarking process could be defined as the capability to enable the evaluation and management of best practices in order to continuously improve its performance while satisfying customers’/citizens’ needs.

2.5. Hypotheses Development

Drawing on the abovementioned analysis, the two factors identified for the benchlearning construct are organizational learning capability and benchmarking capability. Since there is no previous study, a focus group was used to relate the factors of the construct. The focus group comprised five experts, who were selected because of their experience in applying the CAF and the benchlearning process. The experts were contacted by email and received the interview protocol and an informed consent form, which assured their confidentiality and anonymity. The focus group interview was conducted online on 10 December 2019, with the contribution of a facilitator and an observer from the research team. The focus group method was employed at this stage because, according to Acocella [85], it can provide a deeper understanding of a new topic, such as benchlearning capability. Moreover, it facilitated the understanding, interpretation, and analysis of members’ perspectives more quickly, easily, and cost-effectively, as Morgan [86] and Wang and Wiesemes [87] suggest.
Consequently, the research hypotheses developed are the following:
  • H1: Benchmarking Capability (BMKC) is positively related to Benchlearning Capability (BLNC);
  • H2: Organizational Learning Capability (OLC) is positively related to BLNC;
    o H2a: MC is positively related to OLC;
    o H2b: OE is positively related to OLC;
    o H2c: SP is positively related to OLC;
    o H2d: KTI is positively related to OLC.

3. Methodology

This study seeks to measure the benchlearning capability among public organizations in Greece that have undertaken a CAF exercise. CAF self-assessed organizations are chosen not out of convenience but because, in Greece, public organizations are familiar with benchlearning through the use of CAF. To address this purpose, a questionnaire was developed, employing the comparison and learning mechanisms that emerged from the literature review. Figure 1 below describes the research methods used and the analysis implemented for this study.

3.1. Developing the Instrument

Drawing on the literature review and the focus group findings, the benchlearning capability construct emerges from benchmarking and learning. Pre-eminent items of benchmarking propensity, according to the literature [16,17,56,57], are the following: (a) a customer satisfaction perspective in all parts of the organization, (b) a continuous improvement-led organizational culture, (c) inter-organizational best practice knowledge-sharing, and (d) continuous searching for new processes and practices outside the organization and learning from others. Moreover, organizational learning capability comprises the capabilities relevant to management commitment, systems perspective, openness and experimentation, and knowledge transfer and integration [79]. Like OLC, Benchlearning Capability (BLNC) describes organizational characteristics that foster benchlearning. The final “benchlearning capability” construct consists of 20 questions and assesses management commitment (questions 1–5), systems perspective (questions 6–8), openness and experimentation (questions 9–12), knowledge transfer and integration (questions 13–16) and benchmarking capability (questions 17–20). The questions concerning benchmarking capability are the following (Table 1):
The instrument was applied using a 5-point Likert scale ranging from 1 to 5, where 1 represents "strongly disagree" and 5 "strongly agree". The instrument was further pilot-tested through interviews with 5 public sector employees to ensure that its Greek translation was accurate, comprehensible, and adequately reflected the aspects measured.

3.2. Data Collection

Since 2003, the Department of Quality and Efficiency of the Hellenic Ministry of Interior, Public Administration and Decentralization has institutionalized the voluntary implementation of the CAF model among public organizations. At the time of the research, the population of CAF users in the Greek public administration amounted to 74 public organizations (users) and 502 individuals (members of CAF implementation teams). A total of 163 respondents (2–3 individuals from each organization) replied from January 2021 to May 2021, giving a response rate of 32.47%.
With reference to the profile of CAF users (first part of the questionnaire), which includes 10.81% Ministries, 9.46% Decentralized Administrations, 9.46% Regions, 47.30% Municipalities, 6.76% Independent Authorities and 16.22% Hospitals, we observed that: (a) 58.3% declared a partial implementation of CAF, whereas 41.7% implemented CAF in all parts of the organization; (b) 70.8% used the simple marking method compared to 29.2% that used the analytic one; (c) most of the respondents (67%) employ 10 to 50 individuals; (d) 71% of them applied CAF only once; (e) for most of them, a period of four years or more had passed since their last implementation; and (f) most of them have also applied the “management by objectives” improvement tool. Moreover, the role of the respondents in the CAF implementation teams is as follows: 42% are simple team members, 29% are team facilitators, and 29% are team leaders.

3.3. Data Analysis

The first step is the examination of content and face validity. To claim content validity, every newly generated instrument must follow an appropriate development procedure and be representative of the concept in question. The criteria for evaluating content validity, though, are essentially intuitive and lack an objective quantification [88,89]. In our case, the sufficiency of content validity is grounded in previous theoretical and empirical research [17,19,57,90,91,92], as well as in discussions with experts in the focus group and during the pre-test procedure of the BLNC model.
The second step is the estimation of construct validity. For this reason, we used the “SmartPLS 3.2” software for the assessment of the measurement and structural models by means of partial least squares (PLS) structural equation modeling (SEM) [93]. PLS-SEM has gained increasing popularity over the years in the social sciences [94] and recently in OLC research [95,96]. We selected PLS-SEM because it has several advantages over the traditional covariance-based SEM techniques. For example, it makes no distributional assumptions of normality, and it can be used to analyze data from small samples. In addition, PLS-SEM accommodates both formative and reflective constructs as well as hierarchical component models (HCMs). HCMs give us a chance to reduce the number of relationships in the structural model, making the PLS path model more parsimonious and easier to grasp [94]. Drawing on Hair et al. [97], our conceptual model (path model) connects variables and constructs based on theory and logic. BLNC was operationalized as a “reflective-formative” higher-order component. The hierarchical component measurement model was created by using the “repeated indicators approach” combined with the “two-step approach” [94,98]. Specifically, BLNC consisted of MC, SP, OE, KTI, and BMKC. The reflective-formative HCM and the proposed model are depicted in Figure 2. For the reflective indicators, convergent validity and reliability were estimated using AVE, Cronbach’s α and composite reliability (CR) [94]. For discriminant validity, we used the "Fornell-Larcker" and the "Heterotrait-Monotrait ratio” (HTMT < 0.85 or 0.90) criteria.
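Since the convergent validity statistics reported later in Table 2 are fully determined by the standardized loadings, they can be cross-checked outside SmartPLS. The following minimal Python sketch applies the standard PLS-SEM definitions of composite reliability and AVE [94]; it is an illustration, not SmartPLS internals, and the example loadings are the BMKC values from Table 2.

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each standardized indicator's error variance is 1 - loading^2.
    lam = np.asarray(loadings)
    err = 1.0 - lam**2
    return lam.sum()**2 / (lam.sum()**2 + err.sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    lam = np.asarray(loadings)
    return float(np.mean(lam**2))

bmkc = [0.896, 0.927, 0.934, 0.833]         # BMK1-BMK4 loadings (Table 2)
cr = composite_reliability(bmkc)            # ~0.943, matching Table 2
ave = average_variance_extracted(bmkc)      # ~0.807, matching Table 2
print(f"CR = {cr:.3f}, AVE = {ave:.3f}, sqrt(AVE) = {ave**0.5:.3f}")
# sqrt(AVE) ~ 0.898 is the diagonal entry compared against the inter-construct
# correlations in the Fornell-Larcker test (Table 3).
```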
For the formative indicator (BLNC), we examined the "multicollinearity" by the "Variance Inflation Factors" (VIF) [99] (VIF < 3.33 or 5.0).
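The collinearity check can be sketched as follows: each VIF equals 1/(1 − R²), where R² comes from regressing one indicator on the remaining indicators. The data matrix in this minimal Python sketch is a simulated placeholder, not the study’s data.

```python
import numpy as np

def vif(X):
    # VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    # of X on all other columns (with an intercept).
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
common = rng.normal(size=(163, 1))               # sample size of this study
X = common + 0.8 * rng.normal(size=(163, 4))     # four correlated indicators
print(vif(X).round(3))                           # values < 5 (or 3.33) pass
```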
Furthermore, we used the R-square of the dependent variable [100] and the Stone-Geisser Q-square test of predictive relevance [94] to evaluate the quality of the structural model. For the Stone-Geisser Q-square test, two separate analyses with omission distances of 7 and 25 were undertaken (blindfolding technique in SmartPLS) to test the stability of the findings (Q-squares > 0). In line with [101], we chose not to include the goodness-of-fit (GoF) index as a criterion for PLS-SEM, because it is believed to be unable to separate valid models from invalid ones, and it is not applicable to formative measurement models [94,102].
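For reference, the blindfolding-based Stone-Geisser statistic is commonly formalized as follows (our paraphrase of the standard PLS-SEM definition [94], not a formula reproduced from SmartPLS output):

\[ Q^2 = 1 - \frac{\sum_{D} \mathrm{SSE}_D}{\sum_{D} \mathrm{SSO}_D} \]

where D runs over the deletion blocks generated by the chosen omission distance, SSE_D is the sum of squared prediction errors for the omitted data points, and SSO_D is the sum of squares of the omitted original values; Q² > 0 indicates predictive relevance for the endogenous construct.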
Finally, the bootstrapping procedure was applied (500 randomly drawn samples) for the hypotheses testing.
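The bootstrap logic can be illustrated in a few lines: resample cases with replacement, re-estimate a path coefficient each time, and form t = estimate / SD of the bootstrap distribution, the |O/STDEV| column reported later in Table 6. This minimal Python sketch uses a simple OLS slope as a stand-in for the full PLS estimation performed by SmartPLS, applied to hypothetical scores.

```python
import numpy as np

def bootstrap_t(x, y, n_boot=500, seed=1):      # 500 draws, as in this study
    rng = np.random.default_rng(seed)
    n = len(x)
    original = np.polyfit(x, y, 1)[0]           # original-sample slope (O)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)             # resample cases with replacement
        draws.append(np.polyfit(x[idx], y[idx], 1)[0])
    stdev = np.std(draws, ddof=1)               # STDEV of bootstrap estimates
    return original, stdev, abs(original / stdev)

rng = np.random.default_rng(0)
x = rng.normal(size=163)                        # hypothetical construct scores
y = 0.3 * x + rng.normal(scale=0.5, size=163)
print(bootstrap_t(x, y))                        # (O, STDEV, |O/STDEV|)
```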

4. Results

4.1. The Proposed Model with Loadings

Figure 2 below depicts the proposed model with the reflective and formative constructs and loadings. As observed, all loadings are over the threshold value (0.50).

4.2. Construct Validity and Reliability of the Measurement Model

Table 2 shows that all factor loadings, AVE and CR scores were at the acceptable level (factor loadings > 0.5, Cronbach α > 0.7, AVE > 0.5, CR > 0.7).
Moreover, discriminant validity was achieved, as Fornell-Larcker (Table 3) and HTMT criteria (Table 4) were satisfied.
Thus, we may conclude that construct validity was achieved.
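As a cross-check on Table 4, the HTMT ratio can be recomputed directly from an item correlation matrix: the mean correlation between items of two different constructs, divided by the geometric mean of the mean within-construct item correlations [102]. The sketch below uses a hypothetical four-item correlation matrix; the index lists mark which items belong to each construct.

```python
import numpy as np

def htmt(R, idx_a, idx_b):
    # Mean heterotrait-heteromethod correlation over the geometric mean of
    # the mean monotrait-heteromethod (within-construct) correlations.
    R = np.asarray(R)
    hetero = R[np.ix_(idx_a, idx_b)].mean()
    def mono(idx):
        block = R[np.ix_(idx, idx)]
        off_diag = block[~np.eye(len(idx), dtype=bool)]
        return off_diag.mean()
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# Toy example: items 0-1 measure construct A, items 2-3 measure construct B.
R = np.array([[1.00, 0.80, 0.45, 0.40],
              [0.80, 1.00, 0.42, 0.38],
              [0.45, 0.42, 1.00, 0.75],
              [0.40, 0.38, 0.75, 1.00]])
print(round(htmt(R, [0, 1], [2, 3]), 3))        # ~0.533, below the 0.85 cut-off
```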

4.3. Structural Model Assessment

The result of the VIF estimation is acceptable since VIF values are below the threshold value of 5 (Table 5).

4.4. Quality of the Structural Model Assessment

In our case, the R2 value for the BLNC construct was adequate. Regarding the separate analyses with omission distances of 7 and 25 (blindfolding technique in SmartPLS), the values were stable for both omission distances, and all Q2 values were greater than zero (0.983 and 0.998, respectively). Thus, there is evidence that the model was stable and that the predictive relevance requirement was satisfied.

4.5. Structural Model Analysis

For the structural model analysis, the bootstrapping procedure was applied (500 randomly drawn samples) (Table 6 and Figure 3). Table 6 shows that BMKC and OLC are positively and significantly related to BLNC, while KTI, MC, OE, and SP are positively and significantly related to OLC.

5. Discussion and Conclusions

The study of benchmarking in the public sector has become an important field of research over the last few years, providing useful insights into how public organizations could assess their performance against the best performers in order to save money, time and resources and learn to make leapfrog and sustainable performance improvements while providing quality services. However, the link between organizational learning, benchmarking, and benchlearning comes up against the lack of research towards validating a conceptual framework of the facilitating factors for benchlearning, or else one that models an organization’s propensity to benchlearn. Benchlearning is a new area of research, and the benchlearning capability construct is not yet well-defined. Prior to this work, research considered the benchlearning factors separately, and little research has been developed on the topic. Therefore, this study represents an initial assessment and validation of the benchlearning capability model in public organizations.
We chose the CAF self-assessed public organizations since, in Greece, they are the only group of organizations familiar with benchlearning. The CAF model provides a structured method for diagnosing strengths and weaknesses that become an input in benchlearning exercises. The benchlearning capability addresses the capability of organizations to make improvements that mirror organizational needs and fulfill their potential by uncovering the importance of learning and comparing. The benchlearning capability construct incorporates various approaches encompassing the cognitive, behavioral and social characteristics of organizational learning as well as the comparability and relativity elements of benchmarking. Therefore, in order to assess the benchlearning capability model, we used the construct of Jerez-Gomez et al. [79], which approaches the OLC, and enriched it with benchmarking elements.
In order to validate the model, we examined the content and face validity, drawing on previous theoretical and empirical research, the focus group findings and on the pre-testing results of the pilot questionnaire. Moreover, we studied the construct’s validity using PLS-SEM due to its increased popularity and advantages compared to other traditional SEM techniques. Additionally, discriminant validity was achieved since the Fornell-Larcker and HTMT criteria were satisfied. As far as the quality of the structural model is concerned, we used the R-square and the Stone-Geisser test to evaluate it.
Regarding the research implications of this study, this research attempts to fill the gap in the literature by conceptualizing and validating a BLNC construct under the public sector context, which will be useful for conducting a further investigation. Since the process of conceptualization and validation is at the epicenter of theory building [103,104], this research contributes towards the theoretical advancement of organizational learning in general [103]. Additionally, the research results are useful for public sector executives providing a BLNC tool that can: (i) diagnose the organization’s strengths and weaknesses and (ii) facilitate the formulation of an appropriate strategy and action plans for sustainable performance improvement.
There are, though, certain limitations associated with the empirical research. First of all, the questionnaire captured employees’ perceptions. This leads to results that portray employees’ views rather than those of the organizations involved. We minimized this drawback by pre-testing the questionnaire, but personal perceptions still remain a concern in both the reliability and validity analyses. Moreover, the items used for the scale development were adapted to our case, which means that we could not benefit from the validation made by Jerez-Gomez et al. [79]. Additionally, the study was undertaken in a specific context, which, besides minimizing external influences, limits its external validity. Expanding the borders of the study outside the Greek public domain would allow for more generalizable results and conclusions. In addition, a broader sample of participants could give a broader view of the benchlearning capability levels in the Greek public sector. Therefore, further research could be done in other contexts and with larger sample sizes in order to establish the general validity and reliability of the benchlearning scale.

Author Contributions

Conceptualization, E.K.; methodology, S.X.; software, E.K. and S.X.; validation, S.X.; formal analysis, E.K. and S.X.; investigation, E.K.; resources, E.K.; data curation, E.K.; writing—original draft preparation, E.K.; writing—review and editing, G.T., K.G. and E.K.; visualization, E.K.; supervision, G.T. and K.G.; project administration, G.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank the reviewers for their useful comments and suggestions.

Conflicts of Interest

To the best of our knowledge, the named authors have no conflict of interest, financial or otherwise.

References

  1. Kordab, M.; Raudeliūnienė, J.; Meidutė-Kavaliauskienė, I. Mediating role of knowledge management in the relationship between organizational learning and sustainable organizational performance. Sustainability 2020, 12, 10061. [Google Scholar] [CrossRef]
  2. Walsh, K. Quality through markets: The new public service management. In Making Quality Critical: New Perspectives on Organizational Change; Wilkinson, A., Willmott, H., Eds.; Routledge: London, UK, 1995; pp. 82–104. [Google Scholar]
  3. Freeman-Bell, G.; Grover, R. The use of quality management in local authorities. Local Gov. Stud. 1994, 20, 554–569. [Google Scholar] [CrossRef]
  4. Randall, L.; Senior, M. A model for achieving quality in hospital hotel services. Int. J. Contemp. Hosp. Manag. 1994, 6, 68–74. [Google Scholar] [CrossRef]
  5. Sinha, M. Gaining perspectives: The future of TQM in public sectors. TQM Mag. 1999, 11, 414–419. [Google Scholar] [CrossRef]
  6. Boyne, G.A.; Walker, R.M. Total quality management and performance: An evaluation of the evidence and lessons for research on public organizations. Public Perform. Manag. Rev. 2002, 26, 111–131. [Google Scholar] [CrossRef]
  7. Redman, T.; Mathews, B.; Wilkinson, A.; Snape, E. Quality management in services: Is the public sector keeping pace? Int. J. Public Sect. Manag. 1995, 8, 21–34. [Google Scholar] [CrossRef]
  8. Barouch, G.; Kleinhans, S. Learning from criticisms of quality management. Int. J. Qual. Serv. Sci. 2015, 7, 201–216. [Google Scholar] [CrossRef]
  9. Walsh, K. Quality and public services. Public Adm. 1991, 69, 503–514. [Google Scholar] [CrossRef]
  10. Swiss, J. Adapting total quality management (TQM) to government. Public Adm. Rev. 1992, 52, 356–362. [Google Scholar] [CrossRef]
  11. Radin, B.A.; Coffee, J.N. A critique of TQM: Problems of implementation in the public sector. Public Adm. Q. 1993, 17, 42–54. [Google Scholar]
  12. Dewhurst, F.; Martínez-Lorente, A.R.; Dale, B.G. TQM in public organisations: An examination of the issues. Manag. Serv. Qual. Int. J. 1999, 9, 265–273. [Google Scholar] [CrossRef]
  13. Tarí, J.J. Self-assessment exercises: A comparison between a private sector organisation and higher education institutions. Int. J. Prod. Econ. 2008, 114, 105–118. [Google Scholar] [CrossRef]
  14. CAF Resource Center. The Common Assessment Framework (CAF). Improving an Organisation through Self-Assessment. 2006. Available online: https://www.eupan.eu/wp-content/uploads/2019/05/EUPAN_CAF_2006_The_Common_Assessment_Framework_CAF_2006.pdf (accessed on 1 October 2019).
  15. CAF Resource Center. CAF 2020—The European Model for Improving Public Organisations through Self-Assessment. 2020. Available online: https://www.eupan.eu/wp-content/uploads/2019/11/20191118-CAF-2020-FINAL.pdf (accessed on 20 December 2019).
  16. Camp, R. Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance; Quality Press: Milwaukee, WI, USA, 1989. [Google Scholar]
  17. Karlöf, B.; Lundgren, K.; Froment, M.E. Benchlearning: Good Examples as a Lever for Development; John Wiley & Sons: Chichester, UK, 2001. [Google Scholar]
  18. Ammons, D.; Roenigk, D. Benchmarking and interorganizational learning in local government. J. Public Adm. Res. Theory 2014, 25, 309–335. [Google Scholar] [CrossRef]
  19. Kyrö, P. Revising the concept and forms of benchmarking. Benchmarking Int. J. 2003, 10, 210–225. [Google Scholar] [CrossRef]
  20. Ellis, J. All inclusive benchmarking. J. Nurs. Manag. 2006, 14, 377–383. [Google Scholar] [CrossRef] [PubMed]
  21. Wolfram Cox, J.; Mann, L.; Samson, D. Benchmarking as a mixed metaphor: Disentangling assumptions of competition and collaboration. J. Manag. Stud. 1997, 34, 285–314. [Google Scholar] [CrossRef]
  22. Easterby-Smith, M.; Araujo, L.; Burgoyne, J. Organizational Learning and the Learning Organization: Developments in Theory and Practice; Sage: London, UK, 2011. [Google Scholar]
  23. Garratt, B.; Marsick, V.J.; Watkins, K.E.; Dixon, N.M.; Smith, P.A. The Learning Organization: And the Need for Directors Who Think; Gower: Aldershot, UK, 1987. [Google Scholar]
  24. Senge, P. The Fifth Discipline: The Art & Practice of the Learning Organization; Doubleday: New York, NY, USA, 2000. [Google Scholar]
  25. Garvin, D.A. Building a learning organization. Harv. Bus. Rev. 1993, 71, 78–91. [Google Scholar]
  26. Nevis, E.C.; DiBella, A.J.; Gould, J.M. Understanding organizations as learning systems. Sloan Manag. Rev. 1997, 36, 73–85. [Google Scholar]
  27. Pedler, M.; Burgoyne, J.G.; Boydell, T. The Learning Company: A Strategy for Sustainable Development; McGraw-Hill: London, UK, 1991; p. 243. [Google Scholar]
  28. McGill, M.E.; Slocum, J.W., Jr. Unlearning the organization. Organ. Dyn. 1993, 22, 67–79. [Google Scholar] [CrossRef]
  29. Nonaka, I. A dynamic theory of organizational knowledge creation. Organ. Sci. 1994, 5, 14–37. [Google Scholar] [CrossRef] [Green Version]
  30. Rebelo, T.M.; Gomes, A.D. Organizational learning and the learning organization: Reviewing evolution for prospecting the future. Learn. Organ. 2008, 15, 294–308. [Google Scholar] [CrossRef]
  31. Grieves, J. Why we should abandon the idea of the learning organization. Learn. Organ. 2008, 15, 463–473. [Google Scholar] [CrossRef]
  32. Cyert, R.M.; March, J.G. A Behavioral Theory of the Firm; Prentice-Hall: Englewood Cliffs, NJ, USA, 1963. [Google Scholar]
  33. Argyris, C.; Schön, D.A. Organizational Learning: A Theory of Action Perspective; Addison-Welsey: Boston, MA, USA, 1978. [Google Scholar]
  34. Shrivastava, P. A typology of organizational learning systems. J. Manag. Stud. 1983, 20, 7–28. [Google Scholar] [CrossRef]
  35. Fiol, C.M.; Lyles, M.A. Organizational learning. Acad. Manag. Rev. 1985, 10, 803–813. [Google Scholar] [CrossRef]
  36. Huber, G.P. Organizational learning: The contributing processes and the literatures. Organ. Sci. 1991, 2, 88–115. [Google Scholar] [CrossRef]
  37. Rashman, L.; Withers, E.; Hartley, J. Organizational learning and knowledge in public service organizations: A systematic review of the literature. Int. J. Manag. Rev. 2009, 11, 463–494. [Google Scholar] [CrossRef]
  38. Barette, J.; Lemyre, L.; Corneil, W.; Beauregard, N. Organizational learning facilitators in the Canadian public sector. Int. J. Public Adm. 2012, 35, 137–149. [Google Scholar] [CrossRef]
  39. Skinner, B.F. The Behavior of Organisms: An Experimental Analysis; D. Appleton-Century Company: New York, NY, USA, 1938. [Google Scholar]
  40. DeFillippi, R.; Ornstein, S. Psychological perspectives underlying theories of organizational learning. In The Blackwell Handbook of Organizational Learning and Knowledge Management; Easterby-Smith, M., Lyles, M.A., Eds.; Blackwell Publishing Ltd.: Malden, MA, USA, 2003; pp. 19–35. [Google Scholar]
  41. Elkjaer, B. In search of a social learning theory. In Organizational Learning and the Learning Organization: Developments in Theory and Practice; Easterby-Smith, M., Araujo, L., Burgoyne, J., Eds.; Sage: London, UK, 1999; pp. 75–91. [Google Scholar]
  42. Hawkins, P. Organizational learning: Taking stock and facing the challenge. Manag. Learn. 1994, 25, 71–82. [Google Scholar] [CrossRef]
  43. Hedberg, B. How organizations learn and unlearn. In Handbook of Organizational Design; Nystrom, P.C., Starbuck, W.H., Eds.; Oxford University Press: New York, NY, USA, 1981; pp. 3–27. [Google Scholar]
  44. Staes, P.; Thijs, N. Quality Management on the European Agenda; European Institute of Public Administration: Luxembourg, 2005; pp. 33–41. [Google Scholar]
  45. Dale, B.; Zairi, M.; Van der Wiele, A.; Williams, A. Quality is dead in Europe–long live excellence-true or false? Meas. Bus. Excell. 2000, 4, 4–10. [Google Scholar] [CrossRef]
  46. Balbastre, F.; Luzón, M.M. Self-assessment application and learning in organizations: A special reference to the ontological dimension. Total Qual. Manag. Bus. Excell. 2003, 14, 367–388. [Google Scholar] [CrossRef]
  47. CAF Resource Center. CAF 2013—Improving Public Organisations through Self-Assessment. 2013. Available online: https://ec.europa.eu/eurostat/ramon/statmanuals/files/CAF_2013.pdf (accessed on 1 October 2019).
  48. Bruder, K.; Gray, E. Public-sector benchmarking: A practical approach. Public Manag. 1994, 76, S9–S14. [Google Scholar]
  49. Bullivant, J. Benchmarking in the UK national health service. Int. J. Health Care Qual. Assur. 1996, 9, 9–14. [Google Scholar] [CrossRef]
  50. Howat, G.; Murray, D.; Crilley, G. Reducing measurement overload: Rationalizing performance measures for public aquatic centres in Australia. Manag. Leis. 2005, 10, 128–142. [Google Scholar] [CrossRef]
  51. Ammons, D. Municipal Benchmarks: Assessing Local Performance and Establishing Community Standards; Routledge: London, UK, 2012. [Google Scholar]
  52. Mugion, R.G.; Musella, F. Customer satisfaction and statistical techniques for the implementation of benchmarking in the public sector. Total Qual. Manag. Bus. Excell. 2013, 24, 619–640. [Google Scholar] [CrossRef]
  53. Huijben, M.; Geurtsen, A.; van Helden, J. Managing overhead in public sector organizations through benchmarking. Public Money Manag. 2014, 34, 27–34. [Google Scholar] [CrossRef]
  54. Rendon, R. Benchmarking contract management process maturity: A case study of the US Navy. Benchmarking Int. J. 2015, 22, 1481–1508. [Google Scholar] [CrossRef]
  55. Bowerman, M.; Francis, G.; Ball, A.; Fry, J. The evolution of benchmarking in UK local authorities. Benchmarking Int. J. 2002, 9, 424–449. [Google Scholar] [CrossRef]
  56. Karlöf, B.; Östblom, S. Benchmarking: A Signpost to Excellence in Quality and Productivity; John Wiley & Sons Inc.: Chichester, UK, 1993. [Google Scholar]
  57. Watson, G.H. Strategic Benchmarking: How to Rate Your Company’s Performance against The World’s Best; John Wiley & Sons Inc.: New York, NY, USA, 1993. [Google Scholar]
  58. Deming, E. Out of the Crisis: Quality, Productivity and Competitive Position; Massachusetts Institute of Technology: Cambridge, MA, USA, 1986. [Google Scholar]
  59. Fernandez, P.; McCarthy, I.P.; Rakotobe-Joel, T. An evolutionary approach to benchmarking. Benchmarking Int. J. 2001, 8, 281–305. [Google Scholar] [CrossRef]
  60. Auluck, R. Benchmarking: A tool for facilitating organizational learning? Public Adm. Dev. Int. J. Manag. Res. Pract. 2002, 22, 109–122. [Google Scholar] [CrossRef]
  61. American Productivity and Quality Center. What Is Best Practice? Available online: http://www.apqc.org (accessed on 30 March 2022).
  62. Boxwell, R. Benchmarking for Competitive Advantage; McGraw Hill: New York, NY, USA, 1994. [Google Scholar]
  63. Osborne, D.; Gaebler, T. Reinventing Government: How the Entrepreneurial Spirit is Transforming Government; Addison-Wesley: Boston, MA, USA, 1992. [Google Scholar]
  64. Kouzmin, A.; Löffler, E.; Klages, H.; Korac-Kakabadse, N. Benchmarking and performance measurement in public sectors: Towards learning for agency effectiveness. Int. J. Public Sect. Manag. 1999, 12, 121–144. [Google Scholar] [CrossRef]
  65. Braadbaart, O. Collaborative benchmarking, transparency and performance: Evidence from The Netherlands water supply industry. Benchmarking Int. J. 2007, 14, 677–692. [Google Scholar] [CrossRef]
  66. Buckmaster, N. Benchmarking as a learning tool in voluntary non-profit organizations: An exploratory study. Public Manag. Int. J. Res. Theory 1999, 1, 603–616. [Google Scholar] [CrossRef]
  67. Askim, J.; Johnsen, Å.; Christophersen, K.-A. Factors behind organizational learning from benchmarking: Experiences from Norwegian municipal benchmarking networks. J. Public Adm. Res. Theory 2008, 18, 297–320. [Google Scholar] [CrossRef]
  68. Johnstad, T.; Berger, S. Nordic Benchlearning: A Community of Practice between Six Clusters. In Proceedings of the Conference on Regional Development and Innovation Processes, Porvoo, Finland, 5–7 March 2008. [Google Scholar]
  69. Batlle-Montserrat, J.; Blat, J.; Abadal, E. Local e-government benchlearning: Impact analysis and applicability to smart cities benchmarking. Inf. Polity 2016, 21, 43–59. [Google Scholar] [CrossRef]
  70. Torbjorn, S. Regional and institutional innovation in a peripheral economy: An analysis of a pioneer industrial endeavour and benchlearning between Norway and Canada. In Proceedings of the Regional Development and Innovation Processes, Porvoo, Finland, 5–7 March 2008. [Google Scholar]
  71. DiBella, A.J.; Nevis, E.C.; Gould, J.M. Understanding organizational learning capability. J. Manag. Stud. 1996, 33, 361–379. [Google Scholar] [CrossRef]
  72. Goh, S. Toward a learning organization: The strategic building blocks. SAM Adv. Manag. J. 1998, 63, 15–22. [Google Scholar]
  73. Hult, G.T.M.; Ferrell, O. Global organizational learning capacity in purchasing: Construct and measurement. J. Bus. Res. 1997, 40, 97–111. [Google Scholar] [CrossRef]
  74. Popper, M.; Lipshitz, R. Organizational learning mechanisms and a structural and cultural approach to organizational learning. J. Appl. Behav. Sci. 1998, 34, 161–179. [Google Scholar] [CrossRef]
  75. Goh, S.; Richards, G. Benchmarking the learning capability of organizations. Eur. Manag. J. 1997, 15, 575–583. [Google Scholar] [CrossRef]
  76. Chiva, R.; Alegre, J.; Lapiedra, R. Measuring organisational learning capability among the workforce. Int. J. Manpow. 2007, 28, 224–242. [Google Scholar] [CrossRef]
  77. Camps, J.; Alegre, J.; Torres, F. Towards a methodology to assess organizational learning capability: A study among faculty members. Int. J. Manpow. 2011, 32, 687–703. [Google Scholar] [CrossRef]
  78. Kassim, N.A.; Shoid, M.S.M. Organizational learning capabilities and knowledge performance in Universiti Teknologi MARA (UiTM) Library, Malaysia. World Appl. Sci. J. 2013, 21, 93–97. [Google Scholar]
  79. Jerez-Gomez, P.; Céspedes-Lorente, J.; Valle-Cabrera, R. Organizational learning capability: A proposal of measurement. J. Bus. Res. 2005, 58, 715–725. [Google Scholar] [CrossRef]
  80. Infrastructure and Projects Authority. Benchmarking Capability Tool. Available online: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/888966/6.6408_IPA_Benchmarking_Capability_Tool_v7_web.pdf (accessed on 30 March 2022).
  81. Liu, Y.-D. Implementing and evaluating performance measurement initiative in public leisure facilities: An action research project. Syst. Pract. Action Res. 2009, 22, 15–30. [Google Scholar] [CrossRef]
  82. Hooper, P.; Greenall, A. Exploring the potential for environmental performance benchmarking in the airline sector. Benchmarking Int. J. 2005, 12, 151–165. [Google Scholar] [CrossRef]
  83. Nieva, V.; Sorra, J. Safety culture assessment: A tool for improving patient safety in healthcare organizations. BMJ Qual. Saf. 2003, 12, ii17–ii23. [Google Scholar] [CrossRef]
  84. Kurnia, S.; Rahim, M.M.; Samson, D.; Prakash, S. Sustainable Supply Chain Management Capability Maturity: Framework Development and Initial Evaluation. In Proceedings of the European Conference on Information Systems (ECIS), Tel Aviv, Israel, 9–11 June 2014. [Google Scholar]
  85. Acocella, I. The focus groups in social research: Advantages and disadvantages. Qual. Quant. 2012, 46, 1125–1136. [Google Scholar] [CrossRef]
  86. Morgan, D. Focus groups. Annu. Rev. Sociol. 1996, 22, 129–152. [Google Scholar] [CrossRef]
  87. Wang, R.; Wiesemes, R. Enabling and supporting remote classroom teaching observation: Live video conferencing uses in initial teacher education. Technol. Pedagog. Educ. 2012, 21, 351–360. [Google Scholar] [CrossRef]
  88. Bryman, A. Social Research Methods; Oxford University Press: New York, NY, USA, 2012. [Google Scholar]
  89. Hoskisson, R.E.; Hitt, M.A.; Johnson, R.A.; Moesel, D.D. Construct validity of an objective (entropy) categorical measure of diversification strategy. Strateg. Manag. J. 1993, 14, 215–235. [Google Scholar] [CrossRef]
  90. Freytag, P.; Hollensen, S. The process of benchmarking, benchlearning and benchaction. TQM Mag. 2001, 13, 25–34. [Google Scholar] [CrossRef]
  91. Noordhoek, M. Municipal Benchmarking: Organisational Learning and Network Performance in the Public Sector; Aston University: Birmingham, UK, 2013. [Google Scholar]
  92. Bridge-IT. Deliverable D.2 “Benchlearning Methodology and Data Gathering Template”, Greek Tax Agency Benchlearning and Evaluation Project. Gov3, for the Greek Information Society Observatory, Athens. Available online: https://www.yumpu.com/en/document/view/44486911/benchlearning-methodology-and-data-gathering-template (accessed on 30 March 2022).
  93. Ringle, C.; Wende, S.; Becker, J. SmartPLS 3. Available online: http://www.smartpls.com (accessed on 1 October 2021).
  94. Hair, J.; Hult, T.; Ringle, C.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 2nd ed.; Sage Publications: Thousand Oaks, CA, USA, 2017. [Google Scholar]
  95. Hadi, S.; Baskaran, S. Examining sustainable business performance determinants in Malaysia upstream petroleum industry. J. Clean. Prod. 2021, 294, 126231. [Google Scholar] [CrossRef]
  96. Oyewobi, L.O.; Adedayo, O.F.; Olorunyomi, S.O.; Jimoh, R. Social media adoption and business performance: The mediating role of organizational learning capability (OLC). J. Facil. Manag. 2021, 19, 413–436. [Google Scholar] [CrossRef]
  97. Hair, J.; Matthews, L.; Matthews, R.; Sarstedt, M. PLS-SEM or CB-SEM: Updated guidelines on which method to use. Int. J. Multivar. Data Anal. 2017, 1, 107–123. [Google Scholar] [CrossRef]
  98. Lowry, P.B.; Gaskin, J. Partial least squares (PLS) structural equation modeling (SEM) for building and testing behavioral causal theory: When to choose it and how to use it. IEEE Trans. Prof. Commun. 2014, 57, 123–146. [Google Scholar] [CrossRef]
  99. Cenfetelli, R.; Bassellier, G. Interpretation of formative measurement in information systems research. MIS Q. 2009, 33, 689–707. [Google Scholar] [CrossRef]
  100. Chin, W.W. The partial least squares approach to structural equation modeling. In Modern Methods for Business Research; Marcoulides, A., Ed.; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1998; pp. 295–336. [Google Scholar]
  101. Mihail, D.M.; Kloutsiniotis, P.V. Modeling patient care quality: An empirical high-performance work system approach. Pers. Rev. 2016, 45, 1176–1199. [Google Scholar] [CrossRef]
  102. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef] [Green Version]
  103. Al-Ahbabi, S.; Singh, S.K.; Gaur, S.S.; Balasubramanian, S. A knowledge management framework for enhancing public sector performance. Int. J. Knowl. Manag. Stud. 2017, 8, 329–350. [Google Scholar] [CrossRef]
  104. Venkatraman, N. Strategic orientation of business enterprises: The construct, dimensionality, and measurement. Manag. Sci. 1989, 35, 942–962. [Google Scholar] [CrossRef]
Figure 1. Research methods and analysis.
Figure 2. The proposed model.
Figure 3. The two-step approach model.
Table 1. BMKC questions.

Question 17: There is a culture of continuous searching for new processes and practices and learning from others.
Question 18: The organization favors internal best-practice knowledge sharing.
Question 19: The organizational culture favors continuous improvement.
Question 20: Customer (internal and external) satisfaction is very important in all parts of the organization.
Table 2. Convergent validity and reliability.

        Loadings   Cronbach's Alpha   Composite Reliability   Average Variance Extracted (AVE)
BMK1    0.896
BMK2    0.927
BMK3    0.934
BMK4    0.833
BMK                0.920              0.943                   0.807
KTI1    0.850
KTI2    0.771
KTI3    0.869
KTI4    0.771
KTI                0.836              0.889                   0.667
MC2     0.823
MC3     0.834
MC4     0.937
MC5     0.723
MC                 0.858              0.900                   0.694
OE1     0.735
OE2     0.838
OE3     0.890
Table 3. Fornell-Larcker Criterion.

        BMKC    KTI     MC      OE      SP
BMKC    0.898
KTI     0.813   0.817
MC      0.829   0.821   0.833
OE      0.818   0.773   0.756   0.822
SP      0.816   0.726   0.708   0.713   0.934
Table 4. Heterotrait-Monotrait Ratio (HTMT).

        BMKC    KTI     MC      OE      SP
BMKC
KTI     0.850
MC      0.808   0.804
OE      0.802   0.887   0.871
SP      0.876   0.807   0.788   0.784
Table 5. VIF values.

        VIF
BMK1    2.985
BMK2    4.190
BMK3    4.349
BMK4    2.226
KTI1    2.590
KTI2    2.432
KTI3    2.747
KTI4    2.370
MC2     2.718
MC3     1.993
MC4     4.265
MC5     1.634
OE1     1.789
OE2     2.510
OE3     2.743
OE4     1.837
SP1     4.341
SP2     4.650
SP3     2.478
Table 6. Hypotheses testing (bootstrapping results).

              Original Sample (O)   Sample Mean (M)   Standard Deviation (STDEV)   T Statistics (|O/STDEV|)   p Values
BMKC -> BLNC  0.272                 0.273             0.009                        28.917                     0.000
KTI -> OLC    0.284                 0.284             0.012                        24.673                     0.000
MC -> OLC     0.292                 0.293             0.011                        27.254                     0.000
OE -> OLC     0.274                 0.274             0.010                        27.626                     0.000
OLC -> BLNC   0.740                 0.740             0.009                        84.452                     0.000
SP -> OLC     0.262                 0.261             0.010                        26.892                     0.000
