Conceptualizing and Validating a Model for Benchlearning Capability: Results from the Greek Public Sector

Abstract: This paper reports on the development and assessment of a conceptual model for benchlearning capability, which facilitates sustainable performance improvement. Following an in-depth literature review, two main dimensions of benchlearning capability were identified. A focus group approach was used to establish the connection between these dimensions and the main construct (benchlearning capability). A questionnaire was designed and administered to 502 individuals from 74 organizations that used the Common Assessment Framework, and a total of 163 respondents replied. For the structural model assessment, the PLS-SEM technique was employed. The literature reveals that benchlearning encompasses both comparative evaluation and organizational learning mechanisms. Moreover, the focus group concluded that Organizational Learning Capability (OLC) and Benchmarking Capability (BMKC) are positively related to Benchlearning Capability (BLNC). The quantitative analysis showed that the factors OLC and BMKC are positively and significantly related to BLNC. This paper is the first attempt to approach the benchlearning capability construct and to validate its model. It is also a first attempt towards providing empirical evidence that could help public managers understand the BLNC concept and formulate the appropriate strategy for improving benchlearning capability and thereby achieving sustainable performance in their organizations.


Introduction
Public organizations that are knowledge-based and knowledge-intensive explore efficient management methods and sustainable development tools to perform successfully in a dynamic organizational environment [1]. The introduction of quality in the public sector came as a response to the problems of inefficiency, wastefulness, and citizen-detached services that characterized public organizations [2], as well as to increasing public pressure [3,4] and the requirements of the globalized environment [5]. Excellence models, customer satisfaction measurement and/or self-assessment tools that were used in the private sector, as well as quality management concepts and principles, were regarded as appealing for adoption in the public sector in order to help resolve its malfunctions. Their implementation, though, brought inconsistent findings and therefore raised concern about their alleged benefits in the public sector [6][7][8]. As a result, criticisms calling for a more critical adaptation of these tools to the public sector context increased [9][10][11], possibly due to the differences between the public and private sectors [12,13].
Addressing this call, the European Institute of Public Administration developed a total quality management (TQM) tool for the public sector, known as the Common Assessment Framework (CAF), which draws on the European Foundation for Quality Management (EFQM) model [14]. In this case, the use of CAF by public organizations could be claimed to transcend mere imitation of private sector tools, since it is a tool adapted for the public sector. The CAF employs a structured framework for self-assessing the ability of public organizations to embrace TQM and undertake benchlearning activities in order to achieve leapfrog improvements in quality and performance [15]. Specifically, the latest version, CAF 2020, focuses on digitization, agility, sustainability, innovation, collaboration and diversity. Additionally, organizational learning capability is an important element for achieving sustainable organizational performance in a rapidly changing organizational environment [1]. Thus, the CAF could also be regarded as an organizational learning tool that provides a structured and systematic learning framework through which an organization introspects on its functions, learns from the experiences of other public entities, and prepares action plans for sustainable performance improvement.
Benchlearning is a further development of the already established business term benchmarking, which means ". . . the search for industry best practices that lead to superior performance" [16] (p. 12). Benchlearning refers to the sharing and comparing of experiences and knowledge inside and/or outside organizations through a process of collaborative learning for the purpose of identifying and establishing best practices [17]. It is a methodology that includes the social aspects of organizational learning, since organizations or parts of them are called to cooperate with others after undertaking self-assessment activities of their performance.
Up until now, there has been a gap in the analysis of this theoretical construct, limiting the ability of public managers to exploit its full potential. Extant research in the public sector approaches the concept of benchlearning from different viewpoints, drawing mainly on existing benchmarking implementations. According to Ammons and Roenigk [18], distinct types of these implementations are the comparison of performance statistics, visioning initiatives and best practice benchmarking. Best practice benchmarking seems to be closest to the concept of benchlearning. It can occur informally, when public organizations adopt an approach similar to that of best performers in hopes of achieving improved results, or formally, when they follow a structured benchmarking approach to learn from others [18]. The first case highlights a behavioural benchlearning approach of mimetic learning from others, whereas the latter reflects a more cognitive benchlearning viewpoint. Furthermore, other researchers consider benchlearning from a social approach [19][20][21], where benchlearning is positioned as a process of learning with others that share mutual problems.
Although research on existing public sector applications of best practice benchmarking has started to shed light on the organizational learning mechanism of benchlearning [18], certain aspects still need further analysis. The added value of this research is to identify the factors of the benchlearning construct and address its multidimensionality, as well as to transcend the limits of overemphasizing the "marking" side of this methodology, which mainly aims to compare performance data and information rather than address knowledge issues. The idea that organizational learning takes place when undertaking best practice benchmarking in the public sector comes up against the lack of a conceptual model that could help public managers and researchers identify the factors that synthesize the organizational benchlearning construct and develop potential interventions. Moreover, there is a need to consider the multidimensionality of the benchlearning construct, since the organizational learning aspect is discussed in the literature in behavioral, cognitive and social terms [22]. Therefore, the purpose of this research is to develop and validate a conceptual model that respects the multidimensional nature of benchlearning.
The paper proceeds as follows. We first discuss the concepts of organizational learning and the learning organization, emphasizing the theory and practice of the process of learning. Then we describe the context through which public organizations in Greece get acquainted with benchlearning. We continue by approaching the benchlearning construct, highlighting its different dimensions and, with the help of a focus group, mapping its underlying relationships with learning and benchmarking. Then, the conceptual model is developed and assessed and, finally, we discuss the results and implications of the study.

The Learning Organization and Organizational Learning Approaches
The idea of the learning organization (LO) is relatively new, attracting the attention of practitioners primarily, with Garratt [23] being one of the earliest contributors. The notion, though, was practically established with the work of Senge [24] (p. 3), who defines it as "an organization where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning how to learn together". Many authors [25][26][27][28][29] evaluated the learning differences between various LOs, in an attempt to model a learning organization by assigning the characteristics (skills and abilities) that it should possess. Rebelo and Gomes [30] discuss the limited practical importance of such normative attempts, whereas Grieves [31] (p. 470) debates whether searching for a specific agent's attributes is of any worth, since different individuals share different interests. However, they all agree that labelling an organization a learning organization or not implies that it has or has not reached an ideal. Therefore, there is a paradigmatic shift in the literature from the content of the learning organization to the process of organizational learning that could lead organizations to that ideal.
Cyert and March [32] first claimed that an organization could learn independently of its members. Since then, considerable research has examined learning processes and capabilities in an organizational context [33][34][35][36]. The literature regards organizational learning and its outcome, organizational knowledge, as process and content, respectively, encompassing cognitive, behavioral, and social aspects [22,37,38]. These three perspectives explain the complexity of the learning process and content. The behavioral dimension emphasizes the importance of experiencing the consequences of a specific behavior in order to learn [39]. The cognitive perspective describes an entity's ability to articulate and codify knowledge, to "find and fix errors" by reflecting on its own "thinking, reasoning and memory" [40]. The social approach, on the other hand, enriches organizational learning by integrating both cognitive and behavioral aspects while interpreting the relativistic (non-individual) character of the learning process [41]. This means that, due to the interaction between organizations, there is a potential advancement to collective learning (sharing better practices). Hawkins [42] refers to the complexity of shifting the learning interest from individuals to the organization, and Hedberg [43] describes their synergistic nature by pointing out the fallacy of equating organizational learning with the cumulative learning of organizational members. Therefore, organizations seem to have different potentials for learning, which is variously considered a complex and dynamic concept, a simple part, or a core element of organizational processes.

The Common Assessment Framework, Self-Assessment, Organizational Learning and "Benchlearning"
The CAF aims to provide a "common language" for assessment and convergent improvements among European public administrations [44]. By undertaking a self-assessment exercise, an organization can acquire insights into its strengths and weaknesses and set the background for further improvements [45]. One way to proceed with organizational improvements after a self-assessment exercise might be through the process of learning. Balbastre and Luzón [46] studied the links between self-assessment and learning and underlined the positive effects of the former on the latter under specific conditions. Thus, self-assessment quality tools could facilitate organizational change and improvement through organizational learning.
The 2013 update of the CAF model [47] underlines the importance of organizational learning by regarding benchlearning as one of its core purposes. The CAF examines the organizational environment by focusing on nine criteria forming two broader categories: enablers and results. Its basic idea draws on the logic of means-ends relationships, achieving excellence (for people, citizens, society and performance) through a set of facilitators (leadership, people, processes, strategy and planning, partnerships and resources). Usually, an organization's employees undertake the assessment and get involved in the decision-making process of change through the development, implementation and review of action plans. In this case, employees are exposed to learning "how to improve through sharing knowledge, information and sometimes resources" [14].

Benchmarking, Organizational Learning and Benchlearning
The literature underlines a rational approach in benchmarking practice, which is used to control the activity [48,49], as well as a rather technical (engineering) perspective that overemphasizes measurement [50][51][52][53][54]. According to Karlöf et al. [17], the first step in the benchmarking process is a survey, followed by a comparison between organizations; organizations can then proceed to understand the reasons for performance gaps and develop improvements. This "understanding" step includes the "lessons learnt", which means that the knowledge acquired is distinct from simple data and information comparison. However, not all organizations undertaking benchmarking exercises intend to learn. Especially in the public sector, according to Bowerman et al. [55], benchmarking could be implemented defensively, in the sense of avoiding criticism rather than learning from or with others.
However, benchmarking and benchlearning can be linked to organizational learning activities [15] (p. 83). The benchmarking literature, especially that focusing on the public sector, provides examples of behavioral, cognitive and social aspects of organizational learning. Early publications embrace the reasons that private sector organizations had to "search for industry's best practices that lead to superior performance" [16] (p. 10) in order to imitate the successful behavior of others [56]. Benchmarking was therefore considered a process of identifying strengths, weaknesses and standards through the measurement and comparison of products, services and processes with best performers [16,57]. Under the public sector benchmarking umbrella, this was translated into an efficiency rigor mandating the use of metrics, yardsticks and league tables as a response to government requirements or simply for defensive reasons [55]. This approach could imply a behavioral learning thesis of best practice acknowledgment and adoption, where public organizations conform to isomorphic pressures rather than search for improvement and excellence.
This trend soon declined, since no organization is similar to another [58], and the results of benchmarking exercises were highly debatable, especially when the complexities of the public sector were considered [21,59,60]. The need to codify the information and knowledge of best performers, which actually means putting learning into a more structured framework and managing knowledge so as to adapt best practices to each context, therefore soon became evident. Publications started to shed more light on the cognitive aspects of learning that emerge during a benchmarking process. The American Productivity and Quality Center [61] regarded benchmarking as "the process of continuously comparing and measuring . . . to gain information that will help the organization take action to improve its performance" and Boxwell [62] (p. 17) as a goal-setting activity based on "objective, external standards and learning from others". In the public sector, researchers pointed out the identification, sharing and adaptation of "best practices" [38,55,63,64,65] among public organizations, as well as an approach that favors collaboration [65,66] and peer-to-peer learning [67]. Thus, the acknowledgment of social elements, such as dialogue, knowledge sharing, collaboration and interaction, is highlighted in benchmarking research.
Karlöf and Östblom [56] (p. 182) attempted to emphasize the learning and improvement aspects of the measurement tool of benchmarking and devised the term "benchlearning". Benchlearning is a more human-centric management approach that corresponds to complex economic conditions [17]. Such conditions, as well as great heterogeneity, characterize the public sector, which needs a solution to the difficulties of managing best practices. Benchlearning is a process of comparative evaluation as well as comparative learning, a tool "designed to develop the ability of companies and individuals to use stored knowledge . . ." as well as a process to reach the ideal of the learning organization [17] (p. 96), which is meanwhile an ideal for public services [60]. Therefore, benchlearning institutionalizes the theory and practice of the "learning" process through the development of competencies and skills [19] so as to accumulate the experiences of other organizations. This implies that benchlearning incorporates both the content of the ideal model (the best performer/benchlearner) and the process (benchlearning) towards achieving it.

The Capability of Organizations for Benchlearning
Like every organizational phenomenon, benchlearning needs some form of conceptualization. A traditional way in the literature to approach the benchlearning construct focuses on the process itself or on process outcomes, but without providing a validated framework of the facilitating factors for benchlearning, or else one that models an organization's propensity to benchlearn. For example, Johnstad and Berger [68] describe how a benchlearning project was organized, and Karlöf et al. [17] describe it as a combination of business development and organizational learning. Batlle-Montserrat et al. [69] report evidence that benchlearning facilitated smart cities in identifying good practices, learning from them and improving some of their e-services, and Torbjorn [70] reports that an international benchlearning program between two countries provided new perspectives on competencies and industrial development capabilities. Benchlearning, though, could be studied beyond its observable outcomes, without neglecting the capability of organizations to benchlearn [17]. By neglecting the facilitating factors, we limit our understanding of the impact that an organization's potential could have on the benchlearning process. Therefore, the study of the facilitating factors for benchlearning becomes an important aspect of research and could start by approaching the two major dimensions of this construct: organizational learning and benchmarking.
With reference to organizational learning, Garvin [25] challenges the idea of building a learning organization in the short term and proposes the development of an environment that facilitates learning. Moreover, DiBella et al. [71] stress the importance of preferences, attitudes and values in exploring the organizational capability to retrieve, transfer and use knowledge. The authors also emphasize the impact of an organization's potential on the learning process, and various researchers [25,[72][73][74][75] use the term organizational learning capability (OLC) as a concept that reflects all those organizational characteristics, namely culture, structure, processes and practices, that either foster or do not hinder learning. Therefore, OLC focuses on an organization's potential "abilities" to learn in order to become a learning organization.
Among the significant studies concerning the factors of OLC [38,[76][77][78], the work of Jerez-Gomez et al. [79] has gained wide acceptance. They critically reviewed the literature on organizational learning and developed a new model that communicates the cognitive, behavioral and social characteristics of organizational learning and comprises the following capabilities:

• The capability of management to foster a learning culture (managerial commitment-MC)-includes long-term organizational learning and managerial efforts to instill the importance of learning among its members as well as to discard perceptions that delay organizational success.
• The capability of the organization's members to influence each other under a "shared vision, mission and practices" (systems perspective-SP)-each member understands the collective goals and his/her position and acts accordingly in the organizational network of implicit and explicit relationships, enabling the transition from individual to collective learning.
• The capability of the organization to promote "generative learning" (openness and experimentation-OE)-addresses an interactive experience with the environment and encompasses acceptance of new values, perceptions, attitudes and practices (openness) as well as risk tolerance and creativity.
• The capability of the organization to create enabling processes and structures of "knowledge transfer and integration" (KTI)-this means that, by incorporating social elements (communication, dialogue and debate) and technological facilitators (information systems), the organizational members interact in order to develop "organizational memory" [36] and collective knowledge.
With respect to benchmarking, the CAF tool that is widely used in the European public sector is a framework that assesses important organizational aspects in order to diagnose the potential of an organization to compare its performance with others [47]. Moreover, the Benchmarking Capability Tool, developed by the Infrastructure and Projects Authority [80] in the UK, provides a useful way for organizations to identify and score their capability and maturity for benchmarking by assessing benchmarking strategy, people-culture-process, data and systems, and insights and analysis. Various researchers have underlined the importance of an organizational environment that facilitates benchmarking exercises [16,17,56,57], and the term benchmarking capability is implied in various studies [55,[81][82][83] showing the potential of an organization for benchmarking. Kurnia et al. [84] (p. 6) approach benchmarking capability in the context of sustainability and define it as "the ability of an organisation to compare the sustainability performance across various units (internal) and supply chain members (external)".
Following the literature review, the aspects of benchmarking capability that are of great importance when undertaking benchmarking activities are: (a) the capability of the organization's members to pursue the satisfaction of (internal and external) customers' needs and requirements [16,56,57], (b) the capability of the organization to foster continuous improvement [16,55,56,57,61,62,64,65], (c) the capability of the organization to promote internal best practice comparison [16,57] and (d) the capability of the organization to promote best practice comparison with the external environment [16,55,56,57,61,62,64,65]. Therefore, the capability of the organization for the benchmarking process can be defined as the capability to enable the evaluation and management of best practices in order to continuously improve its performance while satisfying customers'/citizens' needs.

Hypotheses Development
Drawing on the abovementioned analysis, the two factors identified for the benchlearning construct are organizational learning capability and benchmarking capability. Since there is no previous study, a focus group was used to relate the factors of the construct. The focus group comprised five (5) experts, who were selected because of their experience in applying the CAF and the benchlearning process. The experts were contacted by email and received the interview protocol and an informed consent form, which assured their confidentiality and anonymity. The focus group interview was conducted online on 10 December 2019, with the contribution of a facilitator and an observer from the research team. The focus group method was employed at this stage because, according to Acocella [85], it can provide a deeper understanding of a new topic, such as benchlearning capability. Moreover, it facilitated the understanding, interpretation, and analysis of members' perspectives more quickly, easily, and cost-effectively, as Morgan [86] and Wang and Wiesemes [87] suggest.
Accordingly, the research hypotheses developed are the following:

Methodology
This study seeks to measure benchlearning capability among public organizations in Greece that have undertaken a CAF exercise. CAF self-assessed organizations were chosen not out of convenience but because, in Greece, public organizations are familiar with benchlearning through the use of CAF. To address this purpose, a questionnaire was developed, employing the comparison and learning mechanisms that emerged from the literature review. Figure 1 below describes the research methods used and the analysis implemented for this study.

Developing the Instrument
Drawing on the literature review and the focus group findings, the benchlearning capability construct emerges from benchmarking and learning. Pre-eminent items of benchmarking propensity, according to the literature [16,17,56,57], are the following: (a) a customer satisfaction perspective in all parts of the organization, (b) a continuous improvement-led organizational culture, (c) inter-organizational best practice knowledge-sharing, and (d) continuous searching for new processes and practices outside the organization and learning from others. Moreover, organizational learning capability comprises the capabilities relevant to management commitment, systems perspective, openness and experimentation, and knowledge transfer and integration [79]. Like OLC, Benchlearning Capability (BLNC) describes organizational characteristics that foster benchlearning. The final "benchlearning capability" construct consists of 20 questions and assesses management commitment (questions 1-5), systems perspective (questions 6-8), openness and experimentation (questions 9-12), knowledge transfer and integration (questions 13-16) and benchmarking capability (questions 17-20). The questions concerning benchmarking capability are presented in Table 1.

The instrument was applied using a 5-point Likert scale ranging from 1 to 5, where 1 represents "strongly disagree" and 5 "strongly agree". Further pilot-testing of the instrument was conducted by interviewing 5 public sector employees to ensure the accuracy of the translation of the instrument into the Greek language, so that it would be comprehensible and adequately reflect the aspects measured. With reference to the profile of CAF users (first part of the questionnaire), which includes 10.81% Ministries, 9.455% Decentralized Administrations, 9.455% Regions, 47.30% Municipalities, 6.76% Independent Authorities and 16.22% Hospitals, we observed that: (a) 58.3% declared partial implementation of CAF, whereas 41.7% implemented CAF in all parts of the organization, (b) 70.8% used the simple marking method compared to 29.2% for the analytic one, (c) most of the respondents (67%) employ 10 to 50 individuals, (d) 71% of them applied CAF only once, (e) for most of them a 4-year period or more has passed since their last implementation, and (f) most of them have also applied the "management by objectives" improvement tool. Moreover, the roles of the respondents in the CAF implementation teams are as follows: 42% are simple team members, 29% are team facilitators, and 29% are team leaders.

Data Analysis
The first step is the examination of content and face validity. Every generated instrument must follow an appropriate development procedure and be representative of the concept in question in order to claim content validity. The criteria to evaluate the latter, though, are essentially intuitive and lack objective quantification [88,89]. In our case, the sufficiency of content validity is grounded on previous theoretical and empirical research [17,19,57,[90][91][92], as well as on discussions with experts in the focus group and on the pre-test procedure of the BLNC model.
The second step is the estimation of construct validity. For this reason, we used the "SmartPLS 3.2" software for the assessment of the measurement and structural models by means of partial least squares (PLS) structural equation modeling (SEM) [93]. PLS-SEM has gained increasing popularity over the years in the social sciences [94] and recently in OLC research [95,96]. We selected PLS-SEM because it has several advantages compared to traditional covariance-based SEM techniques. For example, there are no distributional assumptions of normality, and it can be used to analyze data from small samples. In addition, PLS-SEM incorporates both formative and reflective constructs and hierarchical component models (HCMs). HCMs give us a chance to reduce the number of relationships in the structural model, making the PLS path model more parsimonious and easier to grasp [94]. Drawing on Hair et al. [97], our conceptual model (path model) connects variables and constructs based on theory and logic. BLNC was operationalized as a "reflective-formative" higher-order component. The hierarchical component measurement model was created by using the "repeated indicators approach" combined with the "two-step approach" [94,98]. Specifically, BLNC consisted of MC, SP, OE, KTI, and BMKC. The reflective-formative HCM and the proposed model are depicted in Figure 2. For the reflective indicators, convergent validity and reliability were estimated with the use of AVE, Cronbach's α and composite reliability (CR) [94]. For discriminant validity, we used the "Fornell-Larcker" and the "Heterotrait-Monotrait ratio" (HTMT < 0.85 or 0.9) criteria. For the formative indicator (BLNC), we examined multicollinearity using the Variance Inflation Factor (VIF) [99] (VIF < 3.33 or 5.0).
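For readers unfamiliar with these reliability statistics, the computations behind Cronbach's α, CR and AVE can be sketched as follows. This is an illustrative sketch with synthetic numbers, not the study's data; the loadings and item scores below are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = loadings.sum()
    return s ** 2 / (s ** 2 + (1 - loadings ** 2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average Variance Extracted = mean of the squared standardized loadings."""
    return (loadings ** 2).mean()

# Hypothetical standardized loadings for one reflective construct
loadings = np.array([0.78, 0.81, 0.74, 0.86])
print(composite_reliability(loadings) > 0.7)  # CR threshold
print(ave(loadings) > 0.5)                    # AVE threshold

# Hypothetical Likert-scale responses (5 respondents x 3 items)
scores = np.array([[4, 4, 5], [3, 3, 4], [5, 4, 5], [2, 3, 3], [4, 5, 4]], float)
print(round(cronbach_alpha(scores), 2))  # → 0.86, above the 0.7 threshold
```

The thresholds in the comments (CR > 0.7, AVE > 0.5, α > 0.7) are the ones the paper applies in Table 2.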
Furthermore, we used the R-square of the dependent variable [100] and the Stone-Geisser Q-square test for predictive relevance [94] to assess the quality of the structural model. For the Stone-Geisser Q-square test, two separate analyses with omission distances of 7 and 25 were undertaken (blindfolding technique in SmartPLS) to test the stability of the findings (Q-squares > 0). In line with [101], we chose not to include the goodness-of-fit (GoF) as a criterion for PLS-SEM because it is believed to be unable to separate valid models from invalid ones, and it is not applicable to formative measurement models [94,102].
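The Q-square logic can be made concrete: blindfolding omits every d-th data point (d being the omission distance), predicts the omitted values from the model, and compares the prediction errors (SSE) to those of trivial mean replacement (SSO), giving Q² = 1 − SSE/SSO. A schematic sketch with hypothetical observed and model-predicted values, not the study's model:

```python
import numpy as np

def q_squared(observed, predicted):
    """Stone-Geisser Q^2 = 1 - SSE/SSO; Q^2 > 0 indicates predictive relevance."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    sse = ((observed - predicted) ** 2).sum()        # model prediction errors
    sso = ((observed - observed.mean()) ** 2).sum()  # mean-replacement errors
    return 1 - sse / sso

obs = np.array([3.0, 4.0, 2.0, 5.0, 4.0])   # hypothetical omitted data points
pred = np.array([3.1, 3.8, 2.2, 4.9, 4.1])  # hypothetical model predictions
print(q_squared(obs, pred) > 0)  # predictive relevance if True
```

A model that predicts no better than the mean yields Q² = 0, which is why the paper requires Q² > 0 at both omission distances.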
Finally, the bootstrapping procedure was applied (500 randomly drawn samples) for hypothesis testing.
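The idea behind bootstrapping a path coefficient can be sketched as follows: resample the cases with replacement, re-estimate the coefficient on each resample, and use the spread of the 500 estimates as a standard error for a t-test. This is a simplified single-predictor illustration with simulated construct scores (the variable names and effect size are hypothetical), not a re-implementation of SmartPLS.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated construct scores: exogenous x (e.g., an OLC-like factor)
# driving endogenous y (e.g., a BLNC-like factor); n matches the 163 respondents
n = 163
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.5, size=n)

def path_coefficient(x, y):
    """Standardized simple-regression coefficient (the correlation, for one predictor)."""
    return np.corrcoef(x, y)[0, 1]

# Bootstrapping: 500 resamples of the cases, as in the study's procedure
boot = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)
    boot.append(path_coefficient(x[idx], y[idx]))

estimate = path_coefficient(x, y)
se = np.std(boot, ddof=1)       # bootstrap standard error
t_value = estimate / se
print(abs(t_value) > 1.96)      # significant at the 5% level if True
```

In a full PLS-SEM run, the whole model is re-estimated on each resample, but the significance logic is the same.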

The Proposed Model with Loadings
Figure 2 below depicts the proposed model with the reflective and formative constructs and loadings. As observed, all loadings are above the threshold value (0.50).

Construct Validity and Reliability of the Measurement Model
Table 2 shows that all factor loadings, AVE and CR scores were at acceptable levels (factor loadings > 0.5, Cronbach's α > 0.7, AVE > 0.5, CR > 0.7). Moreover, discriminant validity was achieved, as the Fornell-Larcker (Table 3) and HTMT criteria (Table 4) were satisfied. Thus, we may conclude that construct validity was achieved.
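The Fornell-Larcker criterion amounts to a simple matrix check: the square root of each construct's AVE must exceed that construct's correlations with every other construct. A sketch with hypothetical AVEs and correlations (not the values in Table 3):

```python
import numpy as np

# Hypothetical AVEs and inter-construct correlations for three constructs
constructs = ["MC", "SP", "OE"]
ave_values = np.array([0.62, 0.58, 0.66])
corr = np.array([
    [1.00, 0.55, 0.48],
    [0.55, 1.00, 0.51],
    [0.48, 0.51, 1.00],
])

def fornell_larcker_ok(ave_values: np.ndarray, corr: np.ndarray) -> bool:
    """Each construct's sqrt(AVE) must exceed its correlations with all others."""
    root_ave = np.sqrt(ave_values)
    n = len(ave_values)
    return all(root_ave[i] > corr[i, j]
               for i in range(n) for j in range(n) if i != j)

print(fornell_larcker_ok(ave_values, corr))  # discriminant validity holds if True
```

The HTMT criterion used alongside it is checked the same way, except the comparison is of heterotrait-monotrait correlation ratios against a fixed cutoff (0.85 or 0.9).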

Structural Model Assessment
The result of the VIF estimation is acceptable, since the VIF values are below the threshold value of 5 (Table 5). In our case, the R² value for the BLNC construct was adequate. Regarding the separate analyses with omission distances of 7 and 25 (blindfolding technique in SmartPLS), the values were stable for both omission distances, and all Q² values were greater than zero (0.983 and 0.998, respectively). Thus, there is evidence that the model is stable and the predictive relevance requirement is satisfied.
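For completeness, the VIF underlying Table 5 is computed by regressing each predictor on all the others: VIF_j = 1 / (1 − R²_j), so a VIF near 1 means little collinearity and values above 5 (or the stricter 3.33) flag a problem. A self-contained sketch on simulated predictor scores (illustrative only):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance Inflation Factor per column: VIF_j = 1 / (1 - R^2_j),
    where R^2_j comes from regressing column j on the remaining columns."""
    n, k = X.shape
    vifs = np.empty(k)
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])      # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        vifs[j] = 1 / (1 - r2)
    return vifs

rng = np.random.default_rng(0)
X = rng.normal(size=(163, 5))          # five roughly independent predictors
print(all(v < 5 for v in vif(X)))      # below the common threshold of 5 if True
```

Strongly overlapping indicators would push the corresponding VIFs upward, which is exactly what the formative-indicator check guards against.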

Structural Model Analysis
For the structural model analysis, the bootstrapping procedure was applied (500 randomly drawn samples) (Table 6 and Figure 3). Table 6 shows that BMKC, KTI, MC, OE, and SP are positively and significantly related to BLNC.

Discussion and Conclusions
The study of benchmarking in the public sector has become an important field of research over the last few years, providing useful insights into how public organizations can assess their performance against the best performers in order to save money, time, and resources, and learn to make leapfrog, sustainable performance improvements while providing quality services. However, the link between organizational learning, benchmarking, and benchlearning comes up against a lack of research validating a conceptual framework of the facilitating factors for benchlearning, or one that models an organization's propensity to benchlearn. Benchlearning is a new area of research, and the benchlearning capability construct is not yet well defined. Prior to this work, research considered benchlearning factors separately, and little research has been developed on the topic. Therefore, this study represents an initial assessment and validation of the benchlearning capability model in public organizations.
We chose CAF self-assessed public organizations since, in Greece, they are the only group of organizations familiar with benchlearning. The CAF model provides a structured method for diagnosing strengths and weaknesses that becomes an input to benchlearning exercises. Benchlearning capability addresses the capability of organizations to make improvements that mirror organizational needs and fulfill their potential by uncovering the importance of learning and comparing. The benchlearning capability construct incorporates various approaches encompassing the cognitive, behavioral, and social characteristics of organizational learning, as well as the comparability and relativity elements of benchmarking. Therefore, in order to assess the benchlearning capability model, we used the construct of [79], which approaches OLC, and enriched it with benchmarking elements.
In order to validate the model, we examined content and face validity, drawing on previous theoretical and empirical research, the focus group findings, and the pre-testing results of the pilot questionnaire. Moreover, we studied construct validity using PLS-SEM, owing to its increased popularity and its advantages over other traditional SEM techniques. Additionally, discriminant validity was achieved, since the Fornell-Larcker and HTMT criteria were satisfied. As far as the quality of the structural model is concerned, we used the R-square and the Stone-Geisser test to evaluate it.
Regarding the research implications of this study, this research attempts to fill the gap in the literature by conceptualizing and validating a BLNC construct in the public sector context, which will be useful for further investigation. Since the process of conceptualization and validation is at the epicenter of theory building [103,104], this research contributes towards the theoretical advancement of organizational learning in general [103]. Additionally, the research results are useful for public sector executives, providing a BLNC tool that can: (i) diagnose the organization's strengths and weaknesses and (ii) facilitate the formulation of an appropriate strategy and action plans for sustainable performance improvement.
There are, though, certain limitations associated with the empirical research. First of all, the questionnaire captured employees' perceptions. This leads to results that portray employees' views rather than those of the organizations involved. We minimized this drawback by pre-testing the questionnaire, but personal perceptions still remain a problem for both reliability and validity analyses. Moreover, the items used for the scale development were adapted to our case, which means that we could not benefit from the validation made by Jerez-Gomez et al. [79]. Additionally, the study was undertaken in a specific context which, besides minimizing external influences, limits external validity. Expanding the borders of the study outside the Greek public domain would allow for more generalizable results and conclusions. In addition, a broader sample of participants could give a broader view of benchlearning capability levels in the Greek public sector. Therefore, further research could be done in other contexts and with larger sample sizes in order to establish the general validity and reliability of the benchlearning scale.

Figure 1. Research methods and analysis.
Since 2003, the Department of Quality and Efficiency of the Hellenic Ministry of Interior, Public Administration and Decentralization has institutionalized the voluntary implementation of the CAF model among public organizations. At the time of the research, the population of CAF users in the Greek public administration amounted to 74 public organizations (users) and 502 individuals (members of CAF implementation teams). A total of 163 respondents (2-3 individuals from each organization) replied between January 2021 and May 2021, giving a response rate of 32.47%.

Figure 3. The two-step approach model.


Table 2. Convergent validity and reliability.