Applied Sciences
  • Article
  • Open Access

21 August 2022

An Organizational and Governance Model to Support Mass Collaborative Learning Initiatives

School of Science and Technology and Center of Technology and Systems (CTS-Uninova), NOVA University of Lisbon, 2829-516 Monte de Caparica, Portugal
* Author to whom correspondence should be addressed.

Abstract

Mass collaboration can bring about major transformative changes in the way people work collectively. This emerging paradigm promises significant economic and social benefits and enhanced efficiency across a range of sectors, including learning and education. Accordingly, this article introduces, demonstrates in use, and evaluates an organizational and governance model designed to provide guidance and execution support for the implementation and operation of mass collaborative learning initiatives. The design science research process is adopted to guide the design and development of the proposed model. The model rests on three streams of work addressing key aspects and elements that support community learning: (i) identifying the positive and negative factors in existing and active examples of mass collaboration; (ii) adopting contributions of collaborative networks in terms of structural and behavioral aspects; and (iii) establishing adequate learning assessment indicators and metrics. The model is applied to a case study in which vocational education and training meet the needs of collaborative education–enterprise approaches. The validation of the model is first verified by the partners and stakeholders of a particular project in the area of education–enterprise relations to ensure that it is sufficiently appropriate for application in a digital platform developed by such projects. The first three steps of the proposed applicability evaluation (adequacy, feasibility, and effectiveness) are then performed. The positive results gained from model validation and its applicability evaluation in this project indicate not only that the model is fairly adequate, feasible, and effective for application in the developed digital platform but also that it has high potential for supporting and directing the creation, implementation, and operation of mass collaborative learning initiatives. 
Although the validation was carried out in the context of a single project, it was based on a large “focus group” of experts involved in this international initiative, in accordance with the design science research method. Thus, this article reflects a kind of applied research of a socio-technical nature, aiming to find guidelines and practical solutions for the specific issues, problems, and concerns of mass collaborative learning initiatives.

1. Introduction

Recent trends in the contexts of information and communication technologies (ICTs), collaborative networks (CNs), and community learning (CL) have paved the way for fostering a special form of networked community, known as the mass collaborative learning community, which has the potential to change the future of working life and education. This constructive movement has made it possible for many scattered but interested people to participate in mass collaborative initiatives and harness their potential joint power, aiming to deal with common/complex problems that cannot be solved individually. Furthermore, this evolving phenomenon opens new doors for the public to participate in vocational and informal learning practices outside the traditional education system to promote their knowledge, experiences, and competencies; thus, it is an effective method for lifelong learning. The growing number of mass collaborative projects is now reshaping the boundaries of social and collective actions, both locally and globally. The evolution of mass collaboration and its application to different domains, particularly education and learning, is enabling an unlimited number of learners to develop robust hubs of resources and competencies, assisting in the search for a wide variety of potential solutions for learning challenges [1,2].
Given the above and under the general umbrella of CNs [2], mass collaborative learning (MCL) occurs “when a large number of distributed and self-directed contributors share their partial knowledge, information, data, and experiences with each other (typically via ICT platforms) in order to learn something new. In this collective action, knowledge is jointly and continually created, shared, and developed which empowers participants with the capability to increase their chances of success” [1]. In MCL, many enthusiastic and autonomous learners with different minds and backgrounds come together and attempt to establish new methods and develop different scenarios for virtual collective learning that are not common in traditional methods of education. In other words, people of different nations, religions, ethnicities, and socioeconomic statuses, regardless of age and gender, voluntarily join a learning community in which they can actively contribute to lifelong learning at different levels [3]. In sum, the main specific features of MCL include the number of participants (mass), communication channels (digital tools), the process of interaction (collaboration), the sense of being a community member (spirit), materials for transaction (knowledge, information, and data), and common goals (learning).
Contrary to traditional learning systems, in which students typically gather under one roof at a specific time and place and the classroom management approach and dominant teaching style are teacher-driven, the MCL approach shifts towards online, non-hierarchical, user-driven, and innovative learning environments [1,4]. According to [1], the MCL community provides an opportunity for learners to align themselves around a shared vision, creating a sense of connectedness around a common goal. Indeed, a well-developed MCL community (which represents a dynamic and democratic system) can nurture a culture of knowledge creation and exchange [2]. Additionally, some prior studies [5,6] have provided evidence of several characteristics associated with any MCL community, which include but are not limited to the following:
  • It allows the interaction of broad groups of heterogeneous people (who might be dispersed through time and space) in diverse ways, and they can reap the power of collaboration.
  • It connects the systems, organizations, and people that are keen to work and learn across boundaries.
  • It encourages public engagement in casual study and research toward learning favorite subjects.
  • It enables learners to share knowledge and experiences and learn from each other, thereby promoting their capabilities in obtaining rapid yet significant progress.
  • It facilitates fast cycle learning toward impact at scale.
  • It increases global interactions with experts and peers.
  • It makes the process of learning relatively creative, cost-effective, and flexible.
  • It provides an easily accessible public digital data repository for all its members.
It should also be added that the effective use of ICT tools in MCL communities not only makes learning more interactive and easier but also enhances the modes of communication. Despite the tremendous progress, notable achievements, and positive results that MCL has obtained over the years, it still faces a set of challenges [1,3,6]:
  • There is insufficient evidence about the successful application of MCL in various fields.
  • The concept, organizational structure, and associated mechanism of MCL are still evolving.
  • The practice of mass collaboration in the learning community is not clearly formalized.
  • There are some ambiguities about the key strategies for stimulating people to join the community and keeping them motivated to contribute.
As a result, this promising and complementary approach to learning still appears as a brand-new way of approaching study that has not yet reached everyone around the world.
On top of that, the authors believe that neither have the main components, dimensions, and features of MCL been explicitly identified and explained, nor have they all been adequately assessed. Hence, attention in this study is given to dealing with these specific issues, considering that MCL is still evolving in the education context and that there are several different views and controversial assumptions about this subject. For instance, different researchers have different (but fairly close) understandings of the concept. Fundamental issues such as the principles, boundaries, formation, processes, and development of MCL communities are still vague. More specifically, it is extremely complicated for developers to apply MCL to different fields successfully and effectively. Certainly, every field or environment has its own demands and circumstances that need to be taken into account in advance [4]. All gathered evidence shows that this area of knowledge and research is still in its infancy but is growing gradually and surely. Therefore, it requires in-depth reviews and further research, inquiry, analysis, and contributions to provide better clarification.
Given this context, the motives for conducting this study are the following:
  • Filling part of the gap mentioned above;
  • Identifying the influential factors that support the creation, operation, and development of MCL initiatives;
  • Gaining some insights into the main features of an MCL community.
Therefore, the key research question that emerges is as follows:
What could be an effective way of supporting community learning through mass collaboration?
On that account, the main contribution of this study is to propose an organizational and governance model for MCL initiatives (OGM-MCL), relying on a design science research method, which serves to clarify how community learning can be supported by MC.
The proposed hypothesis to address this research question is as follows:
Community learning can be effectively supported through mass collaboration if three streams of work are appropriately rooted in the foundation of a community: these include (I) identifying the positive and negative factors in existing and emerging successful examples of MC; (II) adopting contributions from collaborative networks in terms of structural and behavioral models; and (III) establishing adequate learning and performance assessment indicators and metrics.
In short, this work introduces the OGM-MCL, designed to steer and support the process of implementing, operating, and developing MCL initiatives. This paper is neither a policy paper in the strict sense of “policy” nor does it intend to present an algorithm or software development. It is rather an applied research paper and a contribution of a socio-organizational-technical nature, including an organizational and governance model, whose main aim is to provide guidance to the developers of new MCL initiatives.
The remainder of this article is structured as follows: related works are briefly reviewed in Section 2; in Section 3, the research method used for this study is explained at first, which steers both the structure of the paper and the method of presenting the contributions. Then, the proposed OGM-MCL is presented in detail. In Section 4, the approaches for demonstrating and testing the applicability of the OGM-MCL on an EU project/case study are explained. In Section 5, the process, method, and instrument used for evaluating the (validity) of OGM-MCL are described. Finally, Section 6 provides some discussion about the findings of this study and briefly looks into possible future studies.

3. Proposed Model

This section presents our proposed model; the research method used for this work is explained first.
For the purpose of this work, the design science research process (DSRP) approach [59,60] is adopted. DSRP is a conceptual process aimed at (a) using a systematic process for conducting (design science) research studies, (b) making the presentation of the research study clearer and more understandable, (c) building the study upon prior related literature, and (d) providing a mental model for structuring the research outputs. The DSRP paradigm has its roots in engineering and is basically used for problem solving and artifact design. DSRP aims to develop organizational and individual capabilities through the creation of innovative artifacts and the generation of design knowledge. More specifically, DSRP helped the authors of this study to (a) incorporate the principles, practices, and procedures required to carry out the research, and (b) contribute to solving (theoretically and practically) real-world problems by generating, designing, and deploying prescriptive knowledge and novel solutions (the proposed model) for the identified problems. Consequently, the generated knowledge includes detailed information about the solutions and provides evidence showing how the solutions can be effectively used in practice to satisfy the needs of the people dealing with the problems.
DSRP involves six conceptual steps, as shown in Figure 1.
Figure 1. DSRP steps.
We used DSRP to help design, develop, and validate the proposed mass collaborative learning organizational and governance model (OGM-MCL). This process was performed by following the six above-mentioned steps that are briefly explained in the following:
  • Problem identification and motivation—as pointed out in the introduction, there is insufficient organized information about the structure, main components, dimensions, and features of MCL initiatives. Thus, more input is needed that provides a basis for supporting community learning through mass collaboration. The contributions from this step focus on (1) identifying the main features and specifications of the MCL community, (2) addressing the structural and behavioral aspects of the MCL community, and (3) finding potential indicators that help assess the learning and performance of individuals and community.
  • Definition of the objectives of the solution—this step defines the objectives of the research being conducted and indicates the intended solutions. The objectives of this study are defined and explained in Section 1 and Section 2. In line with this, an extensive literature review was conducted to understand the background of the area and to consolidate what is already known about the concerned issues. In performing this analysis, the principles and models of collaborative networks were used as an underlying framework. Such a review helped us gain a better view of the existing knowledge, latest developments, and current solutions for the identified problems. Consequently, a model is proposed as the main objective, which intends to provide helpful guidelines and directions for supporting and developing the MCL initiatives.
  • Design and development—in response to the identified problems (mentioned in step 1), we proposed OGM-MCL (presented in Section 3.1). OGM-MCL is the main contribution of this study, which is inspired by the contributions and solutions reported in the literature (and partially mentioned in Section 2) in combination with our background knowledge and experience. The OGM-MCL comprises (a) three streams of work corresponding to the organizational part and (b) three phases of evaluation, representing the governance part (see Figure 2).
  • Demonstration—this step demonstrates the efficacy of the OGM-MCL in solving the problems described in the Introduction. Generally, the demonstration can be fulfilled through, for example, experimentation, simulation, application to a case study, proof, an illustration, or a project. In this study, the demonstration was carried out on an EU project. This step is rendered in Section 4.
  • Evaluation—the evaluation step compares the objectives of the solution to the actual observed results from the use of the OGM-MCL in the demonstration. In this step (that is presented in Section 5), we observe and measure to what extent the OGM-MCL can achieve its goals in the used case project.
  • Communication—the inputs and outputs of this study (including the identified problems and their importance, the proposed model, the design and development processes, and the assessment of the OGM-MCL) are shared with others through publications.

3.1. The Proposed Organizational and Governance Model for Mass Collaborative Learning

The OGM-MCL proposed in this study is a general organizational and governance model that intends to provide practical guidelines for researchers, designers, and developers working in this realm. However, for the application of OGM-MCL to any specific MCL initiative (e.g., a farming and agricultural community, or an online consultation service), the model should be appropriately adapted according to the specific objectives, requirements, and circumstances of the initiative.
Taking into account the principles and guidelines of DSRP, we proceeded to the design and development of OGM-MCL (step 3). As observed in Figure 2, OGM-MCL encompasses two main parts:
  • Organizational part—which embraces three streams of characterization work:
    -
    (Stream 1) main features of the MCL community—in this stream, the eight main dimensions of collaboration, together with their related factors and features, are addressed. These dimensions are suggested considering the types and classification of the factors and features of the 15 MC case studies mentioned above.
    -
    (Stream 2) adapting structural and behavioral models—in this stream, the proposed structural and behavioral models along with their components are introduced. They can align and relate different parts of the MCL communities and bring about consistency.
    -
    (Stream 3) learning and performance assessment indicators—in this stream, some important indicators are selected that can be used for assessing the learning of members and the performance of the MCL community. Such assessment indicators can help determine whether or not the objectives of the community have been achieved.
  • Governance part (or evaluation process)—which contains three main evaluation steps, namely adequacy, feasibility, and effectiveness. Through these steps, the three streams of the organizational part are evaluated from these three points of view. This evaluation shows to what extent the elements addressed in each stream could be adapted and applied to a concrete case of MCL. More detailed information about the governance part is presented in Section 4.1.
The OGM-MCL is illustrated in Figure 2. It is a dynamic model by nature and its components and elements can be changed dynamically based on the objectives, requirements, and circumstances of the use case. OGM-MCL intends to represent some essential elements and features that should be considered in the creation, operation, and implementation of MCL initiatives and communities. Thus, in this direction, the OGM-MCL provides some guidelines and support for the decision-makers, designers, developers, and researchers.
Figure 2. Proposed OGM-MCL.
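For illustration only, the two-part structure of OGM-MCL can be sketched as a small data structure; the names below are hypothetical and simply mirror the three streams and three evaluation steps described in this section, not any artifact of the actual model:

```python
# A minimal structural sketch of OGM-MCL (hypothetical names): the
# organizational part holds three streams of characterization work, and
# the governance part applies three evaluation steps to each stream.
from dataclasses import dataclass


@dataclass
class OGMMCL:
    # Organizational part: three streams of characterization work
    streams: tuple = (
        "main features of the MCL community",
        "adapted structural and behavioral models",
        "learning and performance assessment indicators",
    )
    # Governance part: three evaluation steps applied to each stream
    evaluations: tuple = ("adequacy", "feasibility", "effectiveness")

    def evaluation_matrix(self):
        """Every (stream, evaluation) pair the governance part covers."""
        return [(s, e) for s in self.streams for e in self.evaluations]


model = OGMMCL()
print(len(model.evaluation_matrix()))  # 3 streams x 3 evaluations = 9 pairs
```

The cross-product in `evaluation_matrix` makes explicit that each stream is examined from all three points of view, as the governance part prescribes.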
The following section demonstrates the efficacy and application of the OGM-MCL to an EU project.

4. Demonstration

This section presents the fourth step of the DSRP method and demonstrates the application and validation of OGM-MCL in a practical manner.

4.1. Governance Process

The proposed governance process (shown as the “governance part” in Figure 2) reflects the interrelated factors, practices, relationships, and other influences on the community. On one side, the governance process helps orchestrate the process of OGM-MCL demonstration, evaluation, development, and validation. On the other side, it assists in the modification and development of the specified functions of the initiative toward operational optimization. The proposed governance process represents a set of phases and steps through which several tasks are performed to ensure the effective implementation of OGM-MCL in a concrete case of MCL. The governance process consists of three phases (specification, implementation, and exploration) and eight related steps, briefly explained in the following:
  • Specification phase—encompasses steps 1 to 6. It refers to the identification, selection, and documentation of the specific objectives, dimensions, requirements, and functions to be considered for a specific MCL initiative:
    -
    (Step 1)—Creating a list of the initiative’s objectives and outcomes required—these objectives indicate what the MCL initiative wants to achieve through applying the OGM-MCL. This task can be performed by the decision makers and developers in the initiative.
    -
    (Step 2)—Identifying and selecting the potential items (factors, features, and elements) that can be considered for the creation and development of the MCL initiative—the potential items can be identified by reviewing and analyzing the related literature, examples, cases, etc. The selected items should be then customized based on the specific objectives, requirements, and functions of the MCL initiative.
    -
    (Step 3)—Defining the main functions of the MCL initiative—the functions refer to the actions and transactions executed by the MCL initiative. Considering the objectives and requirements of the MCL initiative, different functions can be defined by decision makers and developers.
    -
    (Step 4)—Evaluating the “adequacy” of both the selected items and the defined functions—this step first evaluates whether or not the selected items can reasonably and adequately meet the objectives of the MCL initiative. Similarly, the adequacy of the defined functions can be collaboratively evaluated in relation to the objectives of the MCL initiative.
    -
    (Step 5)—Evaluating the “feasibility” of both the selected items and the defined functions—the first part of this step of evaluation tries to uncover the strengths and weaknesses of the selected items from the feasibility point of view. The feasibility of the items can be assessed by considering the technical capabilities of the MCL initiative and the budget available for implementing them. The feasibility of the functions is pertinent to adjusting the number of functions that could be possibly performed by the MCL initiative and developing their descriptions.
    -
    (Step 6)—Evaluating the “effectiveness” of both the selected items and the defined functions—this step first evaluates the effectiveness of the selected items, aiming to reduce the resources wasted in developing the OGM-MCL while reaching the desired results. The effectiveness of the defined functions can be assessed through, for example, round-group discussions or questionnaires by the decision makers and developers.
  • Implementation phase—embraces step 7. It focuses on designing, developing, and implementing the assets (the selected items and the defined functions):
    -
    (Step 7)—deals with (a) making the desired changes and justifications on selected items and then realizing and designing them, and (b) implementing (e.g., programming) the defined functions to make the functions/services available for users.
  • Exploration phase—includes the last step (8) of the governance/evaluation process. It takes care of the operation and function of the MCL initiative and the efficiency of its performance at different levels.
    -
    (Step 8)—oversees the operation of the MCL initiative and helps activate it as an operative service. Once the MCL initiative has worked for a certain period, its efficiency should be evaluated, for example, by following the same procedures used in previous steps. In this step, if any problem related to different parts of the MCL initiative is detected (indicating that the MCL initiative does not work as efficiently as expected), it should be re-evaluated collaboratively.
The proposed governance process is depicted in Figure 3.
Figure 3. Governance process to demonstrate the OGM-MCL.
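As a minimal illustration (with hypothetical names, not part of the project's tooling), the three phases and eight steps above can be modeled as an ordered checklist, which also makes the strictly sequential nature of the process explicit:

```python
# Sketch of the governance process as an ordered checklist:
# three phases, eight steps, executed in sequence.
GOVERNANCE_PROCESS = {
    "specification": [
        "1. List the initiative's objectives and required outcomes",
        "2. Identify and select potential items (factors, features, elements)",
        "3. Define the main functions of the MCL initiative",
        "4. Evaluate the adequacy of items and functions",
        "5. Evaluate the feasibility of items and functions",
        "6. Evaluate the effectiveness of items and functions",
    ],
    "implementation": [
        "7. Design, develop, and implement the selected items and functions",
    ],
    "exploration": [
        "8. Operate the initiative and re-evaluate its efficiency periodically",
    ],
}


def all_steps():
    """Flatten the phases into the ordered list of eight steps."""
    return [step for phase in GOVERNANCE_PROCESS.values() for step in phase]


def next_step(completed):
    """Return the first step not yet completed, or None when done."""
    for step in all_steps():
        if step not in completed:
            return step
    return None


print(len(all_steps()))            # 8 steps in total
print(next_step(all_steps()[:4]))  # after step 4, the feasibility evaluation is next
```

The `next_step` helper reflects that each step (including the re-evaluation loop in step 8) builds on the outcomes of the preceding ones.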
To demonstrate the success of OGM-MCL in solving the identified problems (mentioned above), we used the opportunity to instantiate the OGM-MCL in the ED-EN HUB project.
ED-EN HUB [61] is an Erasmus+ project co-financed by the European Union and developed by a consortium of eight institutions from five different European countries. The ED-EN HUB project aims at improving the quality of education (focusing on, but not limited to, vocational education and training) through the consolidation and systematization of education–enterprise relations (in which educational institutes and enterprises collaboratively work on knowledge and competence development). This international cooperation alliance also focuses on the development of tools and methodologies needed for the creation of synergies between educational institutions and enterprises (at local, national, and international levels).
ED-EN HUB intends to build up a collaboration platform (EDENCP) that provides a supported training and learning environment/hub. EDENCP enables an unlimited number of distributed trainers and learners from different backgrounds to come together and build a long-term collaboration. The participants in the hub attempt to adopt new methods and develop more scenarios for sharing their knowledge, experiences, and ideas, which can help promote their level of education, qualifications, and competencies.
Figure 4 provides a picture of the approach used in our study to demonstrate the applicability and efficacy of OGM-MCL for EDENCP development. On one side, this application helped us to (a) drive and support the process of implementation, operation, and management of EDENCP, and (b) modify and develop the specified EDENCP (system) functions. On the other side, it (c) improved and developed the OGM-MCL based on the feedback provided by project partners and the developers of EDENCP and (d) helped move toward partial validation of the OGM-MCL (model).
Figure 4. Demonstration of OGM-MCL in the EDENCP context.
The following subsections explain and clarify the approaches and instruments used for evaluating the applicability of OGM-MCL to the ED-EN HUB project through the proposed governance process.

4.1.1. Creating a List of the EDENCP’s Objectives (Step 1)

In the first step, the following five objectives are collaboratively created (by the project partners and stakeholders) to help set goals so that all EDENCP activities converge in a single direction:
  • (Objective 1)—determining skills requirements that address the main general and specific skills as well as transversal and transferable competencies that are applicable to both initial and continuing education (based on the principle of education industry cooperation);
  • (Objective 2)—co-designing, developing, and training—which provides guidelines and resources that can be used in training events for either people from the educational side who want to implement education–enterprise actions or for people from the business side who need to reinforce their links with the educational system;
  • (Objective 3)—detecting, assessing, and clarifying (policy recommendations)—which represents the synthesis of the experiences developed during the project in the five regions targeted by ED-EN HUB. The recommendations will be drafted by classifying and comparing the main policy objectives to which the creation of the joint structure is associated, the range of developed functions and concrete results achieved, difficulties in the start-up, funding and collaboration models, solutions found in consolidating collaboration, models of public-private collaboration, and governance developed;
  • (Objective 4)—creating career guidance that presents a complete and secure accompanied pathway, from a person’s first choice of orientation (at the age of 14, when compulsory education requires the pupil to choose) to professional reorientation, through guidance and training carried out in collaboration between the stakeholders in education/training and the company;
  • (Objective 5)—organizational benchmarking (benchmarking process description)—which clarifies what collaboration activities are expected to take place by means of the EDENCP.

4.1.2. Identifying and Selecting the Potential Factors, Features, and Elements (Step 2)

As mentioned in Section 2.1, in order to identify and select the potential factors, features, and elements (which could be implemented on EDENCP), the structures, models, and methods used in 15 cases of MC are reviewed, analyzed, and then summarized. Afterward, the selected items are accommodated in OGM-MCL and then customized for application to the EDENCP. Through the process of OGM-MCL customization for EDENCP, the partners and stakeholders attempt to use the proposed items or alter them (if needed) to suit the EDENCP’s preferences or requirements. For example, instead of supporting open access to the platform for all people, some access restrictions are considered for EDENCP.

4.1.3. Defining the Main Functions of EDENCP (Step 3)

The following eight functions are collaboratively defined for EDENCP by the project partners and stakeholders:
  • (Function 1)—developing an appropriate search engine;
  • (Function 2)—determining the aspects, components, and features of collaboration;
  • (Function 3)—managing training process;
  • (Function 4)—providing training execution support;
  • (Function 5)—designing curriculum;
  • (Function 6)—inserting new competence demands;
  • (Function 7)—providing suitable tools to evaluate the performances;
  • (Function 8)—providing a proper database/service that introduces and offers promising, validated, and trusted tools.
These functions refer to EDENCP functionality, ability, capabilities, and features that all together will provide the defined services in accordance with the specifications as set out in the project.

4.1.4. Evaluating the Adequacy of Selected Factors, Features, and Elements (Step 4)

To evaluate the “adequacy” of OGM-MCL for usage in EDENCP, a number of positive factors and specific features (mentioned in Section 2.1) that have potential applicability are picked out. To evaluate and benchmark the adequacy and importance of the nominated factors, features, and elements (items), they are addressed in 100 questions, forming the adequacy questionnaire (see Appendix A).
Each question in the questionnaire represents a potential item that might be used on the platform. The questions—based on the specifications and characteristics that they present—are classified under nine considered dimensions of collaboration, namely organizational, environmental, admission, behavioral, social, structural, functional, technological, and economical. This classification facilitates the presentation, analysis, and interpretation of the results of the evaluation.
The adequacy and importance of the selected items (to be considered for EDENCP) are assessed via a checklist in the questionnaire. There are six possible answers in the checklist for each question, namely strongly disagree (SDA), disagree (DA), agree (A), strongly agree (SA), I don’t know, and I’m not sure (now). The evaluators (partners, constituting a kind of “focus group” in terms of the design science method) can not only choose one of these possible answers but also insert comments and feedback (if needed) about the item addressed in each question. It is worth noting that this questionnaire provides a form of global evaluation of the considered dimensions and their respective items.
The questionnaire was sent to each partner of the ED-EN HUB consortium, and they were asked to respond to the questions collaboratively (with their internal involved members who are experienced in this field of study and work). Therefore, the questions in each questionnaire were answered via the collaboration and confluence of different minds rather than a single partner. This strategy not only helped reduce the number of questionnaires that were sent out, answered, and evaluated but also increased the accuracy and value of the given answers.
The main results of this step of evaluation (the average popularity of the adapted items for implementation on EDENCP), obtained from analyzing the five received questionnaires, are summarized in Table 1. To analyze the data obtained from the respondents and compute statistics on the given answers, a decision was made to give a weight to each answer in the checklist. The attributed weights are as follows: (SDA = 1), (DA = 2), (A = 3), (SA = 4), (I don’t know = 0), and (I’m not sure = 0). In the calculation, each answer (for a single question) was first multiplied by its attributed weight; the results were then summed and divided by the total number of respondents. The received responses were analyzed manually.
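The weighted-scoring scheme described above can be sketched as follows. This is a minimal illustration only: the abbreviations used as dictionary keys (IDK, INS) and the example answer counts are our own assumptions for demonstration, not the project’s actual survey data, and the real analysis was performed manually.

```python
# Hedged sketch of the weighted-scoring scheme described in the text; the
# answer counts below are illustrative examples, not real survey data.

WEIGHTS = {"SDA": 1, "DA": 2, "A": 3, "SA": 4, "IDK": 0, "INS": 0}

def question_score(counts):
    """Weighted sum of the answers divided by the total number of respondents."""
    total = sum(counts.values())
    weighted = sum(WEIGHTS[answer] * n for answer, n in counts.items())
    return weighted / total

# Example: answers of five respondents to one question
counts = {"SDA": 0, "DA": 1, "A": 2, "SA": 2, "IDK": 0, "INS": 0}
score = question_score(counts)                    # (2 + 6 + 8) / 5 = 3.2
popularity = 100 * score / max(WEIGHTS.values())  # 80.0 (as a percentage)
```

Normalizing by the maximum weight (4, for “strongly agree”) expresses the score as a popularity percentage comparable to the dimension averages reported in Table 1.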
Table 1. Average of popularity given to the dimensions in adequacy evaluation.
To obtain an improved perspective of the results of this step of evaluation, they are also displayed as a radar chart in Figure 5.
Figure 5. Average popularity of 9 dimensions obtained from adequacy evaluation.
Taking into account the given responses to the questions of the adequacy questionnaire (illustrated in Table 1 and Figure 5) and from the performed analysis, the following can be concluded:
  • The nine considered dimensions are generally accepted by all evaluators (partners) because the average popularity given to all dimensions is above 50% (an indicator of acceptance).
  • Among the considered dimensions, the organizational dimension and its respective items received the highest average of popularity (83.57%), whereas the economical dimension received the lowest average of popularity (58.50%) from the respondents’ point of view.
  • Analyzing all responses given to every single question (in all dimensions) shows that some of the selected and adapted items (those whose average popularity is lower than 50%) need to be revised, improved, changed, or, in some cases, omitted before moving to the next phase of evaluations. In this direction, the feedback provided by the partners offered very good ideas about what other important points need to be addressed in further developments.

4.1.5. Evaluating the Adequacy of Functions (Step 4)

After evaluating the adequacy of the selected items, the defined functions for the EDENCP were also evaluated (from the adequacy point of view) during some plenary meetings of the partners and stakeholders. This evaluation focuses on judging whether or not the functions can adequately meet the objectives of EDENCP. At this step, the evaluation of the adequacy of the functions is performed at the conceptual level.
The results of evaluating the adequacy of defined functions are shown in Table 2. Considering the available information and focusing on theoretical and conceptual evaluation, the functions that show one or some signs and indications of adequacy for meeting one or some objectives of EDENCP are marked with (X) in Table 2. This table addresses the eight defined functions and the five considered objectives for EDENCP. This step of evaluation also provides a view of the potential interactions among the EDENCP’s functions and objectives.
Table 2. The results of evaluating the adequacy of EDENCP functions.
The results of function evaluation give direction to function creation, development, and implementation. These results are used as a base for function feasibility evaluations.

4.1.6. Evaluating the Feasibility of Selected Factors, Features, and Elements (Step 5)

In this step of evaluation, the technical team of the project (from NOVA University Lisbon) assessed the “feasibility” of the selected factors, features, and elements in supporting the EDENCP with the least amount of wasted time, money, and effort. In this manner, they tried to provide a fact-based understanding of the current level of the OGM-MCL’s feasibility and maturity. The gained insight was enriched by employing both knowledge-based questions and application-based techniques. In this process, not only was the technical feasibility of OGM-MCL evaluated based on the available budget and technical capabilities, but the technical risks of OGM-MCL application were also identified, quantified, and reported. In this step, evidence-based evaluations and data-driven decisions were also helpful strategies that were taken into consideration.
Afterward, the technical team proceeded to further assessment of the selected factors, features, and elements according to the EDENCP functional requirements. Hence, a second questionnaire was collaboratively developed to collect the partners’ opinions about the judgment of the technical team. In this questionnaire, nine related questions were designed for each function, addressing the most important technical aspects (factors, features, and elements) that should be evaluated for feasibility assurance. The formulated questions are polar (yes/no) questions, and they are presented in Appendix B.
The results gained from the primary round of questionnaires (used for evaluating the adequacy of the developed and adapted factors, features, and elements) were used in the project as a base for the “feasibility” evaluation. That is, the factors, features, and elements (addressed in the questions) that received high popularity, at a threshold of ≥80%, were selected for consideration, further evaluation, and probable implementation on EDENCP.
Table 3 summarizes the results of the adequacy evaluation (shown in Table 2), plus the number of questions (used in the questionnaire) per dimension and the number of questions per dimension whose popularity is ≥80%. A threshold of 80% was specifically suggested for the selection of potential items, as it creates a balance between the number of factors, features, and elements addressed in the questionnaire and the number of items needed for feasibility evaluation in the next stage. In other words, in the adequacy evaluation (step 4), 100 factors, features, and elements were addressed in the questionnaire (one item per question). The number of addressed items for application to the EDENCP is relatively high, aiming to provide a reasonable number of potential items for selection and also to give the evaluators a chance to select the items from the list that are best from their perspective. This is because, from a feasibility point of view, it is not cost-effective to implement all those items in the EDENCP. It is worth noting that the considered threshold (≥80%, which is relatively high) caused a significant reduction in the number of factors, features, and elements carried forward to the next steps of evaluation. This level of threshold and the high percentages of popularity, however, provide a certain degree of assurance for feasibility consideration.
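The threshold-based selection described above amounts to a simple filter over the per-item popularity scores. The sketch below illustrates the rule under stated assumptions: the item names and popularity values are invented for demonstration and do not come from the project’s data.

```python
# Minimal illustration of the >=80% selection rule; the item names and
# popularity values below are invented for demonstration purposes.

def select_items(popularity, threshold=80.0):
    """Keep only the items whose popularity meets or exceeds the threshold."""
    return {item: p for item, p in popularity.items() if p >= threshold}

popularity = {
    "clear admission rules": 92.5,
    "shared value system": 85.0,
    "monetary incentives": 55.0,   # below the threshold, so dropped
}
shortlist = select_items(popularity)  # keeps only the first two items
```

Lowering the `threshold` argument (as done with 60% in the next stage of feasibility evaluation) widens the shortlist, which is how the consortium tuned the number of items carried forward.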
Table 3. Results of adequacy evaluation and considered threshold that were used for feasibility evaluation.
The graph illustrated in Figure 6 provides an improved perspective of the results presented in Table 3.
Figure 6. Results of adequacy evaluation and considered threshold that were used for feasibility evaluation.
Taking into account the considered threshold and the gained results, the following should be highlighted:
  • The dimension of admission has the highest percentage of popularity (75%), followed by the organizational dimension (71.42%). The lowest percentage of popularity (given by the evaluators) belongs to the economical dimension (20%).
  • The low percentage of the popularity of some dimensions (in both situations, before and after considering the threshold), namely, economical, functional, and behavioral, shows that these dimensions did not receive high attention (in comparison with other considered dimensions) from the evaluators’ point of view.
  • Those dimensions that gained higher percentages of popularity, namely, admission, organizational, and technological, indicate that they have a high potential of feasibility (from the evaluators’ perspective) to be implemented in EDENCP. Thus, the focus of attention should be given to these dimensions.
In the second stage of feasibility evaluation (for the items), the questionnaire (presented in Appendix B) was sent to the same group of partners (who participated in the adequacy evaluation), but this time 18 evaluators responded to the questions individually, because the partners and stakeholders decided to use a mix of group and individual evaluations. The results achieved from the feasibility questionnaire are presented in Table 4. The factors, features, and elements highlighted in gray received popularity of over 60%; they were selected as potential items for the next step of evaluation (effectiveness). This time, a threshold of 60% was suggested by the partners and stakeholders, as it is believed to reasonably adjust the number of items considered for the effectiveness evaluation. This means that we attempted to narrow down the results over the different steps of evaluation and minimize the number of considered items to a logical number of possible and feasible items that can be integrated into EDENCP.
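Since the feasibility questions are polar (yes/no), the popularity of each item reduces to the share of “yes” answers among the 18 individual evaluators. The sketch below illustrates this tally under stated assumptions; the response list is fabricated for demonstration.

```python
# Hedged sketch of tallying the polar (yes/no) feasibility answers of the
# 18 individual evaluators; the responses below are fabricated examples.

def yes_percentage(answers):
    """Share of 'yes' answers among all received answers, as a percentage."""
    yes = sum(1 for a in answers if a.lower() == "yes")
    return 100.0 * yes / len(answers)

answers = ["yes"] * 12 + ["no"] * 6   # 18 evaluators in total
p = yes_percentage(answers)           # 12/18, i.e., about 66.7%
feasible = p >= 60.0                  # passes the 60% threshold
```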
Table 4. Results of the second step of feasibility evaluation.
Taking into account the results shown in Table 4, the following should be underlined:
  • The low average popularity given to the economical dimension (16.66%) and the technological dimension (41.26%) shows that the majority of the factors, features, and elements addressed in these two dimensions have little chance of being implemented in the EDENCP from the feasibility point of view.
  • The selected factors, features, and elements should be prioritized in the process of implementation on EDENCP. That is, those that received a higher percentage of popularity (e.g., factors, features, and elements in the social dimension used for function 5, with 100%) need to be given more attention and emphasis.
  • The dimensions should also be prioritized in the process of implementation in EDENCP. For example, among the addressed dimensions, the environmental dimension received the highest percentage of popularity (68.25%). In this manner, developers and decision makers can manage the resources for implementation according to the dimensions’ feasibility and popularity.

4.1.7. Evaluating the Feasibility of Functions (Step 5)

According to the decision of the partners and stakeholders, the evaluation of the “feasibility” of the defined functions led to collaboratively elaborating their descriptions. In addition, further function evaluation resulted in merging the first function (search engine) with the last function (database/service offering different tools). The first function was eventually changed to “search engine for finding information and tools”. Therefore, the number of functions was reduced to seven, as addressed in Appendix B.
Following the collective evaluation of the defined functions, their definitions were elaborated as presented below:
  • (Function 1)—search engine for finding available information and tools on EDENCP: provides a software system that is designed to carry out specific searches related to particular competencies (of members), courses, activities, and supporting tools.
  • (Function 2)—collaboration: deals with collaborative practices whereby some members work together to complete a task, solve a problem, and/or achieve a shared goal.
  • (Function 3)—managing training: focuses on flexible strategies that can properly manage different aspects of training from program creation to evaluation and prioritizing learning needs.
  • (Function 4)—training execution support: provides the needed support for (a) training execution, (b) learning engagement strategies, and (c) implementation of performance-based assessment for training proficiency.
  • (Function 5)—designing curricula: a cyclical and analytical process that helps create a training framework able to facilitate and guide the creation of training programs in a particular domain. It integrates different training elements such as learning strategies, processes, materials, and experiences that may help design and develop such training program instructions.
  • (Function 6)—the insertion of new competency demands: the process of identifying and adding the key competencies and basic skills (e.g., cognitive skills of critical thinking, problem-solving, and interpersonal skills) required to perform teaching and training successfully. This function supports the identification of competencies that are in high demand by companies. It is designed for three main situations: (a) when an employer recognizes that the company needs new workers, but the new workers have only just arrived from the university and do not have the specifically needed competencies; (b) when the employer recognizes that the existing workers need to improve their competencies or gain new ones for specific tasks; or (c) when workers recognize (by themselves) that they need to gain some particular competencies. The main outcome of this function is to improve the existing curricula or add new ones based on the demands of the companies.
  • (Function 7)—tools to evaluate performance (benchmarking): this function evaluates the performance of (a) EDENCP in relation to its functions and (b) the workers of the companies against the transversal competencies that they have already gained. To do so, it first requires the definition of some specific key performance indicators (KPIs) related to each performance.

4.1.8. Evaluating the Effectiveness of Selected Factors, Features, and Elements (Step 6)

In this step of evaluation, the technical partners of the project proceeded to assess the “effectiveness” of the selected factors, features, and elements, aiming to judge the degree of their success in achieving the objectives of EDENCP (mentioned in Section 4.1.1). This task is concerned with comparing (at this stage, theoretically) the inputs of OGM-MCL with the desired outputs that it can deliver. Therefore, to find out whether or not the items selected in the feasibility evaluation are effective in obtaining the expected results, a third questionnaire was created. The evaluators/partners were asked to provide a rating for each question, showing how effective the considered items are (from their perspective) in reaching the goals of OGM-MCL. As shown in Appendix C, the considered rates include not at all effective (rate 0), slightly effective (rate 1), moderately effective (rate 2), very effective (rate 3), and extremely effective (rate 4).
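The 0–4 effectiveness rates can be aggregated into the popularity percentages reported later by averaging the ratings and normalizing by the maximum rate. The sketch below is an illustration under stated assumptions: the ratings are invented, not actual evaluator data.

```python
# Illustrative aggregation of the 0-4 effectiveness ratings; the ratings
# below are invented for demonstration, not actual evaluator data.

RATE_MAX = 4  # "extremely effective"

def effectiveness_percentage(ratings):
    """Average rating expressed as a percentage of the maximum rate."""
    return 100.0 * sum(ratings) / (len(ratings) * RATE_MAX)

ratings = [3, 4, 3, 4, 4]                # five evaluators rating one item
pct = effectiveness_percentage(ratings)  # 18/20, i.e., 90.0
keep = pct >= 80.0                       # meets the 80% effectiveness threshold
```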
Considering the results shown in Table 4, the items with a popularity of over 60% were selected for the effectiveness evaluation, meaning that these items were evaluated this time from an effectiveness perspective. In this step, the evaluators (partners and stakeholders), by rating each item, attempted to determine how effective the addressed items are in the implementation and utilization of EDENCP. Table 5 presents the results of this step of evaluation from the five received questionnaires.
Table 5. Results of effectiveness evaluation.
As shown in Table 5, except for the economical dimension (which reflected very low average popularity in the previous step of evaluation, so it was not considered in this step), the other dimensions have at least one item for effectiveness evaluation. From the results of this evaluation, the following can be said:
  • Generally, the results of this step of the evaluation show how effective the dimensions are from the evaluators’ perspective. Among the addressed dimensions, the organizational dimension obtained the highest average popularity (90%), whereas the technological dimension received the lowest average popularity (75%).
  • Those dimensions and their related items with a percentage of popularity (gained from this evaluation) lower than 80% (the considered threshold for this step) are not regarded as sufficiently effective to be implemented on the EDENCP. This means that the items with a popularity of 75% (highlighted in gray in Table 5) were all removed from the list of considerations.
  • In function implementation (step 7), the dimensions that have a higher percentage of popularity should be prioritized. For example, in the implementation of function 1, priority (based on available resources and capabilities) should be given to the organizational (95%), structural (95%), admission (90%), and social (80%) dimensions.

4.1.9. Evaluating the Effectiveness of Functions (Step 6)

In some online and face-to-face plenary meetings, the focus group held several discussions and debates about the effectiveness of EDENCP’s functions. In these meetings, they shared their views and opinions on the following related questions:
  • Can the functions achieve the desired/targeted goals?
  • Can the functions gain a certain degree of success?
  • Can the functions produce the desired effect?
  • Can the functions be operated according to the project plan?
By critically assessing different aspects of the functions’ effectiveness, the partners finally came to the conclusion that the existing (theoretical) evidence gives a convincing indication of the effectiveness of the functions. That is, they decided to keep the functions as they are until the results of the efficiency evaluation (step 9) are available. Afterward, they can make the needed decisions and take corrective actions if necessary.
It should be added that in the specification phase (steps 1–6), some strategies were taken into consideration, namely, engaging the partners at different stages of evaluation, clarifying the evaluation process, collaboratively designing the evaluation process and developing the questionnaires, gathering credible evidence, justifying the conclusions, and using and sharing lessons learned. The next step presents the implementation phase.

4.1.10. Adjusting the Selected Factors, Features, and Elements (Step 7)

In general, an adjustment makes the required changes to item values that can no longer be directly changed after the items are selected. Here, adjustment refers to a process that makes the small necessary changes to the selected factors, features, and elements, especially to make them more correct, effective, and/or suitable for the implementation step.
The selected items were then used in designing, building, and developing the EDENCP. The EDENCP has some tailored and customized features, including but not limited to optimization for search and the social web, easy mobile navigation, unlimited file storage, and promotion opportunities. The selected items and the developed functions together shape the EDENCP services for the end-users.

4.1.11. Implementing the Functions (Step 7)

The functions were implemented in EDENCP by translating their definitions into a more machine-friendly form in a programming language, which can either be directly executed by the machine/system or interpreted by another program that runs on it. The seven defined functions provide different services for different users at three main levels, as listed below:
  • Individual/student level—provides services such as job seeking, internal and external mobility, certification, guidance, reconnection to the market, self-assessment, and social recognition.
  • Company/enterprise level—delivers services such as job and skill planning, internal mobility, recruitment, planning internships, sharing the vision with training and education actors, co-training, and developing partnerships.
  • Educational institutes level—provides services such as planning internships, finding required skills, meeting societal and employment actual and future needs, sharing the vision with companies, co-training, and developing partnerships.
Figure 7 provides a snapshot of the EDENCP environment. It demonstrates that the EDENCP is designed to deliver various services to different types of users.
Figure 7. EDENCP environment is customized for different types of users and services.
It should be highlighted that the exploration phase was still in progress in the ED-EN HUB project at the time of concluding this study. However, the results of its evaluation will be reported in future publications.

5. Evaluation

This section presents the fifth step of the DSRP method. Following the demonstration of OGM-MCL, the results of evaluating its validation to be used for EDENCP are presented in the following subsection.

Validation of Organizational and Governance Model for Using in EDENCP

In this study, the OGM-MCL is evaluated by considering and adopting the Technology Acceptance Model (TAM) methodology [62]. TAM is one of the most frequently employed models for research into the acceptance of new information technology, focusing on the intention to use a new technology or innovation. It is specifically designed to measure, explain, and predict the acceptance and adoption of information and communication technologies based on customer/user attitudes. A number of factors influence the decision of customers/users, such as perceived usefulness and perceived ease of use.
Furthermore, the validation and appropriateness of the OGM-MCL to be used in the ED-EN HUB project were initially evaluated by taking the following steps:
  • Determining the objectives of OGM-MCL validation;
  • Determining the needed tools for evaluation (questionnaires and interviews);
  • Determining the suitable criteria and parameters for evaluation of the OGM-MCL appropriateness (completeness, purposefulness, perceived usefulness, perceived ease of use, cost-effectiveness, and reasonability);
  • Developing related survey questionnaires;
  • Identifying the potential evaluators (experts, partners, and stakeholders in the projects);
  • Preparing and conducting required interviews with evaluators;
  • Performing validation tests;
  • Analyzing the collected feedback;
  • Reporting the results of the validation.
The developed questionnaire for this purpose, which is presented in Table 6, contains 11 questions, addressing the considered validation criteria and parameters mentioned above.
Table 6. Questionnaire used for evaluating the validation and appropriateness of OGM-MCL.
It is worth noting that, in line with the core constructs used for the TAM, the validation criteria and parameters were set by the partners and stakeholders of the project with respect to the criteria and parameters proposed in the related literature, the strategic objectives of the project, and their expectations for EDENCP. The questionnaire contains six criteria and parameters and eleven questions. Each question should be rated on a 6-point Likert scale (Likert, 1932) including strongly disagree (SDA), disagree (DA), agree (A), strongly agree (SA), I don’t know (IDK), and I am not sure (IANS). The Likert-scale questions are formulated to understand the level of agreement of the respondents (project partners and stakeholders, i.e., a “focus group”) with the appropriateness of OGM-MCL. The questionnaire was sent to eight groups of focused partners/experts, and they were asked to respond to the questions with the collaboration of their internal team members who contribute to the project. The results of analyzing their answers/opinions are presented in Table 7. It should be added that this questionnaire was created and analyzed with SurveyMonkey.
Table 7. Results of evaluating the validation and appropriateness of OGM-MCL for EDENCP.
Taking Table 7 into account, the following can be stated:
  • All considered criteria and parameters for evaluating the validation and appropriateness of OGM-MCL obtained a percentage over 50% (on average), which is a reasonable indicator of general acceptance;
  • Among the eleven questions addressed in this survey questionnaire, only three questions, related to the criteria “perceived ease of use” and “cost-effectiveness”, had a percentage between 56% and 63%. The other eight questions had a percentage ≥68.75%. The average percentage given to all criteria and parameters is 72%. This is a convincing indicator of model compliance.
  • The given answers/feedback show that there is no “strong disagreement” with the addressed points (criteria and parameters). In fact, there are only seven “disagreements” in total, which is not high.
  • Taken together, there were 39 “agreements” and 28 “strong agreements”, indicating a considerably positive attitude towards the addressed criteria and parameters and also towards the validation and appropriateness of the OGM-MCL.
  • Taken together, there are only 10 “I am not sure” answers and 4 “I don’t know” answers. Indeed, this rate is not high at all.
Due to the fact that the OGM-MCL is evaluated theoretically and conceptually at this stage in this project, a percentage of disagreement, ambiguity, and uncertainty is understandable.
It should be noted that the OGM-MCL can be applied to a specific MCL case either fully or partially. This decision should be made by the decision makers in the target case. In the ED-EN HUB project, all parts addressed in the model are potentially considered to be applied to the EDENCP.

6. Discussion and Conclusions

This section first provides a brief comparison of the proposed approach with previous studies; then, it provides an interpretation of the main findings and contributions of this work. Afterward, it summarizes the hypothesis and purpose of the study and, at the end, draws some conclusions by addressing the main limitations of the study as well as pointing out directions for future studies.
Collaborative learning initiatives are evolving and growing faster than ever. MCL, for example, has opened immense opportunities for ordinary people in societies to engage locally or globally in vocational and informal learning practices. This approach to collective learning, which is adopted by people from different areas, is viewed and presented from different perspectives and for different purposes by the researchers of each discipline. Even so, this specific complementary approach to learning has not yet been presented from an integrated perspective to all people worldwide [3]. Nevertheless, in the last decade, numerous studies have been published on MCL environments, and a number of these studies focus mostly on specific models (e.g., network models), techniques (e.g., network analysis techniques), and approaches (e.g., network-based approaches) to support the supervision of online learning communities (e.g., MCL) [63]. Furthermore, to cope with the current information explosion and information overload (a common and challenging issue in the MCL environment), the authors of [12] took advantage of natural language processing (NLP). Another study [64] proposed first-order probabilistic reasoning techniques (that benefit from machine-learning techniques) to estimate the quality of knowledge (e.g., consistency, relevance, and scalability) created and shared by mass collaborative efforts.
Despite the advances made in recent years in this emerging and promising method of learning, there are still some ambiguities and uncertainties in the way MCL initiatives are created, organized, developed, and governed. This study is in line with the previous studies (addressed above to a limited extent), since it proposes the OGM-MCL framework and attempts to address a number of important organizational and governance elements that can be considered by decision makers, designers, and developers involved in the creation, operation, and development of MCL initiatives.
In our previous study [65], the different parts and components of OGM-MCL were introduced and elaborated on in detail. In the current study, aside from the representation of the proposed model and applied methodology, we report the results gained from the practical evaluation, validation, and application of the OGM-MCL. To demonstrate and evaluate the validity and applicability of OGM-MCL (taking into account the DSRP), the model has been used in the implementation of the EDENCP environment in the ED-EN HUB project. The contribution of OGM-MCL to this European research project provided a set of reliable assessments, reasonable validations, and successful applications. The ED-EN HUB project has applied OGM-MCL to support the creation, operation, management, and implementation of its digital collaboration platform [65], although OGM-MCL was first customized collaboratively by the partners and stakeholders based on the objectives, requirements, and conditions of the project.
By employing the governance process, different aspects of OGM-MCL have been assessed through (a) several phases and steps of evaluation, (b) a set of developed questionnaires, (c) continuous interaction and discussion with the various partners and stakeholders of the project (“focus group”), and (d) collaboration with several expert evaluators. The OGM-MCL was globally accepted by the partners and stakeholders of the project, supporting its validation and suggesting its potential applicability to other related projects and cases. Additionally, close interaction with the experts, researchers, and stakeholders in the project was a very valuable opportunity for the authors, as their useful feedback and constructive suggestions provided directions for improving different parts of the OGM-MCL.
Through the validation and application of OGM-MCL and its related solutions in the project, the hypothesis is validated to a reasonable extent. Thus, it can be concluded that community learning could be effectively supported through MC when three work streams are properly rooted in the foundation of a community: namely, by (I) identifying the positive and negative factors in existing and emerging successful examples of MC; (II) adopting contributions from collaborative networks in terms of structural and behavioral models; and (III) establishing adequate learning assessment indicators and metrics.
Despite the positive and promising features of MCL and the opportunities it can provide for societies, communities, and learners, MCL (as a type of social network) also faces numerous challenges and limitations. For example, in general, MCL must deal with the challenge of determining and controlling the quality and reliability of shared content (e.g., knowledge, information) within the community. In fact, mass collaboration is regarded as a double-edged sword when it comes to learning. On one hand, it is relatively low-cost and accessible, and it can potentially facilitate knowledge sharing and increase public awareness. On the other hand, the large size and online environment of the community can potentially place it at risk of encountering, and being damaged by, unreliable content. In addition, the anonymity of community participants can intensify the problem. Since MCL is typically supported by a public platform, any participant can post any content with various degrees of veracity. Spreading unhealthy content throughout the community can negatively influence its members (e.g., by misinforming or misleading them). Thus, it is left to community members to recognize whether the content is true or not. Unfortunately, this is a dark side of MCL. In particular, the ED-EN HUB project faced some limitations in the evaluation of OGM-MCL:
  • Some concepts (e.g., MC, MCL, OGM-MCL, and the governance process) proved vague and confusing for the partners and stakeholders of the project.
  • Evaluating the different aspects of OGM-MCL theoretically and conceptually is an arduous and daunting task.
Considering the pioneering nature of this research work, many doors remain open for future research. MCL, as an interdisciplinary approach, is a promising and demanding subject with multiple potential areas of application. Although several aspects of the MCL context (e.g., organizational and governance elements) have already been identified, they need to be improved, applied to different contexts, and consolidated through further validation. Furthermore, in the process of evaluating, validating, and applying OGM-MCL in other case studies, using different approaches, techniques, and tools might help address its weaknesses. In future work, the remaining phase and step of the governance process will be refined. In the project, we will also proceed with deeper data analysis by inspecting and evaluating the implemented platform using feedback data from users; for example, the search engine function may collect feedback from users on whether the returned results were effectively what they were looking for. The results of this data analysis, together with the improvements made, will then be published in future studies. This activity will bring us one step closer to the full validation of OGM-MCL. In addition, the validity and applicability of OGM-MCL will be evaluated using other projects, case studies, and/or illustrations.
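The feedback-driven analysis of the search engine described above could, for instance, start from per-query success statistics. The sketch below is an assumed illustration of that idea (the data layout and function name are hypothetical, not from the project):

```python
# Hypothetical sketch: given per-session records of whether users
# reported finding what they searched for, compute a success rate per
# query so that weak spots in the search function can be identified.
from collections import defaultdict

def search_success_rates(feedback):
    """feedback: iterable of (query, found) pairs, found being a bool.
    Returns {query: fraction of sessions where the result was found}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for query, found in feedback:
        totals[query] += 1
        if found:
            hits[query] += 1
    return {q: hits[q] / totals[q] for q in totals}
```

Queries with a persistently low success rate would then be candidates for improving the indexing or keyword suggestions.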

Author Contributions

Conceptualization, M.Z., J.S. and L.M.C.-M.; methodology, M.Z., J.S. and L.M.C.-M.; validation, M.Z., J.S., L.M.C.-M. and R.J.-G.; formal analysis, M.Z., J.S. and L.M.C.-M.; investigation, M.Z., J.S., L.M.C.-M. and R.J.-G.; resources, J.S. and R.J.-G.; data curation, M.Z., J.S., L.M.C.-M. and R.J.-G.; writing—original draft preparation, M.Z., J.S., L.M.C.-M. and R.J.-G.; writing—review and editing, M.Z., J.S. and L.M.C.-M.; visualization, M.Z., J.S., L.M.C.-M. and R.J.-G.; supervision, J.S., L.M.C.-M. and R.J.-G.; project administration, J.S. and R.J.-G.; funding acquisition, J.S., L.M.C.-M. and R.J.-G. All authors have read and agreed to the published version of the manuscript.

Funding

Fundação para a Ciência e Tecnologia (project UIDB/00066/2020) and the European Commission ERASMUS+ programme through grant no. 2020-1-FR01-KA202-080231 (ED-EN HUB).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data are presented in the main text.

Acknowledgments

This study was supported by the Center of Technology and Systems (CTS-UNINOVA).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Part of the questionnaire used to evaluate the adequacy of OGM-MCL in its application on EDENCP. Each statement was rated on a four-point scale (SD, AD, AA, SA), with the additional options "I don't know" and "I'm not sure".

The Questionnaire Used for Assessing the Adequacy of OGM-MCL on EDENCP

Organizational dimension
  • It is important that the EDENCP be a user-driven service (users may be co-creators of the service).
  • It is important that the EDENCP engage diverse groups (e.g., the general public, experts, and professionals) in the process of learning.

Environmental dimension
  • It is important that the EDENCP be open for all people to contribute.
  • It is important that the EDENCP provide three levels of access (for three groups of users: partners, administrators, and general users).

Admission dimension
Inclusion
  • It is important that the EDENCP facilitate the process of joining (inclusion in) groups, hubs, and communities.
  • It is important that the EDENCP provide free access for all users.
Accessibility and Proximity
  • To promote the quality of contributions and develop transparency, it is important that the EDENCP reduce anonymity.
  • It is important that the username be associated with the user’s contributions (to facilitate the monitoring of contributions).

Social dimension
Collaboration
  • It is important that the EDENCP build a network for career development.
  • It is important that the EDENCP provide a "discussion forum" for collaboration.

Functional dimension
Content Management
  • It is important that the users support the process of creating, sharing, and developing the content.
  • It is important that the EDENCP continually develop and update the content.
Operation Management
  • It is important that the EDENCP save users’ personal information and contributions to their profiles.
  • It is important that the EDENCP provide a "monitoring system" to constantly monitor transactions.
Interaction Management
  • It is important that the EDENCP provide an appropriate service for internal interactions, such as sharing resources, training, and learning materials.
  • It is important that the EDENCP provide an appropriate service for external interactions, such as exchanging expertise and findings.
Human Resource Management
  • The users should be treated equally.
  • It is important that the EDENCP provide an advisory board.

Economical dimension
Supports and Services
How important do you think the following two services could be for the economic sustainability of the platform:
  • Benefiting from private and public funding, grants, financial aid and donations, capital from investors and sponsors, and advertising.
  • Providing supportive training and learning services for schools, organizations, institutions, businesses, and companies.

Technological dimension
  • It is important that the EDENCP provide web-based communication.
  • It is important that the EDENCP provide a search engine that helps participants find particular information and services provided in the hub.

Structural dimension
Participants
  • It is important that users of any age, background, culture, and gender be able to contribute to the EDENCP.
  • Users will not be paid; they will contribute on a volunteer basis.
Roles and Tasks
  • It is important that the users be able to play different roles (e.g., expert, advisor, trainer, trainee, editorial, researcher, technical, managerial) based on their qualifications.
  • It is important that the users be able to engage in multiple tasks (e.g., training execution, providing learning content, delivering the content, exchanging the content, executing, providing support, commenting, reporting).

Behavioral dimension
  • The partners and administrators have the authority to bring about structural changes in the EDENCP.
  • The general users can contribute to decision-making processes.
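One possible way to aggregate the Table A1 responses is sketched below. It assumes the four scale points SD/AD/AA/SA are mapped to 1–4 and that "I don't know" / "I'm not sure" answers are excluded from the mean; this mapping is an illustrative assumption, not the project's actual scoring scheme.

```python
# Illustrative aggregation of adequacy responses for one dimension.
# The 1-4 mapping of the scale points is an assumption for this sketch.
SCALE = {"SD": 1, "AD": 2, "AA": 3, "SA": 4}

def dimension_adequacy(responses):
    """responses: list of answer strings for one dimension's items.
    Returns the mean mapped score, ignoring non-scale answers
    ("I don't know", "I'm not sure"), or None if no scale answers."""
    scores = [SCALE[r] for r in responses if r in SCALE]
    return sum(scores) / len(scores) if scores else None
```

Per-dimension means computed this way would let the evaluators compare, say, the organizational and technological dimensions at a glance.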

Appendix B

Table A2. Part of the questionnaire used to evaluate the feasibility of OGM-MCL in its application on EDENCP. Each item was answered with YES or NO.

The Questionnaire Used for Assessing the Feasibility of OGM-MCL on EDENCP

F1
  • It should be open to all and be usable for multiple purposes (e.g., searching and sorting info).
  • It should have different access levels and provide a comprehensive list of available info.
F2
  • It should be able to engage different members in multiple communication tasks.
  • It should be able to create collaboration spaces around different domains/topics.
F3
  • It should be able to use different characteristics to manage training planning.
  • It should be able to consider the use of different training modes in training planning.
F4
  • It should be able to define clear, inclusive rules for admission.
  • It should be able to provide a learning management system (e.g., Moodle).
F5
  • It should be able to create diverse group profiles for contribution.
  • It should be able to open the processes of curriculum design to all.
F6
  • It should open the insertion of new competence demands to all.
  • It should be able to suggest concepts for writing competence demands.
F7
  • It should be able to authorize users to take different roles in the evaluation process.
  • It should provide easy and free access to verification of the evaluation results.
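A minimal sketch (assumed, not from the paper) of how the YES/NO feasibility answers in Table A2 could be summarized: the feasibility of each function F1–F7 is taken as the share of YES answers to its items.

```python
# Hypothetical summary of YES/NO feasibility answers per function.
def feasibility_by_function(answers):
    """answers: {function: list of "YES"/"NO" answers across items
    and respondents}. Returns {function: fraction of YES answers},
    skipping functions with no answers."""
    return {f: a.count("YES") / len(a) for f, a in answers.items() if a}
```

A function whose YES share falls below some agreed cutoff would then be revisited before implementation on the platform.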

Appendix C

Table A3. The questionnaire used to evaluate the effectiveness of OGM-MCL in its application on EDENCP. Each item was rated on a scale from 0 to 4.

Questionnaire for Evaluating the Effectiveness of Considered Dimensions of Collaboration and Their Related Factors, Features, and Elements That Have the Potential to Be Implemented on EDENCP

1. Questions on the search engine for finding information and tools
  1. It is effective to use it for multiple purposes (e.g., searching and sorting info).
  2. It is effective for it to be flexible to change (e.g., open to adding new info or keywords for searching).
  3. It is effective for it to have cross-functional capabilities (e.g., searching multiple features).
  4. It is effective for it to be customizable (e.g., containing search features that adapt to users’ profiles).
  5. It is effective to use advanced functions (e.g., searching in other external hubs).
2. Questions on collaboration
  1. It is effective to engage a diverse group of people in multiple communication tasks (e.g., a chat forum).
  2. It is effective to create collaboration spaces around different domains/topics.
  3. It is effective to promote the available skills or compile a set of needed competences.
  4. It is effective to invite users to voluntarily contribute to collaborative activities based on their interests.
3. Questions on managing training
  1. It is effective to use different characteristics to manage training planning (e.g., cost, success rate, learning needs).
  2. It is effective to use different training modes (blended learning) in training planning.
  3. It is effective to consider inclusive training and learning.
  4. It is effective to give freedom to choose courses (helping to prioritize learning needs based on the user’s profile).
  5. It is effective to create a training plan, such as evaluating the contents collectively (with a procedure to follow).
4. Questions on training execution support
  1. It is effective to analyze what kind of approach or balance can best be used in training execution for different training modes (virtual vs. traditional classes).
  2. It is effective to define clear, inclusive rules for admission.
  3. It is effective to use analytics over collected performance data to improve further training execution and its learning engagement strategies (i.e., knowledge management).
5. Questions on curriculum design
  1. It is effective to create diverse group profiles for contribution.
  2. It is effective to design a curriculum based on the demands of companies.
  3. It is effective to share training elements (e.g., learning strategies, processes, materials, and experiences) for collaborative curriculum design.
  4. It is effective to facilitate the process of continuous curriculum adaptation.
6. Questions on the insertion of new competence demands
  1. It is effective to open the insertion of new competence demands to all.
  2. It is effective to create an easy invitation procedure for the contribution of new companies.
  3. It is effective to encourage voluntary and active participation in finding new competence demands.
  4. It is effective to facilitate open discussion about the new competence demands.
7. Questions on tools to evaluate performance
  1. It is effective to create different levels of evaluation.
  2. It is effective to authorize users to take different roles in the evaluation process.
  3. It is effective to provide easy and free access to verification of the evaluation results.
  4. It is effective to create different roles for the evaluation procedure (e.g., evaluators, users, programmers).
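The 0–4 ratings of Table A3 lend themselves to a per-group summary; the sketch below is a hypothetical illustration of such an aggregation (the data layout is an assumption, not the project's analysis procedure).

```python
# Hypothetical summary of 0-4 effectiveness ratings: a question group's
# effectiveness is the mean rating over its items and respondents.
def group_effectiveness(ratings):
    """ratings: {group: list of per-item lists of 0-4 respondent
    ratings}. Returns {group: overall mean rating, or None if empty}."""
    result = {}
    for group, items in ratings.items():
        flat = [r for item in items for r in item]
        result[group] = sum(flat) / len(flat) if flat else None
    return result
```

Comparing these means across the seven question groups would highlight which platform functions the respondents considered most and least effective.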

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
