
An Organizational and Governance Model to Support Mass Collaborative Learning Initiatives

Majid Zamiri
João Sarraipa
Luis M. Camarinha-Matos
Ricardo Jardim-Goncalves
School of Science and Technology and Center of Technology and Systems (CTS-Uninova), NOVA University of Lisbon, 2829-516 Monte de Caparica, Portugal
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 8356;
Submission received: 8 July 2022 / Revised: 9 August 2022 / Accepted: 17 August 2022 / Published: 21 August 2022


Mass collaboration can bring about major transformative changes in the way people work collectively. This emerging paradigm promises significant economic and social benefits and enhanced efficiency across a range of sectors, including learning and education. Accordingly, this article introduces, demonstrates in use, and evaluates an organizational and governance model designed to provide guidance and execution support for the implementation and operation of mass collaborative learning initiatives. The design science research process is adopted to guide the design and development of the proposed model. The model stands on three streams of work, addressing key aspects and elements that have a supporting influence on community learning: (i) identify the positive and negative factors in existing and active examples of mass collaboration; (ii) adopt contributions of collaborative networks in terms of structural and behavioral aspects; and (iii) establish adequate learning assessment indicators and metrics. The model is used for a case study in which vocational education and training meet the needs of collaborative education–enterprise approaches. Initially, the model is validated by the partners and stakeholders of a particular project in the area of education–enterprise relations to ensure that it is sufficiently appropriate for application in a digital platform developed by that project. The first three steps of the proposed applicability evaluation (adequacy, feasibility, and effectiveness) are then performed. The positive results gained from the model validation and its applicability evaluation in this project indicate not only that the model is fairly adequate, feasible, and effective for application in the developed digital platform but also that it has high potential for supporting and directing the creation, implementation, and operation of mass collaborative learning initiatives.
Although the validation was carried out in the context of a single project, in fact, it was based on a large “focus group” of experts involved in this international initiative, which is in accordance with the Design Science Research method. Thus, this article reflects a kind of applied research of a socio-technical nature, aiming to find guidelines and practical solutions to the specific issues, problems, and concerns of mass collaborative learning initiatives.

Graphical Abstract

1. Introduction

Recent trends in the contexts of information and communication technologies (ICTs), collaborative networks (CNs), and community learning (CL) have paved the way for fostering a special form of networked community, known as the mass collaborative learning community, which has the potential to change the future of working life and education. Such a constructive movement has made it possible for many scattered but interested people to participate in mass collaborative initiatives and harness their joint potential, aiming to deal with common, complex problems that cannot be solved individually. Furthermore, this evolving phenomenon opens new doors for the public to participate in vocational and informal learning practices outside the traditional education system, promoting their knowledge, experiences, and competencies; it is thus an effective method for lifelong learning. The growing number of mass collaborative projects is now reshaping the boundaries of social and collective action, both locally and globally. The evolution of mass collaboration and its application to different domains, particularly in education and learning, is enabling an unlimited number of learners to develop robust hubs of resources and competencies, assisting in the search for a wide variety of potential solutions for learning challenges [1,2].
Given the above and under the general umbrella of CNs [2], mass collaborative learning (MCL) occurs “when a large number of distributed and self-directed contributors share their partial knowledge, information, data, and experiences with each other (typically via ICT platforms) in order to learn something new. In this collective action, knowledge is jointly and continually created, shared, and developed which empowers participants with the capability to increase their chances of success” [1]. In MCL, many enthusiastic and autonomous learners with different mindsets and backgrounds come together and attempt to establish new methods and develop different scenarios for virtual collective learning that are not common in traditional methods of education. In other words, people of different nations, religions, ethnicities, and socioeconomic statuses, regardless of age and gender, voluntarily join a learning community in which they can actively contribute to lifelong learning at different levels [3]. In sum, the main specific features of MCL that should be underlined include the number of participants (mass), communication channels (digital tools), the process of interaction (collaboration), sense of being a community member (spirit), materials for exchange (knowledge, information, and data), and common goals (learning).
Contrary to traditional learning systems, in which students are typically gathered under one roof at a specific time and place and in which classroom management and the dominant teaching style are teacher-driven, the MCL approach shifts towards online, non-hierarchical, user-driven, and innovative learning environments [1,4]. According to [1], the MCL community provides an opportunity for learners to align themselves around a shared vision, which creates a sense of connectedness around a common goal. Indeed, a well-developed MCL community (which represents a dynamic and democratic system) can nurture a culture of knowledge creation and exchange [2]. Additionally, some prior studies [5,6] have provided evidence that there are several associated characteristics of any MCL community that include but are not limited to the following:
  • It allows broad groups of heterogeneous people (who might be dispersed through time and space) to interact in diverse ways, allowing them to reap the power of collaboration.
  • It connects the systems, organizations, and people that are keen to work and learn across boundaries.
  • It encourages public engagement in both casual and research-oriented study of subjects of personal interest.
  • It enables learners to share knowledge and experiences and learn from each other, thereby promoting their capabilities in obtaining rapid yet significant progress.
  • It facilitates fast cycle learning toward impact at scale.
  • It increases global interactions with experts and peers.
  • It makes the process of learning relatively creative, cost-effective, and flexible.
  • It provides an easily accessible public digital data repository for all its members.
It should also be added that the effective use of ICT tools in MCL communities not only makes learning more interactive and easier but also enhances the modes of communication. Despite the tremendous progress in this area, the notable achievements, and the positive results that MCL has obtained over the years, it still faces a set of challenges [1,3,6]:
  • There is insufficient evidence about the successful application of MCL in various fields.
  • The concept, organizational structure, and associated mechanism of MCL are still evolving.
  • The practice of mass collaboration in the learning community is not clearly formalized.
  • There are some ambiguities about the key strategies for stimulating people to join the community and to keep them motivated in providing contributions.
As a result, this promising and complementary approach to learning still appears to many as a novel, largely unexplored way of studying that has not yet reached audiences around the world.
On top of that, the authors believe that the main components, dimensions, and features of MCL have neither been explicitly identified and explained nor all adequately assessed. Hence, this study addresses these specific issues, considering that MCL is still evolving in the education context and that there are several different views and controversial assumptions about the subject. For instance, different researchers have different (though closely related) understandings of the concept. Fundamental issues such as the principles, boundaries, formation, processes, and development of MCL communities are still vague. More specifically, it is extremely complicated for developers to apply MCL to different fields successfully and effectively. Certainly, every field or environment has its own demands and circumstances that need to be taken into account in advance [4]. All gathered evidence shows that this area of knowledge and research is still in its infancy but is growing steadily. Therefore, it requires in-depth reviews and further research, inquiry, analysis, and contributions to provide better clarification.
Given this context, the motives for conducting this study are the following:
  • Filling part of the gap mentioned above;
  • Identifying the influential factors that support the creation, operation, and development of MCL initiatives;
  • Gaining some insights into the main features of an MCL community.
Therefore, the key research question that emerges is as follows:
What could be an effective way of supporting community learning through mass collaboration?
In that account, the main contribution of this study is to propose an organizational and governance model for MCL initiatives (OGM-MCL) by relying on a design science research method that serves to clarify how community learning can be supported by MC.
The proposed hypothesis to address this research question is as follows:
Community learning can be effectively supported through mass collaboration if three streams of work are appropriately rooted in the foundation of a community: these include (I) identifying the positive and negative factors in existing and emerging successful examples of MC; (II) adopting contributions from collaborative networks in terms of structural and behavioral models; and (III) establishing adequate learning and performance assessment indicators and metrics.
In sum, this work introduces the OGM-MCL, designed to steer and support the implementation, operation, and development of MCL initiatives. This paper is neither a policy paper in the strict sense of “policy” nor does it intend to present an algorithm or software development. It is rather an applied research paper and a contribution of a socio-organizational-technical nature, including an organizational and governance model, whose main aim is to provide guidance to the developers of new MCL initiatives.
The remainder of this article is structured as follows: related works are briefly reviewed in Section 2; in Section 3, the research method used for this study is explained first, as it steers both the structure of the paper and the way the contributions are presented. Then, the proposed OGM-MCL is presented in detail. In Section 4, the approaches for demonstrating and testing the applicability of the OGM-MCL on an EU project/case study are explained. In Section 5, the process, method, and instrument used for evaluating the validity of the OGM-MCL are described. Finally, Section 6 provides some discussion about the findings of this study and briefly looks into possible future studies.

2. A Brief Overview of Related Work

Since the beginning of the 21st century, much effort has been placed into transforming the methods of learning, with a focus on individual learning as part of a community. Online learning communities are now emerging in educational settings [7]. The term learning community generally refers to a group of people with shared learning goals who collaborate with one another [8]. Learning communities can take different types and appear in different forms, for example, the “professional learning community” (a collaborative group of educators who work together to constantly create new knowledge with the aim of putting it into practice) [9]; the “professional learning network” (a group created by professional educators that supports formal and informal learning processes through a learning network) [10]; and the “community of practice” (a voluntary group of people who have a common interest in a specific domain and come together to share best practices with each other) [11]. MCL, as a kind of online learning community, takes place when a sufficiently large number of distributed autonomous participants work together or in parallel on a single project and share their resources and commonalities to solve a complex problem, one that is often considered insoluble and/or beyond the ability of any individual and that needs the confluence of different contributions from a variety of backgrounds. Such collaboration is typically mediated by the contents or objects being created (a kind of stigmergy) and occurs mostly, but not always, over the Internet, using social software and computer-supported collaboration tools.
The fact is that MCL as an emerging approach is viewed from different perspectives (e.g., computer science, computational linguistics, network science, psychology, pedagogy, economics, knowledge management, and collaborative learning), and different researchers have different viewpoints about this new research field [12]. On the other hand, the interdisciplinary nature of MCL, coupled with the complexity of such a system, necessitates further investigation and broader collaboration among researchers toward acquiring a clearer and deeper understanding of the concept and related issues. Therefore, to understand the concept, mechanism, and process of MCL, for example, the authors of [6] focused on organizational structure; [3] relied on cognitive, sociocultural, and systemic frameworks; and [13] concentrated on theoretical frameworks and socio-technical environments. In the context of MCL, learning (as the common goal of participants) is considered at different levels, namely, individual [14], collective [15], or both [16]. Moreover, MCL addresses the concept of “networked knowledge” and also the way in which knowledge processes (knowledge acquisition, knowledge creation, knowledge exchange, and knowledge development) are considered in a community of learners [17]. Such important information about MCL (gained through various theoretical and empirical research) has helped us better understand, describe, and design MCL initiatives.
In line with this background, in order to gain an improved understanding of the research trends and recent advances in the three abovementioned workstreams, we briefly and separately review the related literature in this section.

2.1. Identifying the Positive and Negative Factors in Existing and Emerging Examples of Mass Collaboration

The Internet provides an opportunity for millions of people to contribute to collaborative scenarios—like never before—namely to achieve unprecedented improvements in education and learning. Over the last years, MC has made a profound impact on societies in the areas where it was applied [3]. In the context of education and learning, for example, the Massive Open Online Courses (MOOCs) initiative in recent years has provided free online courses available for anyone to enroll, with no limit on attendance [12]. Citizendium [18] is another example in which volunteer contributors take part in an open wiki project to dedicate themselves to creating a free, comprehensive, and reliable repository of structured knowledge under gentle expert oversight.
Recently, a few studies [5,19] have attempted to find out how such mass collaborative environments are built up, coordinated, and developed. Some studies, such as [20], highlight the need for more clarification on the main processes and functions that should be considered in mass collaborative projects. It is essential to have a strong and common understanding of the principles that drive a large number of individuals to work collectively and achieve things that were previously unimaginable. Moreover, it is important to identify the unique characteristics and main features that set MC apart from other kinds of collective action. With the intention of obtaining a better overview of these issues, we analyzed some representative cases of MC. For that, 15 case studies of MC from different contexts were identified, and their positive and negative factors are listed in the following. These factors are identified and assessed from the MCL point of view and are used for the purpose of community learning in MCL initiatives:
  • Wikipedia—a web-based, free-content, and Internet-based encyclopedia written and maintained by a community of volunteer editors in which users can freely share their knowledge [21]:
    Positive factors—it is free and contributed to by volunteers. It is open access, inclusion is easy, and anyone can participate. Users can play different roles and perform different tasks. It has no power hierarchy. Users are treated (almost) equally. Articles are continuously developed, updated, and checked. Consensus can be reached through friendly and open discussion.
    Negative factors—Wikipedia editors are anonymous. The quantity or frequency of contributions is not controlled. Not all content will be accurate. The scientific level of articles varies. Contents are not free from bias. Anyone can vandalize the articles. Some users might have fake credentials.
  • Digg—a social news website that enables users to submit their interesting stories to be selected and voted (“Digg” or “Bury”) by other users [22,23]:
    Positive factors—it is a user-driven website, and it is open to anybody. It has easy inclusion, although login is mandatory and users need to create a Digg user account. Users are volunteers, and they can play different roles and participate in different tasks. Users can add friends and develop their relationships. Users’ information and contributions are associated with their Digg profile. Stories are classified into different groups based on topics; good stories will be promoted. Contents are checked by the system. Digg raises capital from investors.
    Negative factors—there is no editorial control on submissions. An influential group of users can affect information credibility by using promotions, burying information, and votes. Users cannot share their opinions because Digg lacks commenting features on the website.
  • Yahoo! Answers—this was a Q&A platform or knowledge market that allowed users to ask questions on any topic and/or answer others’ questions by sharing facts, opinions, and experiences [24] (Yahoo! Answers was shut down on 4 May 2021):
    Positive factors—it was an open learning community, available in 12 languages, and open to all. Users could connect, share info, add comments, ask questions, answer others’ questions, and/or vote. There were some categories with multiple sub-categories for organizing questions. There was a “Point System” (scoring) and a “Voting System”. Users could receive a “badge” under their name, e.g., naming them as a “Top Contributor”. Staff could reach different levels of authority and site access. Supportive users were featured on the Yahoo! Answers Blog. The “user moderation system” handled its misuses. Posts could be detached if they received a sufficient negative weight. It was supported by funds and financial aid. It provided diverse supportive services.
    Negative factors—users could use any name and photo for opening an account. There was no system to filter incorrect answers. Answers often contained improper grammar and incorrect spelling. Once the “best answer” was chosen, there was no chance to add more answers, nor was there space for improvement.
  • SETI@home—a computing project and scientific experiment that allows anyone with a computer and an Internet connection to search for signs of extraterrestrial intelligence [25]:
    Positive factors—it is open to anybody. It features easy inclusion. Participants are volunteers and can build a team and make competitions. It has a “Voting System” to determine the validity of the results. Its “Credit System” can monitor how much work was performed. It can raise financial donations.
    Negative factors—the risk of cheating (for gaining credit) is high. Some participants might misuse the resources of the projects to gain work-unit results. The projects cannot share their resources.
  • Scratch—an online community and free programming language tool that allows users to create their own stories, animations, games, art, and music [26]:
    Positive factors—it is open to anybody and available in 70+ languages. It can be used in different settings: schools, libraries, community centers, museums, and homes. Users can ask questions, share their creative ideas, stories, and projects, obtain feedback, and collaborate with others. If something breaks the community’s rules, Scratch will take the corresponding action (e.g., sending a warning, removing the content, or blocking the account).
    Negative factors—without creating an account, users can make contributions (e.g., create their own projects, read, and upload comments). Users can create several accounts.
  • Galaxy Zoo—a citizen science and crowdsourced astronomy project that invites people to help classify the morphology of more than a million galaxies [27]:
    Positive factors—it has easy inclusion. Users are volunteers, and creating a user account is necessary. The username is associated with the user’s contributions. It uses computer technologies and human intelligence for the classification of galaxies. It monitors and analyzes some of the contributions and transactions. Information is stored in a secured database. It uses “Amazon Web Services” to rapidly serve the website to a large number of people. It raises funds.
    Negative factors—using the real name is not necessary for registration. Personal information cannot be completely removed from the system. The classification system cannot provide feedback about the process of classification.
  • Foldit—a crowdsourcing computer game and protein-folding puzzle for which its solutions help scientists in targeting and eradicating diseases and in creating biological innovations [28]:
    Positive factors—it is open to all. It has easy inclusion, engaging the general public and scientific teams in online research. Players can use the Foldit forum for collaboration, e.g., to train new players. It relies on human-computer interaction. It has a “Ranking and Awarding System”. The website records, monitors, and stores the posts and interactions. It publishes all important scientific discoveries. The results can be used in scientific publications. It benefits from grants.
    Negative factors—players can play without an account, so there are many anonymous identifiers in the community. It is not easy to learn and play Foldit. Playing Foldit needs a reasonably powerful computer.
  • Applications of the Delphi method—a systematic and qualitative method that evaluates the results of multiple rounds of questionnaires sent to a panel of experts. This method can be used to estimate the likelihood and outcome of future events [29]:
    Positive factors—there are different types of Delphi. Panel members are selected and invited. The experts can discuss or comment on others’ forecasts, and all the experts and their forecasts are given equal weight. It can be applied in several different fields of science. It can raise funds.
    Negative factors—the potential experts might not agree or be available for participation. The method is not able to make complex forecasts with multiple factors. The response times might take several days or weeks.
  • Climate Colab—an open problem-solving platform that harnesses the collective intelligence of thousands of people to find solutions for global climate change [30]:
    Positive factors—it benefits from the contributions of experts and crowds, and it has easy inclusion. Users are volunteers and can play different roles and perform different tasks. Users can collaborate on the platform with whoever is interested in similar topics. Users can comment on others’ proposals. It has a “Voting System”, “Rewarding System”, “Messaging System”, and “expert advisory board”. On the website, there is a list of community members and their points, roles, activities, and membership dates. It raises funds and financial support.
    Negative factors—it must continuously identify, invite, and maintain a large number of different experts. It uses a top-down approach in the community.
  • Assignment Zero—an experiment in crowd-sourced journalism that enables professionals and amateurs to engage in collaborative reporting [31]:
    Positive factors—it is open to all. Users are volunteers. Users must create a user account by providing their real full name and a valid email address. There is a list of tasks that users can perform. Users can contribute to different topics. Users are encouraged to make themselves known to the public by providing their biography. It gives credit to the contributions. It is supported by funds.
    Negative factors—users might produce and share stories recognized as useless; interviews often take place face-to-face, so the candidates must live close to the interviewee.
  • DonationCoder—an online community of programmers and developers that help people by organizing and financing software development [32]:
    Positive factors—it provides free tools and services. Registration needs a valid email address. There are different forms of communication. All users are considered equal. It benefits from grants and donations.
    Negative factors—users can sign up at the website using different emails and names. Some sections of the website are available only to donators. To participate in the forum, users must first donate, after which they receive a license key, register a forum account, and finally upgrade it. The contracting and consulting services are not cheap.
  • Experts Exchange (EE)—a trusted community of technology experts that help people solve their technology problems [33]:
    Positive factors—users must register with an accurate email address. Users are not allowed to have more than one account. Users are volunteers. EE covers over 230 tech topics and prioritizes the contents based on usefulness. Users can receive recognition and secure credentials with “Credly” (a digital badge platform that provides digital credentials to individuals through working with credible organizations). EE provides a variety of professional training courses on a wide variety of topics, and it produces various video tutorials.
    Negative factors—EE provides answers only via paid mode. If a user account is past due, EE might cancel it for non-payment.
  • Waze—a community-driven GPS and navigational app that connects drivers to one another, allowing them to work together to find directions and avoid traffic jams [34]:
    Positive factors—it is a user-generated community. It is free to download and can be used anywhere. It relies on crowdsourced information. Users need registration. Users can connect and work together. It offers points to users. Advertising is the main source of generating revenue.
    Negative factors—Waze needs enough initial and active users to collectively create the local maps and continuously update data to make it useful. A very limited number of countries (13) have a full base map; in the others, the map is either incomplete or not yet used. Waze currently supports only private cars, not public transportation, bicycles, or trucks.
  • Makerspaces—a collaborative workspace in which people have access to different resources that enable them to explore, research, learn, and create products and services [35]:
    Positive factors—it is member-driven. It can take different forms (physical and virtual), shapes, and sizes for different purposes. Most Makerspaces need registration. Users are people with common interests. Users can meet, socialize, collaborate (on projects), co-create, learn new skills, share, research, explore and invent prototypes, solve problems, play, and even boost self-confidence. It benefits from funds and financial support.
    Negative factors—some Makerspaces have membership fees. Physical Makerspaces have been criticized for their high costs associated with tools and materials.
  • SAP Community Network—an open, online, and collaborative community of software users, developers, consultants, mentors, and students who use the network to ask for help, share ideas, learn, innovate, and connect with others [36]:
    Positive factors—it serves as a resource repository and a platform for SAP users to collaborate with each other, and it is open to all. Users are volunteers. It offers/hosts discussion forums, tutorials, expert blogs, an SAP code sharing gallery, utilities, a technical library, a wiki, article downloads, e-learning catalogs, and other facilities through which users can contribute their knowledge. It has its own channel on YouTube. Users’ knowledge contributions to the community can be quantified. It has a contributor recognition program (CRP) that awards points to community users for contributions. SAP publicly recognizes its most active contributors. It has over 430 spaces (sub-groups).
    Negative factors—knowledge flows are not measurable. Previously asked questions are not easily accessible. It is not possible to browse the list of problems by theme. There is no direct way to navigate to the blogs section. It is difficult to find the important and most-liked blogs.
From the list above, some positive and relevant features of the MC cases are selected, appropriately adapted, and then used as input to OGM-MCL. The features that are considered negative (from the MCL point of view) highlight the red lines that alert decision-makers to their immediate or long-term risks for the MCL community.
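Among the mechanisms surveyed above, the Delphi method is the most algorithmic: equally weighted expert forecasts are revised over multiple rounds until the panel converges. The following is a minimal sketch of that iterative process; the specific revision rule (moving each estimate partway toward the panel median), the convergence threshold, and the panel values are illustrative assumptions of ours, not part of the method's specification.

```python
import statistics

def delphi_round(estimates, weight=0.5):
    """One Delphi iteration: each expert revises their estimate
    partway toward the panel median (all experts weighted equally)."""
    median = statistics.median(estimates)
    return [e + weight * (median - e) for e in estimates]

def run_delphi(estimates, max_rounds=10, iqr_threshold=1.0):
    """Repeat rounds until the panel converges (interquartile range
    below the threshold) or the round limit is reached."""
    for round_no in range(1, max_rounds + 1):
        q = statistics.quantiles(estimates, n=4)
        if q[2] - q[0] < iqr_threshold:       # IQR small => consensus
            return estimates, round_no
        estimates = delphi_round(estimates)
    return estimates, max_rounds

panel = [120, 90, 200, 150, 110]   # hypothetical initial expert forecasts
final, rounds = run_delphi(panel)
```

Because every revision moves estimates toward the current median, the spread shrinks geometrically, mirroring the consensus-seeking behavior (and the multi-week response times) described for Delphi panels above.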

2.2. Adopting Contributions from Collaborative Networks in Terms of Structural and Behavioral Models

The concept of CNs, which emerged and consolidated as a discipline in recent decades, has influenced political, social, and economic situations globally. CNs have been applied in different industries, governmental and non-governmental social organizations, and service sectors to increase organizations’ survival and value creation capability in a period of turbulent socio-economic change. In response to scientific and societal needs and expectations, a variety of collaborative network forms emerged and developed both in industry and services, such as virtual organizations, virtual enterprises, dynamic virtual organizations, collaborative engineering, professional virtual communities, business ecosystems, etc. Additionally, in this large-scope, multi-disciplinary area, many projects and a growing number of practical cases have been carried out worldwide. These developments have produced an extensive body of empirical knowledge that now needs to be leveraged to support advances in sustainable development [37]. For example, in this direction, the identification of the required structural and behavioral patterns and models that steer the establishment and development of specific CNs is deemed necessary.
Previous studies [38,39] reported that organizational and governance models can support the development of CNs through the following:
  • Synthesizing and organizing the base concepts and key principles of the network;
  • Representing the entities, relations, and communication channels of the network;
  • Defining the recommended functions, processes, and practices to be performed at different levels of the network;
  • Representing the components of the network;
  • Addressing the governance problems that should be solved.
Beyond these, organizational and governance models can describe the structure of the objects that constitute the network and identify the mechanisms that predict the outcomes of interactions. Coupled with the structural aspects, the success and sustainability of CNs require consideration of the behavioral aspects that affect them. A behavioral model focuses on the dynamic behavior of a CN along its lifecycle and is used to better understand and predict actions at both the individual and network levels [6,40]. Two levels should be distinguished:
  • At the individual level, specific behavioral features are at the center of attention; these include personal characteristics (e.g., a participant’s personality, cognitive abilities, competencies, perceived usefulness, and perceived ease of use of the supporting platform) and social characteristics (social incentive/pressure, attitudes towards collaboration, collaboration willingness, and readiness).
  • At the network/community level, some behavioral aspects—in parallel with the individual level—should be taken into account (for example, value systems, feedback systems, rewarding systems, trust-building, preparedness, conflict resolution, and partnership development) [1,40,41,42].
At this point, it is worth noting that “there is a dual relationship between the behavioral aspects and the governance of the CNs: on one hand, the CN governance constraints and/or directs collective and individual behavior of network members, while on the other hand driving forces behind the members’ behavior (e.g., their value systems and character) influence the CN’s governance” [40].
Finally, it is important to highlight that the creation, operationalization, and development of CNs can be a relatively time-consuming and costly process. To deal with these issues and facilitate the process, adopting an appropriate CN reference model is a promising approach. One example is ARCON [38]. Such a reference model can provide a high-level integrated view of a CN from different perspectives. Thus, the ARCON reference model is adapted and used in OGM-MCL.

2.3. Establishing Adequate Learning and Performance Assessment Indicators and Metrics

According to the UN 2030 Agenda for Sustainable Development, namely regarding education [43], learning is at the heart of global education and development: it can empower individuals with the values, skills, and knowledge needed to live in dignity, build a better life, and enhance society overall. To improve learning outcomes, it is important to know what learners are learning, what they need to learn, and how to assess their achievements.
Learning assessments have increasingly gained prominence in education practice, as they reveal the strengths and weaknesses in the process of learning [6]. Assessments can be used as a helpful means for decision-makers and developers (e.g., education systems, employers, families, ministries, communities, civil society groups, and donors) to diagnose, plan, monitor, modify, and improve learning activities. The results of an assessment can also help instructors and planners make better decisions on how to improve instruction and pedagogy; design and improve education programs and curricula; and allocate sufficient resources [44]. To measure learning (program, progress, and outcomes) appropriately, a review of the literature shows a wide range of useful strategies and methods, each of which can be used alone or in combination, depending on the objectives of the assessment. These include, but are not limited to, formative, summative, diagnostic, continuous, norm-referenced, criterion-referenced, subjective, objective, open-book, practical, oral, process, peer, formal and informal, and self-assessment [45,46].
A well-designed learning assessment method should stand on well-formulated indicators. An indicator is a specific, achievable, and measurable value and checkpoint that can be used to show the changes or progress at different levels of learning processes and at different points in time. Indicators help us determine whether or not the learning program objectives have been achieved. Furthermore, during the process of designing, modifying, and developing a learning program or curriculum, various types of indicators can be considered, which are suited to the needs of the plan from the stakeholders’ point of view [47].
Prior studies [48,49] indicate that in each learning assessment method, a set of specified indicators might be used to collect the needed information. The selected indicators should be focused, clear, and specific. There is, however, no simple step-by-step process for choosing the right indicators for a mass collaborative learning network. Selecting appropriate indicators depends on the network’s unique circumstances and the changes the network or community is working towards. Furthermore, additional considerations should be taken into account, such as the type of network/community, the network management model, the objectives of evaluation, the context and level at which the assessment occurs, the available data sources, the time horizon of evaluation, and the frequency of data collection [50]. Previous research studies [51,52], however, suggest a set of learning assessment indicators that demonstrate how well the learning process works at different levels. These indicators include the learning environment and climate, leadership quality, socio-economic conditions, teaching quality, network objectives, network characteristics, the status of the teaching profession, the number of learners, learners’ cultural aspects, the number of learning approaches, time on task, and innovative aspects of learning.
In line with earlier studies and investigations, we also propose some indicators for learning assessment based on the general objectives and conditions of MCL initiatives. To help better understand the role and features of the proposed indicators in OGM-MCL, we place them under four main classes:
  • Input indicators—which refer to the resources required for implementing the learning program (e.g., enough financial support, equipped environment, and staff);
  • Activity indicators—which refer to the activities and operations in the learning program (e.g., motivation factors and active collaboration);
  • Output indicators—which refer to the expected effects/changes achieved by the learning program in the short, intermediate, and long term (e.g., improved knowledge and developed relationships);
  • Impact indicators—which refer to the learning program’s contribution to higher-level strategic plans (e.g., civic awareness and promotion of collaborative learning).
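The four indicator classes above can be sketched as a small data structure. This is an illustrative sketch only: the class labels and example indicators follow the list above, while the code itself and all its names are hypothetical, not part of the OGM-MCL specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IndicatorClass:
    label: str                 # "input", "activity", "output", or "impact"
    refers_to: str             # what this class of indicators measures
    examples: List[str] = field(default_factory=list)

# The four classes of MCL assessment indicators proposed above
INDICATOR_CLASSES = [
    IndicatorClass("input", "resources required for implementing the learning program",
                   ["financial support", "equipped environment", "staff"]),
    IndicatorClass("activity", "activities and operations in the learning program",
                   ["motivation factors", "active collaboration"]),
    IndicatorClass("output", "expected effects/changes achieved by the learning program",
                   ["improved knowledge", "developed relationships"]),
    IndicatorClass("impact", "contribution to higher-level strategic plans",
                   ["civic awareness", "promotion of collaborative learning"]),
]
```

A concrete MCL initiative would replace the example lists with its own indicators, keeping the four-class taxonomy as the organizing frame.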
To make better judgments about the success of an MCL initiative, in addition to learning assessment at the individual level, the performance of the network/community (as a whole) needs to be measured.
In general terms, performance assessment has several meanings and is recognized as an extremely complex task because it simultaneously involves both tangible aspects (e.g., technological and human resources) and intangible aspects (e.g., network culture, users’ interests, and trust) [50]. In the scope of this study, performance assessment is understood as a process that aims at achieving the goals of the collaborative learning initiative by improving the quality of operations and executions. In the literature, not only abundant concrete experiences but also several methods for performance evaluation have been reported, where each method suits a specific condition and purpose of the MCL initiative. According to [1], the methods for performance assessment fall into three main categories, namely, absolute standard methods (e.g., checklist and critical incident methods), relative standard methods (e.g., ranking methods and paired comparison), and objective methods (e.g., management by objectives and 360° appraisal).
Evidence shows good progress in the quality and quantity of methods introduced for performance assessment [53], and the identification of potential dimensions and related indicators has brought about quantitative measurements or qualitative observations that help describe changes [49]. For instance, [54] states that three measurements (as key dimensions) can affect the prosperity of a CN, namely, the inputs (resources) to the collaboration, the health of the collaboration (factors such as trust and commitment), and the outcomes of the collaboration. In another study [55], performance assessment is viewed as a key success factor for the CN, relying on six dimensions: network models of action, network resources and competencies, network culture, financial perspectives, customer perspectives, and the performance of internal processes. In [56], the authors assert that the multi-dimensional performance facets of collaborative networks involve three dimensions (structural, relational, and behavioral), and each dimension has two associated indicators (type and governance, trust and commitment, and individual and collective levels); altogether, they can influence the success of a CN.
Complementarily, findings of earlier studies [50] reveal that performance assessment builds upon a set of critical indicators designated according to the network’s structure and typology, network governance and power, performance assessment objectives, assessment timeframe, assessment level, data-gathering frequency, criteria choice, etc. Indeed, identifying and selecting appropriate indicators for performance assessment is a broad endeavor and should not be limited to the consideration of tangible or intangible benefits. Rather, it should cover a spectrum of features that meets strategic, operational, technical, financial, and cultural aspects. There is also an increasing need to identify different types of measures and indicators to quantify the efficiency and effectiveness of a CN. In [56], it is claimed that “the network performance is assessed throughout examining the effectiveness from the point of view of members’ coordinated activities and collaborative practices as well as the efficiency of reducing wastes, costs, and risks associated with these activities”.
It should be noted, however, that both the specified method for performance assessment and the selected indicators (applicable to a given CN) can rarely be used unchanged in other cases [57]. In other words, each MCL initiative requires defining specific performance indicators, based on network conditions, to evaluate the utility and desirability of its activities. Well-designed performance indicators should establish appropriate links among strategy, execution, and ultimate value creation [58]. Given the above-mentioned issues, and taking into account the basic conditions and requirements of a general MCL initiative, we selected nine dimensions (among others) that are very important for collaboration, as well as several related indicators, to be used for performance assessment in OGM-MCL, as presented in the following section.

3. Proposed Model

This section presents our proposed model; first, however, the research method used in this work is explained.
For the purpose of this work, the design science research process (DSRP) approach [59,60] is adapted. DSRP is a conceptual process aimed at (a) using a systematic process for conducting (design science) research studies, (b) making the presentation of the research study clear and more understandable, (c) building the study upon prior related literature, and (d) providing a mental model for structuring the research outputs. The DSRP paradigm has its roots in engineering and is used basically for problem solving and artifact design. DSRP aims to develop organizational and individual capabilities through the creation of innovative artifacts and the generation of design knowledge. More specifically, DSRP helped the authors of this study to (a) incorporate the principles, practices, and procedures required to carry out the research, and (b) contribute to solving (theoretically and practically) real-world problems by generating, designing, and deploying prescriptive knowledge and novel solutions (the proposed model) for the identified problems. Consequently, the generated knowledge includes detailed information about the solutions and provides evidence showing how the solutions can be effectively used in practice to satisfy the needs of the people dealing with the problems.
DSRP involves six conceptual steps, as shown in Figure 1.
We used DSRP to help design, develop, and validate the proposed mass collaborative learning organizational and governance model (OGM-MCL). This process was performed by following the six above-mentioned steps that are briefly explained in the following:
  • Problem identification and motivation—as pointed out in the introduction, there is insufficient organized information about the structure, main components, dimensions, and features of MCL initiatives. Thus, more input is needed that provides a basis for supporting community learning through mass collaboration. The contributions from this step focus on (1) identifying the main features and specifications of the MCL community, (2) addressing the structural and behavioral aspects of the MCL community, and (3) finding potential indicators that help assess the learning and performance of individuals and community.
  • Definition of the objectives of the solution—this step defines the objectives of the research being conducted and indicates the intended solutions. The objectives of this study are defined and explained in Section 1 and Section 2. In line with this, an extensive literature review was conducted to understand the background of the area and to consolidate what is already known about the concerned issues. In performing this analysis, the principles and models of collaborative networks were used as an underlying framework. Such a review helped us gain a better view of the existing knowledge, latest developments, and current solutions for the identified problems. Consequently, a model is proposed as the main objective, which intends to provide helpful guidelines and directions for supporting and developing the MCL initiatives.
  • Design and development—in response to the identified problems (mentioned in step 1), we proposed OGM-MCL (presented in Section 3.1). OGM-MCL is the main contribution of this study, which is inspired by the contributions and solutions reported in the literature (and partially mentioned in Section 2) in combination with our background knowledge and experience. The OGM-MCL comprises (a) three streams of work corresponding to the organizational part and (b) three phases of evaluation, representing the governance part (see Figure 2).
  • Demonstration—this step demonstrates the efficacy of the OGM-MCL in solving the problems described in the Introduction. Generally, the demonstration can be fulfilled through, for example, experimentation, simulation, application to a case study, proof, an illustration, or a project. In this study, the demonstration took place within an EU project. This step is presented in Section 4.
  • Evaluation—the evaluation step compares the objectives of the solution to the actual observed results from the use of the OGM-MCL in the demonstration. In this step (that is presented in Section 5), we observe and measure to what extent the OGM-MCL can achieve its goals in the used case project.
  • Communication—the inputs and outputs of this study (including the identified problems and their importance, the proposed model, the design and development processes, and the assessment of the OGM-MCL) are shared with others through publications.

3.1. The Proposed Organizational and Governance Model for Mass Collaborative Learning

The proposed OGM-MCL is a general organizational and governance model that intends to provide practical guidelines for researchers, designers, and developers working in this realm. However, for the application of OGM-MCL in any specific MCL initiative (e.g., a farming and agricultural community, or an online consultation service), the model should be appropriately adapted according to the specific objectives, requirements, and circumstances of the initiative.
Taking into account the principles and guidelines of DSRP, we proceeded to the design and development of OGM-MCL (step 3). As observed in Figure 2, OGM-MCL encompasses two main parts:
  • Organizational part—which embraces three streams of characterization work:
    (Stream 1) main features of the MCL community—in this stream, the nine main dimensions of collaboration and their related factors and features are addressed. These dimensions are suggested by considering the types and classification of the factors and features of the 15 MC case studies mentioned above.
    (Stream 2) adapting structural and behavioral models—in this stream, the proposed structural and behavioral models along with their components are introduced. They can align and relate different parts of the MCL communities and bring about consistency.
    (Stream 3) learning and performance assessment indicators—in this stream, some important indicators are selected that can be used for assessing the learning of members and the performance of the MCL community. Such assessment indicators can help determine whether or not the objectives of the community have been achieved.
  • Governance part (or evaluation process)—which contains three main evaluation steps, namely adequacy, feasibility, and effectiveness. Through these steps, the three streams of the organizational part are evaluated from the adequacy, feasibility, and effectiveness points of view. This evaluation shows to what extent the elements addressed in each stream could be adapted and applied to a concrete case of MCL. More detailed information about the governance part is presented in Section 4.1.
The OGM-MCL is illustrated in Figure 2. It is a dynamic model by nature and its components and elements can be changed dynamically based on the objectives, requirements, and circumstances of the use case. OGM-MCL intends to represent some essential elements and features that should be considered in the creation, operation, and implementation of MCL initiatives and communities. Thus, in this direction, the OGM-MCL provides some guidelines and support for the decision-makers, designers, developers, and researchers.
Figure 2. Proposed OGM-MCL.
The following section demonstrates the efficacy and application of the OGM-MCL in an EU project.

4. Demonstration

This section presents the fourth step of the DSRP method and demonstrates the application and validation of OGM-MCL in a practical manner.

4.1. Governance Process

The proposed governance process (shown as the “governance part” in Figure 2) reflects the interrelated factors, practices, relationships, and other influences on the community. On the one hand, the governance process helps orchestrate the demonstration, evaluation, development, and validation of OGM-MCL. On the other hand, it assists in the modification and development of the specified functions of the initiative toward operational optimization. The governance process represents a set of phases and steps through which several tasks are performed to ensure the effective implementation of OGM-MCL in a concrete case of MCL. It consists of three phases (specification, implementation, and exploration) and eight related steps, briefly explained in the following:
  • Specification phase—encompasses steps 1 to 6. It refers to the identification, selection, and documentation of the specific objectives, dimensions, requirements, and functions to be considered for a specific MCL initiative:
    (Step 1)—Creating a list of the initiative’s objectives and outcomes required—these objectives indicate what the MCL initiative wants to achieve through applying the OGM-MCL. This task can be performed by the decision makers and developers in the initiative.
    (Step 2)—Identifying and selecting the potential items (factors, features, and elements) that can be considered for the creation and development of the MCL initiative—the potential items can be identified by reviewing and analyzing the related literature, examples, cases, etc. The selected items should be then customized based on the specific objectives, requirements, and functions of the MCL initiative.
    (Step 3)—Defining the main functions of the MCL initiative—the functions refer to the actions and transactions executed by the MCL initiative. Considering the objectives and requirements of the MCL initiative, different functions can be defined by decision makers and developers.
    (Step 4)—Evaluating the “adequacy” of both the selected items and the defined functions—this step first evaluates whether or not the selected items can reasonably and adequately meet the objectives of the MCL initiative. Similarly, the adequacy of the defined functions can be collaboratively evaluated in relation to the objectives of the MCL initiative.
    (Step 5)—Evaluating the “feasibility” of both the selected items and the defined functions—the first part of this step of evaluation tries to uncover the strengths and weaknesses of the selected items from the feasibility point of view. The feasibility of the items can be assessed by considering the technical capabilities of the MCL initiative and the budget available for implementing them. The feasibility of the functions is pertinent to adjusting the number of functions that could be possibly performed by the MCL initiative and developing their descriptions.
    (Step 6)—Evaluating the “effectiveness” of both the selected items and the defined functions—this step first evaluates the effectiveness of the selected items, aiming to reduce the resources wasted in developing the OGM-MCL and to reach the desired results. The effectiveness of the defined functions can be assessed by the decision makers and developers through, for example, round-group discussions or questionnaires.
  • Implementation phase—embraces step 7. It focuses on designing, developing, and implementing the assets (the selected items and the defined functions):
    (Step 7)—deals with (a) making the desired changes and justifications on selected items and then realizing and designing them, and (b) implementing (e.g., programming) the defined functions to make the functions/services available for users.
  • Exploration phase—includes the last step (8) of the governance/evaluation process. It addresses the operation and functioning of the MCL initiative and the efficiency of its performance at different levels:
    (Step 8)—oversees the operation of the MCL initiative, helping to activate it and keep it operative as a service. Once the MCL initiative has operated for a certain period, its efficiency should be evaluated, for example, by repeating the procedures used in the previous steps. If a problem is detected in any part of the MCL initiative (indicating that it does not work as efficiently as expected), that part should be re-evaluated collaboratively.
The proposed governance process is depicted in Figure 3.
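The three phases and their eight steps can be summarized compactly as plain data. This is a sketch only: the step descriptions are abbreviated from the text above, and the structure and names are hypothetical rather than part of the governance process specification.

```python
# Phases of the proposed governance process, mapped to their steps (1-8)
GOVERNANCE_PROCESS = {
    "specification": [  # steps 1-6
        "create a list of the initiative's objectives and required outcomes",
        "identify and select potential items (factors, features, and elements)",
        "define the main functions of the MCL initiative",
        "evaluate the adequacy of the selected items and defined functions",
        "evaluate the feasibility of the selected items and defined functions",
        "evaluate the effectiveness of the selected items and defined functions",
    ],
    "implementation": [  # step 7
        "design, develop, and implement the selected items and defined functions",
    ],
    "exploration": [  # step 8
        "operate the MCL initiative and re-evaluate its efficiency",
    ],
}

# Steps are numbered consecutively across the three phases
ALL_STEPS = [step for phase in GOVERNANCE_PROCESS.values() for step in phase]
```

Keeping the phase/step mapping explicit like this makes it easy for an initiative to track which evaluation step (adequacy, feasibility, or effectiveness) each of its assets has passed.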
To demonstrate the success of OGM-MCL in solving the identified problems (mentioned above), we used the opportunity to instantiate the OGM-MCL in the ED-EN HUB project.
ED-EN HUB [61] is an Erasmus+ project co-financed by the European Union and developed by a consortium of eight institutions from five different European countries. The ED-EN HUB project aims at improving the quality of education (focusing on, but not limited to, vocational education and training) through the consolidation and systematization of the education–enterprise relationship (in which educational institutes and enterprises work collaboratively on knowledge and competence development). This international cooperation alliance also focuses on the development of the tools and methodologies needed to create synergies between educational institutions and enterprises (at local, national, and international levels).
ED-EN HUB intends to build up a collaboration platform (EDENCP) that provides a supported training and learning environment/hub. EDENCP enables an unlimited number of distributed trainers and learners from different backgrounds to come together and build a long-term collaboration. The participants in the hub attempt to adopt new methods and develop more scenarios for sharing their knowledge, experiences, and ideas, which can help promote their level of education, qualifications, and competencies.
Figure 4 provides a picture of the approach used in our study to demonstrate the applicability and efficacy of OGM-MCL in EDENCP development. On the one hand, this application helped (a) drive and support the process of implementation, operation, and management of EDENCP, and (b) modify and develop the specified EDENCP (system) functions. On the other hand, it (c) improved and developed the OGM-MCL based on the feedback provided by project partners and the developers of EDENCP, and (d) helped move toward partial validation of the OGM-MCL (model).
The following subsections explain and clarify the approaches and instruments used for evaluating the applicability of OGM-MCL to the ED-EN HUB project through the proposed governance process.

4.1.1. Creating a List of the EDENCP’s Objectives (Step 1)

In the first step, the following five objectives are collaboratively created (by the project partners and stakeholders) to help in setting the goals in a way that all EDENCP activities converge in one single direction:
  • (Objective 1)—determining skills requirements that address the main general and specific skills as well as transversal and transferable competencies that are applicable to both initial and continuing education (based on the principle of education industry cooperation);
  • (Objective 2)—co-designing, developing, and training—which provides guidelines and resources that can be used in training events for either people from the educational side who want to implement education–enterprise actions or for people from the business side who need to reinforce their links with the educational system;
  • (Objective 3)—detecting, assessing, and clarifying (policy recommendations)—which represents the synthesis of the experiences developed during the project in the five regions targeted by ED-EN HUB. The recommendations will be drafted by classifying and comparing the main policy objectives to which the creation of the joint structure is associated, the range of developed functions and concrete results achieved, difficulties in the start-up, funding and collaboration models, solutions found in consolidating collaboration, models of public-private collaboration, and governance developed;
  • (Objective 4)—creating career guidance that presents a complete, secure, accompanied pathway, from a person’s first choice of orientation (at age 14, when compulsory education requires the pupil to make a choice) to professional reorientation, through guidance and training developed in collaboration between education/training stakeholders and companies;
  • (Objective 5)—organizational benchmarking (benchmarking process description)—which clarifies what collaboration activities are expected to take place by means of the EDENCP.

4.1.2. Identifying and Selecting the Potential Factors, Features, and Elements (Step 2)

As mentioned in Section 2.1, in order to identify and select the potential factors, features, and elements (which could be implemented on EDENCP), the structures, models, and methods used in 15 cases of MC are reviewed, analyzed, and then summarized. Afterward, the selected items are accommodated in OGM-MCL and then customized for application to the EDENCP. Through the process of OGM-MCL customization for EDENCP, the partners and stakeholders attempt to use the proposed items or alter them (if needed) to suit the EDENCP’s preferences or requirements. For example, instead of supporting open access to the platform for all people, some access restrictions are considered for EDENCP.

4.1.3. Defining the Main Functions of EDENCP (Step 3)

The following eight functions are collaboratively defined for EDENCP by the project partners and stakeholders:
  • (Function 1)—developing an appropriate search engine;
  • (Function 2)—determining the aspects, components, and features of collaboration;
  • (Function 3)—managing the training process;
  • (Function 4)—providing training execution support;
  • (Function 5)—designing curricula;
  • (Function 6)—inserting new competence demands;
  • (Function 7)—providing suitable tools to evaluate the performances;
  • (Function 8)—providing a proper database/service that introduces and offers promising, validated, and trusted tools.
These functions refer to EDENCP’s functionality, capabilities, and features, which together will provide the defined services in accordance with the specifications set out in the project.

4.1.4. Evaluating the Adequacy of Selected Factors, Features, and Elements (Step 4)

To evaluate the “adequacy” of OGM-MCL for usage in EDENCP, a number of positive factors and specific features (mentioned in Section 2.1) that have potential applicability are picked out. To evaluate and benchmark the adequacy and importance of the nominated factors, features, and elements (items), they are addressed in 100 questions, forming the adequacy questionnaire (see Appendix A).
Each question in the questionnaire represents a potential item that might be used on the platform. The questions—based on the specifications and characteristics they present—are classified under nine dimensions of collaboration, namely organizational, environmental, admission, behavioral, social, structural, functional, technological, and economical. This classification facilitates the presentation, analysis, and interpretation of the evaluation results.
The adequacy and importance of the selected items (to be considered for EDENCP) are assessed via a checklist in the questionnaire. There are six possible answers in the checklist for each question, namely strongly disagree (SDA), disagree (DA), agree (A), strongly agree (SA), I don’t know, and I’m not sure. The evaluators (partners, constituting a kind of “focus group” in terms of the design science method) can not only choose one of these possible answers but can also insert comments and feedback (if needed) about the item addressed in each question. It is worth noting that this questionnaire provides a form of global evaluation of the considered dimensions and their respective items.
The questionnaire was sent to each partner of the ED-EN HUB consortium, and the partners were asked to respond to the questions collaboratively (with their internal members experienced in this field of study and work). The questions in each questionnaire were thus answered through the collaboration and confluence of different minds rather than by a single individual. This strategy not only reduced the number of questionnaires that had to be sent out, answered, and evaluated but also increased the accuracy and value of the given answers.
The main results of this step of the evaluation (the average popularity of the adapted items for implementation on EDENCP), obtained from analyzing the five received questionnaires, are summarized in Table 1. To analyze the data obtained from the respondents and calculate the statistics of the answers given to the questions, a weight was attributed to each answer in the checklist as follows: (SDA = 1), (DA = 2), (A = 3), (SA = 4), (I don’t know = 0), and (I’m not sure = 0). In the calculation, each answer to a single question was first multiplied by its attributed weight; the results were then summed and divided by the total number of respondents. The received responses were analyzed manually.
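The weighting procedure just described can be sketched as follows. This is a minimal illustration using the weights stated above; the normalization of the average score to a percentage of the maximum (SA = 4), used here to express “popularity”, is an assumption about how the reported percentages were derived, and the example responses are hypothetical.

```python
# Weights attributed to each checklist answer, as defined in the text.
WEIGHTS = {"SDA": 1, "DA": 2, "A": 3, "SA": 4, "I don't know": 0, "I'm not sure": 0}

def weighted_average(answers):
    """Weight each answer, sum the results, and divide by the number of respondents."""
    return sum(WEIGHTS[a] for a in answers) / len(answers)

def popularity(answers):
    """Average score expressed as a percentage of the maximum score (SA = 4)."""
    return weighted_average(answers) / 4 * 100

# Hypothetical responses of the five partners to one question.
print(weighted_average(["SA", "A", "A", "SA", "DA"]))  # 3.2
print(popularity(["SA", "A", "A", "SA", "DA"]))        # 80.0
```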
To provide an improved perspective, the results of this step of the evaluation are also displayed as a radar chart in Figure 5.
Taking into account the given responses to the questions of the adequacy questionnaire (illustrated in Table 1 and Figure 5) and from the performed analysis, the following can be concluded:
  • The nine considered dimensions are generally accepted by all evaluators (partners) because the average popularity given to all dimensions is above 50% (an indicator of acceptance).
  • Among the considered dimensions, the organizational dimension and its respective items received the highest average popularity (83.57%), whereas the economical dimension received the lowest average popularity (58.50%) from the respondents’ point of view.
  • Analyzing all responses given to every single question (in all dimensions) shows that some of the selected and adapted items (those whose average popularity is lower than 50%) need to be revised, improved, changed, or, in some cases, omitted before moving to the next phase of evaluations. In this direction, the feedback provided by the partners offered very good ideas about which other important points need to be addressed in further developments.

4.1.5. Evaluating the Adequacy of Functions (Step 4)

After evaluating the adequacy of the selected items, the functions defined for the EDENCP were also evaluated from the adequacy point of view during several plenary meetings of the partners and stakeholders. This evaluation focuses on judging whether the functions can adequately meet the objectives of EDENCP. At this step, the evaluation of the adequacy of the functions is performed at the conceptual level.
The results of evaluating the adequacy of the defined functions are shown in Table 2. Considering the available information and focusing on theoretical and conceptual evaluation, the functions that show at least one indication of adequacy for meeting one or more objectives of EDENCP are marked with (X) in Table 2. This table addresses the eight defined functions and the five considered objectives for EDENCP. This step of the evaluation also provides a view of the potential interactions among the EDENCP’s functions and objectives.
The results of the function evaluation give direction to function creation, development, and implementation. These results serve as the basis for the function feasibility evaluations.

4.1.6. Evaluating the Feasibility of Selected Factors, Features, and Elements (Step 5)

In this step of the evaluation, the technical team of the project (from NOVA University Lisbon) assessed the “feasibility” of the selected factors, features, and elements in supporting the EDENCP with the least amount of wasted time, money, and effort. In this manner, they tried to provide a fact-based understanding of the current level of the OGM-MCL’s feasibility and maturity. The gained insight was enriched by employing both knowledge-based questions and application-based techniques. In this process, not only was the technical feasibility of OGM-MCL evaluated based on the available budget and technical capabilities, but the technical risks of OGM-MCL application were also identified, quantified, and reported. Evidence-based evaluations and data-driven decisions were also helpful strategies taken into consideration in this step.
Afterward, the technical team proceeded to a further assessment of the selected factors, features, and elements according to the EDENCP functional requirements. Hence, a second questionnaire was collaboratively developed to collect the partners’ opinions about the judgment of the technical team. In this questionnaire, nine related questions were designed for each function, addressing the most important technical aspects (factors, features, and elements) that should be evaluated for feasibility assurance. The formulated questions are polar (Yes/No) questions, and they are presented in Appendix B.
The results gained from the primary round of questionnaires (used for evaluating the adequacy of the developed and adapted factors, features, and elements) are used in the project as a basis for the “feasibility” evaluation. That is, the factors, features, and elements addressed in the questions that received a popularity of ≥80% are selected for consideration, further evaluation, and probable implementation on EDENCP.
Table 3 summarizes the results of the adequacy evaluation (shown in Table 2), plus the number of questions (used in the questionnaire) per dimension and the number of questions per dimension whose popularity is ≥80%. A threshold of 80% was specifically suggested for the selection of potential items because it creates a balance between the number of factors, features, and elements addressed in the questionnaire and the number of items needed for the feasibility evaluation in the next stage. In other words, in the adequacy evaluation (step 4), 100 factors, features, and elements were addressed in the questionnaire (one item per question). The number of addressed items for application to the EDENCP is relatively high, aiming to provide a reasonable number of potential items for selection and also giving the evaluators a chance to select from the list the items that are best from their perspective. This is because, from a feasibility point of view, it is not cost-effective to implement all of those items in the EDENCP. It is worth noting that the considered threshold (≥80%, which is relatively high) caused a significant reduction in the number of factors, features, and elements to be used in the next steps of the evaluation. This level of threshold and the high percentages of popularity, however, provide a certain degree of assurance for the feasibility consideration.
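The threshold-based selection described above can be sketched as a simple filter. The item names and popularity values below are hypothetical; only the ≥80% threshold comes from the text.

```python
def select_items(popularity_by_item, threshold=80.0):
    """Keep only the items whose popularity meets or exceeds the threshold."""
    return {item: pop for item, pop in popularity_by_item.items() if pop >= threshold}

# Hypothetical adequacy results (popularity in %) for three items.
adequacy = {"user-driven service": 95.0, "open participation": 85.0, "membership fees": 45.0}
selected = select_items(adequacy)
print(sorted(selected))  # ['open participation', 'user-driven service']
```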
The graph illustrated in Figure 6 provides an improved perspective of the results presented in Table 3.
Taking into account the considered threshold and the gained results, the following should be highlighted:
  • The admission dimension has the highest percentage of popularity (75%), followed by the organizational dimension (71.42%). The lowest percentage of popularity (given by the evaluators) belongs to the economical dimension (20%).
  • The low percentage of popularity of some dimensions (in both situations, before and after considering the threshold), namely, economical, functional, and behavioral, shows that these dimensions did not receive much attention (in comparison with the other considered dimensions) from the evaluators’ point of view.
  • The dimensions that gained higher percentages of popularity, namely, admission, organizational, and technological, have a high potential of feasibility (from the evaluators’ perspective) to be implemented in EDENCP. Thus, the focus of attention should be given to these dimensions.
In the second stage of the feasibility evaluation (for the items), the questionnaire (presented in Appendix B) was sent to the same group of partners who participated in the adequacy evaluation, but this time, 18 evaluators individually responded to the questions, because the partners and stakeholders decided on a mix of group and individual evaluations. The results achieved from the feasibility questionnaire are presented in Table 4. The factors, features, and elements highlighted in gray received a popularity of over 60%. They are selected as potential items for the next step of the evaluation (effectiveness). This time, a threshold of 60% was suggested by the partners and stakeholders, as it was believed to reasonably adjust the number of items considered for the effectiveness evaluation. This means that we attempted to narrow down the results through the different steps of evaluation and to minimize the number of considered items to a logical number of possible and feasible items that can be integrated into EDENCP.
Taking into account the results shown in Table 4, the following should be underlined:
  • The low average popularity given to the economical dimension (16.66%) and the technological dimension (41.26%) shows that the majority of the factors, features, and elements addressed in these two dimensions have little chance of being implemented in the EDENCP from the feasibility point of view.
  • The selected factors, features, and elements should be prioritized in the process of implementation on EDENCP. That is, those that received a higher percentage of popularity (e.g., the factors, features, and elements in the social dimension used for function 5, with 100%) need to be given more attention and emphasis.
  • The dimensions should also be prioritized in the process of implementation in EDENCP. For example, among the addressed dimensions, the environmental dimension received a higher percentage of popularity (68.25%). In this manner, developers and decision makers can manage the resources for implementation according to the dimensions’ feasibility and popularity.

4.1.7. Evaluating the Feasibility of Functions (Step 5)

According to the decision of the partners and stakeholders, the evaluation of the “feasibility” of the defined functions led to the collaborative development of their descriptions. In addition, further function evaluation led to the merging of the first function (search engine) with the last function (database/service offering different tools). The first function was eventually renamed “search engine for finding information and tools”. Therefore, the number of functions was reduced to seven, as addressed in Appendix B.
Following the collective evaluation of the defined functions, their definitions were developed as presented below:
  • (Function 1)—search engine for finding available information and tools on EDENCP: provides a software system that is designed to carry out specific searches related to particular competencies (of members), courses, activities, and supporting tools.
  • (Function 2)—collaboration: deals with collaborative practices whereby some members work together to complete a task, solve a problem, and/or achieve a shared goal.
  • (Function 3)—managing training: focuses on flexible strategies that can properly manage different aspects of training from program creation to evaluation and prioritizing learning needs.
  • (Function 4)—training execution support: provides the needed support for (a) training execution, (b) learning engagement strategies, and (c) implementation of performance-based assessment for training proficiency.
  • (Function 5)—designing curricula: a cyclical and analytical process that helps create a training framework able to facilitate and guide the creation of training programs in a particular domain. It integrates different training elements, such as learning strategies, processes, materials, and experiences, that may help design and develop such training program instructions.
  • (Function 6)—the insertion of new competency demands: the process of identifying and adding the key competencies and basic skills (e.g., cognitive skills of critical thinking, problem-solving, and interpersonal skills) required to perform teaching and training successfully. This function supports the identification of competencies that are in high demand by companies. It is designed for three main situations: (a) when an employer recognizes that the company needs new workers, but the new workers have only just arrived from the university and do not have the specifically needed competencies; (b) when the employer recognizes that the existing workers need to improve their competencies or gain new ones for specific tasks; or (c) when workers recognize (by themselves) that they need to gain some particular competencies. The main outcome of this function is improving the existing curricula or adding new ones based on the demands of the companies.
  • (Function 7)—tools to evaluate performance (benchmarking): this function evaluates the performance of (a) EDENCP in relation to its functions and (b) the workers of the companies against the transversal competencies that they have already gained. To do so, it first needs a definition of some specific key performance indicators (KPIs) related to each performance.

4.1.8. Evaluating the Effectiveness of Selected Factors, Features, and Elements (Step 6)

In this step of the evaluation, the technical partners of the project proceeded to assess the “effectiveness” of the selected factors, features, and elements, aiming to judge the degree of their success in achieving the objectives of EDENCP (mentioned in Section 4.1.1). This task is concerned with comparing (at this stage, theoretically) the inputs of OGM-MCL with the desired outputs that it can deliver. Therefore, to find out whether the selected items are effective in obtaining the expected results, a third questionnaire was created. The evaluators/partners were asked to provide a rating for each question, showing how effective the considered items are (from their perspective) in reaching the goals of OGM-MCL. As shown in Appendix C, the considered rates include not at all effective (rate 0), slightly effective (rate 1), moderately effective (rate 2), very effective (rate 3), and extremely effective (rate 4).
Considering the results shown in Table 4, the items with a popularity of over 60% were selected for the effectiveness evaluation, meaning that the items were this time evaluated from an effectiveness perspective. In this step, the evaluators (partners and stakeholders), by giving a rate to each item, attempted to determine how effective the addressed items are in the implementation and utilization of EDENCP. Table 5 presents the results of this step of the evaluation from the five received questionnaires.
As shown in Table 5, except for the economical dimension (which reflected very low average popularity in the previous step of evaluation, so it was not considered in this step), the other dimensions have at least one item for effectiveness evaluation. From the results of this evaluation, the following can be said:
  • Generally, the results of this step of the evaluation show how effective the dimensions are from the evaluators’ perspective. Among the addressed dimensions, the organizational dimension obtained the highest average popularity (90%), whereas the technological dimension received the lowest average popularity (75%).
  • The dimensions and their related items with a percentage of popularity (gained from this evaluation) lower than 80% (the considered threshold for this step) are not regarded as sufficiently effective items to be implemented on the EDENCP. This means that the items with a popularity of 75% (highlighted in gray in Table 5) were all taken out of the list of considerations.
  • In the function implementation (step 7), the dimensions that have a higher percentage of popularity should be prioritized. For example, in the implementation of function 1, priority (based on the available resources and capabilities) should be given to the organizational (95%), structural (95%), admission (90%), and social (80%) dimensions.

4.1.9. Evaluating the Effectiveness of Functions (Step 6)

In several online and face-to-face plenary meetings, the focus group discussed the effectiveness of EDENCP’s functions. In these meetings, the members shared their views and opinions on the following related questions:
  • Can the functions achieve the desired/targeted goals?
  • Can the functions gain a certain degree of success?
  • Can the functions produce the desired effect?
  • Can the functions be operated according to the project plan?
By critically assessing different aspects of the functions’ effectiveness, the partners finally came to the conclusion that the existing (theoretical) evidence provides a convincing impression of the effectiveness of the functions. That is, they decided to keep the functions as they are until the results of the efficiency evaluation (step 9) are reached. Afterward, they can make the necessary decisions and take action if needed.
It should be added that in the specification phase (steps 1–6), some strategies were taken into consideration, namely, engaging the partners at different stages of the evaluation, clarifying the evaluation process, collaboratively designing the process of evaluation and developing the questionnaires, gathering credible evidence, justifying the conclusions, and using and sharing lessons learned. The next step presents the implementation phase.

4.1.10. Adjusting the Selected Factors, Features, and Elements (Step 7)

Here, adjustment refers to a process that makes the small but necessary changes to the selected factors, features, and elements (item values that can no longer be changed directly after the items are selected), especially to make them more correct, effective, and/or suitable for the implementation step.
The selected items are then used in designing, building, and developing the EDENCP. The EDENCP has some tailored and customized features, including but not limited to optimization for search and the social web, easy mobile navigation, unlimited file storage, and promotion opportunities. The selected items and the developed functions will together shape the EDENCP services for the end-users.

4.1.11. Implementing the Functions (Step 7)

The functions were implemented in EDENCP by translating their definitions into a machine-friendly form in a programming language, which can either be directly executed by the machine/system or interpreted by another program that runs on the machine/system. The seven defined functions provide different services for different users at three main levels, as mentioned below:
  • Individual/student level—provides services such as job seeking, internal and external mobility, certification, guidance, reconnection to the market, self-assessment, and social recognition.
  • Company/enterprise level—delivers services such as job and skill planning, internal mobility, recruitment, planning internships, sharing the vision with training and education actors, co-training, and developing partnerships.
  • Educational institutes level—provides services such as planning internships, finding required skills, meeting actual and future societal and employment needs, sharing the vision with companies, co-training, and developing partnerships.
Figure 7 provides a snapshot of the EDENCP environment. It demonstrates that the EDENCP is designed to deliver various services to different types of users.
It should be highlighted that the exploration phase was still in progress in the ED-EN HUB project at the time of concluding this study. The results of its evaluation will therefore be reported in future publications.

5. Evaluation

This section presents the fifth step of the DSRP method. Following the demonstration of OGM-MCL, the results of evaluating its validation to be used for EDENCP are presented in the following subsection.

Validation of the Organizational and Governance Model for Use in EDENCP

In this study, the OGM-MCL is evaluated by considering and adopting the Technology Acceptance Model (TAM) methodology [62]. TAM, one of the most frequently employed models in research on the acceptance of new information technology, focuses on the intention to use a new technology or innovation. It is specifically designed to measure, explain, and predict the acceptance and adoption of information and communication technologies based on customer/user attitudes. A number of factors influence the decision of customers/users, such as perceived usefulness and perceived ease of use.
Furthermore, the validation and appropriateness of the OGM-MCL to be used in the ED-EN HUB project were initially evaluated by taking the following steps:
  • Determining the objectives of OGM-MCL validation;
  • Determining the needed tools for evaluation (questionnaires and interviews);
  • Determining the suitable criteria and parameters for evaluation of the OGM-MCL appropriateness (completeness, purposefulness, perceived usefulness, perceived ease of use, cost-effectiveness, and reasonability);
  • Developing related survey questionnaires;
  • Identifying the potential evaluators (experts, partners, and stakeholders in the projects);
  • Preparing and conducting required interviews with evaluators;
  • Performing validation tests;
  • Analyzing the collected feedback;
  • Reporting the results of the validation.
The developed questionnaire for this purpose, which is presented in Table 6, contains 11 questions, addressing the considered validation criteria and parameters mentioned above.
It is worth noting that, in line with the core constructs used in the TAM, the validation criteria and parameters were set by the partners and stakeholders of the project with respect to the criteria and parameters proposed in the related literature, the strategic objectives of the project, and their expectations for EDENCP. The questionnaire contains six criteria and parameters and eleven questions. Each question should be rated on a 6-point Likert scale (Likert, 1932) including strongly disagree (SDA), disagree (DA), agree (A), strongly agree (SA), I don’t know (IDK), and I am not sure (IANS). The Likert scale questions are formulated to understand the level of agreement of the respondents (project partners and stakeholders, i.e., a “focus group”) with the appropriateness of OGM-MCL. The questionnaire was sent to eight groups of partners/experts, and they were asked to respond to the questions in collaboration with the internal team members who contribute to the project. The results of analyzing their answers/opinions are presented in Table 7. It should be added that this questionnaire was created and analyzed using SurveyMonkey.
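As an illustration of the kind of analysis behind Table 7, the responses of the eight groups to one question can be tallied and converted to a percentage score. The response list below is hypothetical, and the zero weights for IDK/IANS are an assumption carried over from the weighting scheme used in the adequacy evaluation.

```python
from collections import Counter

# Weights for the 6-point scale; IDK/IANS contribute zero (assumption).
WEIGHTS = {"SDA": 1, "DA": 2, "A": 3, "SA": 4, "IDK": 0, "IANS": 0}

def question_score(responses):
    """Percentage score for one question relative to the maximum (all SA)."""
    return sum(WEIGHTS[r] for r in responses) / (4 * len(responses)) * 100

# Hypothetical answers from the eight groups to a single question.
responses = ["A", "SA", "A", "A", "SA", "DA", "IANS", "A"]
print(Counter(responses))         # tally of each answer category
print(question_score(responses))  # 68.75
```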
Taking Table 7 into account, the following can be stated:
  • All considered criteria and parameters for evaluating the validation and appropriateness of OGM-MCL obtained an average percentage above 50%, which is a reasonable indicator of general acceptance;
  • Among the eleven questions addressed in this survey questionnaire, only three questions, related to the criteria “perceived ease of use” and “cost-effectiveness”, had a percentage between 56% and 63%. The other eight questions had a percentage of ≥68.75%. The average percentage given to all criteria and parameters is 72%, which is a convincing indicator of model compliance.
  • The given answers/feedback show that there is no “strong disagreement” with the addressed points (criteria and parameters). In fact, there are only seven “disagreements” in total, which is not high.
  • Taken together, there were 39 “agreements” and 28 “strong agreements”, which is a considerable positive attitude towards the addressed criteria and parameters and also the validation and appropriateness of the OGM-MCL.
  • Taken together, there are only 10 “I am not sure” answers and 4 “I don’t know” answers. Indeed, this rate is not high at all.
Since the OGM-MCL is evaluated only theoretically and conceptually at this stage of the project, some degree of disagreement, ambiguity, and uncertainty is understandable.
It should be noted that the OGM-MCL can be applied to a specific MCL case either fully or partially. This decision should be made by the decision makers in the target case. In the ED-EN HUB project, all parts addressed in the model are potentially considered to be applied to the EDENCP.

6. Discussion and Conclusions

This section first provides a brief comparison of the proposed approach with previous studies; then, it provides an interpretation according to the main findings and contributions of this work. Afterward, it summarizes the hypothesis and purpose of the study, and at the end, it provides some conclusions by addressing the main limitations of the study as well as pointing out directions for future studies.
Collaborative learning initiatives are evolving and growing faster than ever. MCL, for example, has opened immense opportunities for ordinary people in societies to engage locally or globally in vocational and informal learning practices. This approach of collective learning, which is addressed by people of different areas, is viewed and presented from different perspectives and for different purposes by the researchers of each discipline. Even so, this specific complementary approach to learning has not yet been revealed in an integrated perspective to all people worldwide [3]. Nevertheless, in the last decade, numerous studies have been published on MCL environments, and a number of these studies focus mostly on specific models (e.g., network models), techniques (e.g., network analysis techniques), and approaches (e.g., network-based approaches) to support the supervision of online learning communities (e.g., MCL) [63]. On the other hand, to cope with the current information explosion and information overload (a common and challenging issue in the MCL environment), the authors of [12] took advantage of the application of natural language processing (NLP). Another study [64] proposed first-order probabilistic reasoning techniques (that benefit from machine-learning techniques) to estimate the quality of knowledge (e.g., consistency, relevance, and scalability) created and shared through mass collaborative efforts.
Despite the advances made over the last years in this emerging and promising method of learning, there are still some ambiguities and uncertainties in the way of creating, organizing, developing, and governing MCL initiatives. This study is in line with the previous studies (addressed above to a limited extent) since it proposes the OGM-MCL framework and attempts to address a number of important organizational and governance elements that can be considered by decision makers, designers, and developers when involved in the creation, operation, and development of MCL initiatives.
In our previous study [65], the different parts and components of OGM-MCL were introduced and elaborated on in detail. In the current study, aside from the representation of the proposed model and the applied methodology, we report the results gained from the practical evaluation, validation, and application of the OGM-MCL. To demonstrate and evaluate the validity and applicability of OGM-MCL (taking into account the DSRP), the model has been used in the implementation of the EDENCP environment in the ED-EN HUB project. The contribution of OGM-MCL to this European research project provided a set of reliable assessments, reasonable validations, and successful applications. The ED-EN HUB project has applied OGM-MCL to support the creation, operation, management, and implementation of its digital collaboration platform [65], although OGM-MCL was first customized collaboratively by the partners and stakeholders based on the objectives, requirements, and conditions of the project.
By employing the governance process, different aspects of OGM-MCL have been assessed by (a) taking several phases and steps of evaluation, (b) using the developed questionnaires, (c) continuously interacting and discussing with the various partners and stakeholders of the project (the “focus group”), and (d) collaborating with several expert evaluators. The OGM-MCL was globally accepted by the partners and stakeholders of the project, suggesting that the validated OGM-MCL can potentially be applied to other related projects and cases. Additionally, the close interaction with the experts, researchers, and stakeholders in the project was a very valuable opportunity for the authors, as their useful feedback and constructive suggestions provided direction for improving different parts of the OGM-MCL.
Through the validation and application of OGM-MCL and its related solutions in the project, the hypothesis is validated to a reasonable extent. Thus, it can be concluded that community learning could be effectively supported through MC when three work streams are properly rooted in the foundation of a community: namely, by (I) identifying the positive and negative factors in existing and emerging successful examples of MC; (II) adopting contributions from collaborative networks in terms of structural and behavioral models; and (III) establishing adequate learning assessment indicators and metrics.
Despite the positive and promising features of MCL and the opportunities it can provide for societies, communities, and learners, MCL (as a type of social network) also faces a number of challenges and limitations. For example, in general, MCL must deal with the challenge of determining and controlling the quality and reliability of the content (e.g., knowledge and information) shared within the community. In fact, mass collaboration is regarded as a double-edged sword when it comes to learning. On the one hand, it is relatively low-cost and accessible, and it can potentially facilitate knowledge sharing and increase public awareness. On the other hand, the (large) size and (online) environment of the community can potentially place it at risk of encountering, spreading, and being damaged by unreliable content. In addition, the anonymity of community participants can intensify the problem. Since MCL is typically supported by a public platform, any participant can post any content with various degrees of veracity. Spreading unhealthy content throughout the community can negatively influence its members (e.g., by misinforming or misleading them). Thus, it is left to community members to recognize whether the content is true or not. Unfortunately, this is a dark side of MCL. In particular, the ED-EN HUB project faced some limitations in the evaluation of OGM-MCL:
  • We identified that some concepts (e.g., MC, MCL, OGM-MCL, and the governance process) are often vague and confusing for the partners and stakeholders of the project.
  • Theoretically and conceptually evaluating different aspects of the OGM-MCL is an arduous and daunting task.
Considering the pioneering nature of this research work, it becomes clear that many doors are open for future research. MCL, as an interdisciplinary approach, is a promising and demanding subject with multiple potential areas of application. Although several aspects in the MCL context (e.g., organizational and governance elements) have already been identified, they need to be improved, applied to different contexts, and consolidated by further validation. Furthermore, in the process of evaluation, further validation, and application of OGM-MCL in other case studies, using different approaches, techniques, and tools might help address its weaknesses. In future work, the remaining phase and step of the governance process will be refined. Furthermore, in the project, we will proceed with a deep data analysis, inspecting and evaluating the implemented platform by using feedback data from users. As an example, the search engine function may acquire feedback from users in relation to the obtained results, e.g., whether they were effectively found. The results of the data analysis and the improvements made will then be published in future studies. This activity will bring us one step closer to the full validation of the OGM-MCL. In addition, the validity and applicability of the OGM-MCL will be evaluated in other projects, case studies, and/or illustrations.

Author Contributions

Conceptualization, M.Z., J.S. and L.M.C.-M.; methodology, M.Z., J.S. and L.M.C.-M.; validation, M.Z., J.S., L.M.C.-M. and R.J.-G.; formal analysis, M.Z., J.S. and L.M.C.-M.; investigation, M.Z., J.S., L.M.C.-M. and R.J.-G.; resources, J.S. and R.J.-G.; data curation, M.Z., J.S., L.M.C.-M. and R.J.-G.; writing—original draft preparation, M.Z., J.S., L.M.C.-M. and R.J.-G.; writing—review and editing, M.Z., J.S. and L.M.C.-M.; visualization, M.Z., J.S., L.M.C.-M. and R.J.-G.; supervision, J.S., L.M.C.-M. and R.J.-G.; project administration, J.S. and R.J.-G.; funding acquisition, J.S., L.M.C.-M. and R.J.-G. All authors have read and agreed to the published version of the manuscript.


Funding

Fundação para a Ciência e Tecnologia (project UIDB/00066/2020) and European Commission ERASMUS+ through grant no. 2020-1-FR01-KA202-080231 ED-EN HUB.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are presented in the main text.


Acknowledgments

This study was supported by the Center of Technology and Systems (CTS-UNINOVA).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Part of the questionnaire used to evaluate the adequacy of OGM-MCL in its application on EDENCP.
The Questionnaire Used for Assessing the Adequacy of OGM-MCL on EDENCP
Each statement (a feature that might be integrated into EDENCP) is rated on the scale SDA (strongly disagree), DA (disagree), A (agree), SA (strongly agree), with the additional options "I don't know" and "I'm not sure".

Organizational dimension
  • It is important that the EDENCP be a user-driven service (users may be co-creators of the service).
  • It is important that the EDENCP engages diverse groups (e.g., the general public, experts, and professionals) in the process of learning.

Environmental dimension
  • It is important that the EDENCP be open for all people to contribute.
  • It is important that the EDENCP provides three levels of access (for three groups of users: partners, administrators, and general users).

Admission dimension
Inclusion
  • It is important that the EDENCP facilitates the process of joining (inclusion) the groups, hubs, and communities.
  • It is important that the EDENCP provides free access for all users.
Accessibility and Proximity
  • To promote the quality of contributions and develop transparency, it is important that the EDENCP reduces anonymity.
  • It is important that the username be associated with the user's contributions (to facilitate the monitoring of contributions).

Social dimension
Collaboration
  • It is important that the EDENCP builds a network for career development.
  • It is important that the EDENCP provides a "discussion forum" for collaboration.

Functional dimension
Content Management
  • It is important that the users can support the process of creating, sharing, and developing the content.
  • It is important that the EDENCP continually develops and updates the content.
Operation Management
  • It is important that the EDENCP saves users' personal information and contributions to their profiles.
  • It is important that the EDENCP provides a "monitoring system" to constantly monitor the transactions.
Interaction Management
  • It is important that the EDENCP provides an appropriate service for internal interactions, such as sharing resources, training, and learning materials.
  • It is important that the EDENCP provides an appropriate service for external interactions, such as exchanging expertise and findings.
Human Resource Management
  • The users should be treated equally.
  • It is important that the EDENCP provides an advisory board.

Economical dimension
Supports and Services
How important do you think the following two services could be for the economic sustainability of the platform:
  • Benefiting from private and public funding, grants, financial aid and donations, capital from investors and sponsors, and advertising.
  • Providing supportive training and learning services for schools, organizations, institutions, businesses, and companies.

Technological dimension
  • It is important that the EDENCP provides web-based communication.
  • It is important that the EDENCP provides a search engine that helps participants find particular information and services provided in the hub.

Structural dimension
Participants
  • It is important that users of any age, background, culture, and gender can contribute to EDENCP.
  • Users will not be paid; they will contribute on a volunteer basis.
Roles and Tasks
  • It is important that the users can play different roles (e.g., expert, advisor, trainer, trainee, editorial, researcher, technical, managerial) based on their qualifications.
  • It is important that the users can engage in multiple tasks (e.g., training execution, providing learning content, delivering the contents, exchanging the contents, executing, providing support, commenting, reporting).

Behavioral dimension
  • The partners and administrators have the authority to bring about structural changes in the EDENCP.
  • The general users can contribute to decision-making processes.

Appendix B

Table A2. Part of the questionnaire used to evaluate the feasibility of OGM-MCL in its application on EDENCP.
The Questionnaire Used for Assessing the Feasibility of OGM-MCL on EDENCP
Each item (a factor or feature that might be implemented on EDENCP) is answered YES or NO.

F1
  • It should be open to all and be used for multiple purposes (e.g., searching, sorting info).
  • It should have different access levels and provide a comprehensive list of available info.
F2
  • It should be able to engage different members in multiple communication tasks.
  • It should be able to create collaboration spaces around different domains/topics.
F3
  • It should be able to use different characteristics to manage the training planning.
  • It should be able to consider the use of different training modes in the training planning.
F4
  • It should be able to define clear inclusive rules for admission.
  • It should be able to provide a learning management system (e.g., Moodle).
F5
  • It should be able to create diverse group profiles for contribution.
  • It should be able to open the processes of curriculum design to all.
F6
  • It should open the insertion of new competence demands to all.
  • It should be able to suggest concepts for competence demand writing.
F7
  • It should be able to authorize users to take different roles in the evaluation process.
  • It should provide easy and free access to verification of the evaluation results.

Appendix C

Table A3. The questionnaire used to evaluate the effectiveness of OGM-MCL in its application on EDENCP.
Questionnaire for Evaluating the Effectiveness of the Considered Dimensions of Collaboration and Their Related Factors, Features, and Elements That Have the Potential to Be Implemented on EDENCP
Each item is rated on a 0–4 scale.

1. Questions for Search Engines for Finding Information and Tools
1. It is effective to be used for multiple purposes (e.g., searching, sorting info)
2. It is effective to be flexible to change (e.g., open to adding new info or keywords for searching)
3. It is effective to have cross-functional capabilities (e.g., searching multiple features)
4. It is effective to be customizable (e.g., contains searching features that adapt to users’ profiles)
5. It is effective to use advanced functions (e.g., search in other external hubs)
2. Questions for Collaboration
1. It is effective to engage a diverse group of people in multiple communication tasks (e.g., chat forum)
2. It is effective to create collaboration spaces around different domains/topics
3. It is effective to promote the available skills or build a set of needed competences
4. It is effective to invite users to voluntarily contribute to collaborative activities based on their interests
3. Questions for Managing Training
1. It is effective to use different characteristics to manage the training planning (e.g., cost, success rate, learning needs)
2. It is effective to use different training modes (blended learning) in the training planning
3. It is effective to consider inclusive training and learning
4. It is effective to give freedom to choose the courses (help to prioritize learning needs based on the user’s profile)
5. It is effective to create a training plan, such as collectively evaluating the contents (have a procedure to perform)
4. Questions for Training Execution Support
1. It is effective to analyze what kind of approach or balance can best be used in training execution for different training modes (virtual vs. traditional classes)
2. It is effective to define clear inclusive rules for admission
3. It is effective to use analytics over collected performance data to improve further training execution and its learning engagement strategies (i.e., knowledge management)
5. Questions for Designing the Curriculum
1. It is effective to create diverse group profiles for contribution
2. It is effective to design a curriculum based on the demands of companies
3. It is effective to share training elements (e.g., learning strategies, processes, materials, and experiences) for collaboratively designing the curriculum
4. It is effective to facilitate the process of continuous curriculum adaptation
6. Questions for the Insertion of New Competence Demands
1. It is effective to open the insertion of new competence demands to all
2. It is effective to create an easy invitation procedure for the contribution of new companies
3. It is effective to encourage voluntary and active participation in finding new competence demands
4. It is effective to facilitate open discussion about the new competence demands
7. Questions for Tools to Evaluate Performance
1. It is effective to create different levels of evaluation
2. It is effective to authorize users to take different roles in the evaluation process
3. It is effective to have easy and free access to verification of the evaluation results
4. It is effective to create different roles for the evaluation procedure (e.g., evaluators, users, programmers)


Figure 1. DSRP steps.
Figure 3. Governance process to demonstrate the OGM-MCL.
Figure 4. Demonstration of OGM-MCL in the EDENCP context.
Figure 5. Average popularity of 9 dimensions obtained from adequacy evaluation.
Figure 6. Results of adequacy evaluation and considered threshold that were used for feasibility evaluation.
Figure 7. The EDENCP environment customized for different types of users and services.
Table 1. Average of popularity given to the dimensions in adequacy evaluation.
Considered Dimensions | Number of Questions per Dimension | Average of Popularity Gained from Adequacy Evaluation
Table 2. The results of evaluating the adequacy of EDENCP functions.
ED-EN HUB Main Processes
EDENCP Functions | EDENCP Objectives: Determine Skills Requirements | Co-Design, Develop, and Training | Detect, Assess, and Certify | Career Guidance | Organizational Benchmarking
F1. Search engine xxxxx
F2. Collaborationxxxxx
F3. Managing training x
F4. Training execution support x
F5. Design Curriculumxx
F6. Insertion of new competencies demandsxx x
F7. Tools to evaluate performance (benchmarking) xxx
F8. Database/service offering different tools xxxxx
Table 3. Results of adequacy evaluation and considered threshold that were used for feasibility evaluation.
Results of Adequacy
Number of Questions per Dimension | Number of Questions per Dimension Whose Threshold Is ≥80% | The Base for Consideration in Feasibility Evaluation
Table 4. Results of the second step of feasibility evaluation.
Functions | Average of Popularity of Functions | Popularity per Dimension (nine dimensions)
F1. Search engine | 55.55% | 83.33% | 55.56% | 66.67% | 50.00% | 66.67% | 66.67% | 33.33% | 16.67% | 61.11%
F2. Collaboration | 53.08% | 72.22% | 83.33% | 66.67% | 11.11% | 72.22% | 44.44% | 44.44% | 33.33% | 50.00%
F3. Managing training | 50.61% | 72.22% | 94.44% | 22.22% | 66.67% | 61.11% | 38.89% | 72.22% | 11.11% | 16.67%
F4. Training execution support | 52.46% | 50.00% | 83.33% | 50.00% | 66.67% | 50.00% | 61.11% | 55.56% | 11.11% | 44.44%
F5. Design curriculum | 53.70% | 72.22% | 38.89% | 83.33% | 55.56% | 38.89% | 100.00% | 61.11% | 5.56% | 27.78%
F6. Competences demands | 52.46% | 50.00% | 61.11% | 50.00% | 83.33% | 61.11% | 61.11% | 38.89% | 22.22% | 44.44%
F7. Evaluate performance | 51.85% | 55.56% | 61.11% | 61.11% | 66.67% | 61.11% | 44.44% | 55.56% | 16.67% | 44.44%
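As a consistency check on the feasibility results in Table 4, the "Average of Popularity of Functions" column appears to be the arithmetic mean of the nine per-dimension percentages in the same row; this is an observation from the numbers, not something the paper states explicitly. A minimal sketch for the F1 row:

```python
# Illustrative consistency check for the F1 ("Search engine") row of Table 4:
# the mean of the nine per-dimension popularity values should match the
# reported 55.55% average up to rounding.
f1_dimensions = [83.33, 55.56, 66.67, 50.00, 66.67, 66.67, 33.33, 16.67, 61.11]
average = sum(f1_dimensions) / len(f1_dimensions)
print(round(average, 2))  # close to the reported 55.55%, up to rounding
```

The same relation holds for the other rows (e.g., the F2 values average to 53.08%).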
Table 5. Results of effectiveness evaluation.
F1. Search engine95% 90% 95%80% 75%
F2. Collaboration90%95%75% 90%
F3. Managing training95%95% 80%75% 90%
F4. Training execution support 90% 80% 80%
F5. Design curriculum80% 95% 95%75%
F6. Competences demands 80% 90%80%75%
F7. Evaluate performance 75%80%90%80%
Sum90%87%85%85%84%82.5%82.5% 75%
Table 6. Questionnaire used for evaluating the validation and appropriateness of OGM-MCL.
Criteria and Parameters | Questions (answered on the scale SDA, DA, A, SA, IDK, IANS)
Completeness and purposefulness
  • The OGM-MCL encompasses the necessary parts for the proper evaluation of the identified items that might be used in the creation, development, and implementation of EDENCP.
  • The OGM-MCL comprises the necessary steps for the proper evaluation of the considered functions of EDENCP.
  • The OGM-MCL can provide satisfactory results.
  • The OGM-MCL can create the expected value.
Perceived usefulness
  • The OGM-MCL is useful for evaluating the identified items that might be used in the creation, development, and implementation of EDENCP.
  • The OGM-MCL is useful for evaluating the considered functions of EDENCP.
Perceived ease of use
  • The OGM-MCL is clear and easy to understand.
  • The OGM-MCL is clear and easy to follow.
  • The OGM-MCL helps us to save resources (e.g., time, effort, and costs) in identifying the required features and capabilities for EDENCP.
  • The OGM-MCL can meet the expectations in identifying the required features that might be used in the creation, development, and implementation of EDENCP.
  • The OGM-MCL has a reasonable chance of success in the evaluation of the considered functions of EDENCP.
Table 7. Results of evaluating the validation and appropriateness of OGM-MCL for EDENCP.
Criteria and Parameters | Feedback Number | Questions | Weighted Average | Percentage | SDA | DA | A | SA | IANS | IDK
Completeness and purposefulness | 8 | Q1 | 3 | 75% | 0 | 0 | 3 | 3 | 1 | 1
Perceived usefulness | 8 | Q5 | 3.28 | 82% | 0 | 0 | 5 | 3 | 0 | 0
Perceived ease of use | 8 | Q7 | 2.38 | 59.5% | 0 | 1 | 3 | 2 | 2 | 0
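The percentages in Table 7 are consistent with a four-point scale in which the percentage equals the weighted average divided by 4 (e.g., 3 gives 75% and 2.38 gives 59.5%). The following is a minimal sketch of one plausible scoring scheme, assuming SDA = 1 through SA = 4 and excluding IANS/IDK answers; the paper does not specify how those non-substantive answers enter the weighted average, so the exact reported values (e.g., 3.28 for perceived usefulness) are not necessarily reproduced:

```python
# Hypothetical Likert scoring sketch. Assumptions (not stated in the paper):
# a 4-point scale SDA=1, DA=2, A=3, SA=4, with IANS/IDK answers excluded.
SCALE = {"SDA": 1, "DA": 2, "A": 3, "SA": 4}

def weighted_average(counts):
    """counts maps an answer label to the number of respondents choosing it."""
    total = sum(counts.get(label, 0) for label in SCALE)
    score = sum(value * counts.get(label, 0) for label, value in SCALE.items())
    return score / total if total else 0.0

def percentage(avg):
    # Matches Table 7's mapping: a weighted average of 3 gives 75%,
    # and 2.38 gives 59.5%.
    return avg / max(SCALE.values()) * 100

# Perceived-usefulness answer counts from Table 7: 0 SDA, 0 DA, 5 A, 3 SA.
avg = weighted_average({"SDA": 0, "DA": 0, "A": 5, "SA": 3})
print(round(avg, 2))  # 3.38 under these assumptions, vs. the reported 3.28,
# which suggests the paper aggregates the IANS/IDK answers differently
```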
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

