Article

Requirement Analysis for a Qualifications-Based Learning Model Platform Using Quantitative and Qualitative Methods

Faculty of Mathematics and Computer Science, University of Hagen, Universitätsstraße 47, 58097 Hagen, Germany
*
Authors to whom correspondence should be addressed.
Information 2025, 16(7), 594; https://doi.org/10.3390/info16070594
Submission received: 13 May 2025 / Revised: 4 July 2025 / Accepted: 8 July 2025 / Published: 10 July 2025

Abstract

Continuous learning is fundamental to professional and personal growth. Robust digital solutions are therefore required to meet adaptable and scalable educational demands. The Qualifications-Based Learning Model (QBLM) is a framework for qualifications-based learning, defining a software architecture and data models. However, the existing implementations of QBLM lack horizontal scalability and flexibility, resulting in tightly coupled components that limit the model’s adaptability to the evolving needs of learners and institutions. Therefore, a new Qualifications-Based Learning Platform (QBLM Platform) is planned, which extends the QBLM approach with a modular software architecture that enables flexible service integration, scalability, and operational resilience. To design such a QBLM Platform, a requirements analysis is necessary. By employing both quantitative and qualitative research methods, namely a survey and expert interviews, the requirements for a QBLM Platform are identified. The results of the research are used to define the essential features and characteristics not only of the QBLM Platform but also of learning platforms in general.

1. Introduction, Motivation, and Approach

The need to ensure the transparency and comparability of learning outcomes across the European Union (EU) was emphasized during the Bologna Process [1]. The process aims to harmonize study programs of higher educational institutes (HEIs) [1]. A further development is the European Qualifications Framework (EQF), which aims to enhance the transparency, comparability, and portability of qualifications across Europe [2]. It introduces eight reference levels based on learning outcomes, which describe what a learner knows, understands, and is able to do, rather than how or where these skills were acquired [2].
The relevance of such approaches extends beyond HEIs. Enterprises have to incorporate these frameworks to structure internal training programs and align workforce competencies with industry standards [3]. Lifelong learning has become a strategic priority in light of digital transformation and labor market shifts [4].
Outcome-oriented learning approaches define explicit, measurable expectations for learners, focusing on demonstrated achievements, rather than the completion of scheduled course hours. Learning outcomes specify what a learner is expected to know, understand, and be able to do after a learning activity. These outcomes are linked to competencies, which represent the ability to apply knowledge and skills. These developments underline the need for learning systems that are structured and outcome-oriented while also being inclusive, demand-driven, and supported via sustainable governance [5]. However, despite the increasing importance of outcome-oriented learning, there is a lack of empirical research that systematically identifies the technical and user-centric requirements for implementing corresponding software solutions.
In consequence, solutions have to be introduced that systematically model, deliver, and assess competencies within digital learning environments while being user-centric and capable of addressing heterogeneous learner demands through adaptable and personalized approaches. To structure the learning process, the competence-based learning (CBL) approach has been introduced [6]. CBL uses competencies to describe learning outcomes and factual knowledge [6]. Learners achieve these competencies by executing learning activities [6]. These learning activities are implemented in learning objects (LOs) [7] such as quizzes, educational games, web pages, and others. Therefore, upon the completion of one or more LOs, the corresponding competencies are linked to the learner, providing a structured representation of their acquired skills and knowledge.
The competence-based learning model proposed by [8] introduced an advanced framework to support CBL. This technical implementation enabled the use of competencies by linking them to learners and learning activities, including the capability for automated assignment. Subsequently, the underlying concept of CBL was further developed into qualifications-based learning (QBL), a term used throughout this text [9].
As described by Wallenborn [10], QBL uses competency qualifications (CQs) instead of competencies. CQs represent the evaluated and validated attainment of skills in professional or interdisciplinary areas, such as methodological, social, and personal skills. CQs thus enrich the concept of competencies and, by extension, CBL. As in CBL, CQs are achieved by completing LOs. CQs are also used as preconditions for modules, courses, or activities. In consequence, only learners holding the required CQs are able to perform a specific LO.
To enable QBL, the Qualifications-Based Learning Model (QBLM) [11] has been introduced. It defines how learning-related data is structured and exchanged between software components, alongside a service-oriented architecture [11]. The service-oriented architecture described in [11] defines services that share common resources—such as the runtime environment and database—to provide functionalities to end users. A prototype of QBLM has been implemented in the Knowledge Management Ecosystem Portal (KM-EP) [12], which offers services that deliver essential functionalities [11]. These include managing and authoring courses, overseeing learner records and LO-related CQ sets, also called competence profiles [11], and maintaining CQ master data, i.e., competence frameworks [11]. Software implementations within QBLM, i.e., QBLM software, include, among others, the Course Authoring Tool (CAT), the Competence Profile Manager (CPM), and the Competence Manager (CM) [10].
However, the prototypical implementation of QBLM within the KM-EP still faces several challenges. The prototypical implementations of QBLM and QBLM software were developed specifically for the KM-EP [10]. Other applications within the KM-EP also provide functionalities unrelated to QBL [10]. Since the KM-EP operates as a single server instance, any configuration change or update to hosted applications could affect other applications or the overall functionality [10]. Therefore, it is essential to decouple the QBLM software from its environment to allow QBLM to function independently in various system infrastructures. Additionally, QBLM software should be interchangeable, supporting interoperable alternatives, e.g., a different implementation of the CM with similar functionality. Another limitation of the KM-EP is scalability. The single-server setup restricts the ability to expand resources or provide redundancy, both of which are critical for high availability.
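The interchangeability demand can be sketched in code. The following is a minimal illustration, not part of QBLM itself: the interface name `CompetenceManager`, its methods, and the in-memory implementation are all hypothetical, chosen only to show how a shared interface would let one CM implementation replace another.

```python
from abc import ABC, abstractmethod


class CompetenceManager(ABC):
    """Hypothetical interface for interchangeable CM implementations."""

    @abstractmethod
    def get_competence_qualifications(self, learner_id: str) -> list[str]:
        """Return the CQ identifiers currently held by a learner."""

    @abstractmethod
    def assign_competence_qualification(self, learner_id: str, cq_id: str) -> None:
        """Link a CQ to a learner, e.g., after completing a learning object."""


class InMemoryCompetenceManager(CompetenceManager):
    """Simple reference implementation backed by a dictionary."""

    def __init__(self) -> None:
        self._store: dict[str, set[str]] = {}

    def get_competence_qualifications(self, learner_id: str) -> list[str]:
        return sorted(self._store.get(learner_id, set()))

    def assign_competence_qualification(self, learner_id: str, cq_id: str) -> None:
        self._store.setdefault(learner_id, set()).add(cq_id)


cm: CompetenceManager = InMemoryCompetenceManager()
cm.assign_competence_qualification("learner-1", "cq-python-basics")
print(cm.get_competence_qualifications("learner-1"))  # ['cq-python-basics']
```

Any component that depends only on the interface would keep working if the in-memory variant were swapped for, e.g., a database-backed one.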
In consequence, a revised approach is required to make QBLM and the related QBLM software adaptable to any system environment. Such an approach would allow the usage of QBL across diverse organizational contexts, including HEIs and enterprise training environments. A software platform enables this by encapsulating various software components, such as applications, databases, and networks, while abstracting the underlying IT infrastructure [13]. Therefore, any digital ecosystem can be enabled to perform QBL, if the proposed software platform is deployed. As the software platform has to implement the QBLM, it is referred to as the QBLM Platform. The QBLM Platform must provide both QBLM software and necessary resources, such as databases and network gateways, to support these QBL-related functions. To enable the use of QBLM software within the QBLM Platform, a conceptual software architecture is required to design it. However, the requirements for the QBLM Platform and its conceptual software architecture are unknown. To identify the requirements, a structured software-requirement engineering process has to be carried out. Otherwise, inadequate requirements might cause:
cost overruns, expensive rework, poor quality, late delivery, dissatisfied customers, and exhausted and demoralized team members [14]
According to Gottesdiener [14], requirements engineering consists of four main phases: “Elicit”, “Analyze”, “Specify”, and “Validate”. The phases can be performed multiple times to further improve the product. Within the “Elicit” phase, the requirement source list is created. To this end, the stakeholders are identified and categorized. In the “Analyze” phase, the scope of the product is defined, and user requirements are identified. The “Specify” phase is used to write down the requirements document. Finally, within the “Validate” phase, the requirements are verified.
As an initial step, this paper seeks to perform a first iteration of the “Elicit” and “Analyze” phases. In consequence, the paper targets three problems. Firstly, the stakeholders for a potential QBLM Platform implementation are not identified, leading to potential gaps in requirements. Secondly, a categorization of stakeholders into user stereotypes is missing, hindering the ability to design user-centered functionalities and interactions for a QBLM Platform. Lastly, the software requirements for a QBLM Platform usable in both HEI and enterprise contexts have not been identified based on user needs, increasing the risk of flawed assumptions in its implementation.
The above-mentioned problems result in the following research questions (RQs):
  • RQ1: Who are the stakeholders of a potential QBLM Platform implementation for HEIs and enterprises?
  • RQ2: What are the user stereotypes for a QBLM Platform implementation for HEIs and enterprises?
  • RQ3: What are the software requirements for a QBLM Platform that meets user needs in both HEI and enterprise contexts?
The research methodology of this paper follows Creswell’s mixed-methods framework [15], applying a convergent parallel design. This approach combines qualitative and quantitative methods to systematically address the RQs and derive software requirements for the QBLM Platform. In consequence, focused interviews with experts are conducted while simultaneously gathering the opinions of a broader audience. The results of both methods are analyzed independently before being merged for interpretation. The paper follows the steps required by the described methodology. After the introduction, motivation, and explanation of the approach, Section 2, “State of the Art”, focuses on the theoretical base of the research. Section 3, “Research Design”, describes the conceptualization and planning of the research. In the fourth section, “Research Execution and Results”, the data is collected and the results are compiled. In Section 5, “Evaluation and Discussion”, the data is analyzed and interpreted. Lastly, a conclusion and proposed next steps follow in Section 6, “Conclusion and Future Work”.
The study seeks to elaborate on the stakeholders, user stereotypes, and software requirements for a QBLM Platform to enable its planning and software design. While earlier works have proposed QBLM implementations within limited technical scopes (e.g., single-server setups like the KM-EP or extensions like QBL4Moodle), this study presents a requirements analysis based on a convergent mixed-methods design, covering both HEI and enterprise contexts. Additionally, the study examines broader aspects of software platforms for learning, making the findings partially applicable to non-QBLM implementations as well. Therefore, the results contribute not only to the specific development of the QBLM Platform but also to the broader field of educational technology and software platforms for learning. By integrating qualitative and quantitative methods, the study ensures that the derived requirements are based on stakeholder needs and expectations. This dual focus supports the development of learning platforms that are not merely grounded in a single case study but are well founded through a comprehensive requirements analysis.

2. State of the Art

The second section provides the theoretical basis for the study. Prototypical implementations of systems supporting QBL are presented as a foundation for the later identification of stakeholders and user stereotypes. Additionally, common software platforms for learning are highlighted to derive further potential stakeholders. Lastly, the chosen mixed-methods framework and the research approach are examined.

2.1. Observation on Systems Supporting QBL

In the work of Wallenborn [10], QBL is integrated into the KM-EP. In doing so, he enabled the KM-EP to act as the central platform for QBL by providing tools like the CM, CPM, and CAT. Additionally, the integration of the e-Competence Framework, a standardized, sector-specific implementation of a competence set, is introduced. In consequence, the stakeholders named are authors, institutions (i.e., university or enterprise staff), and learners.
Subsequently, ref. [11] enhanced the implementation of the KM-EP by extending the learning management system (LMS) Moodle to use CQs. The implementation was realized as a plugin called QBL4Moodle. Furthermore, the integration of the Learning Tools Interoperability (LTI) protocol was introduced. This makes it possible to execute learning paths both in the LMS and via external tools with QBL integration. In consequence, users and administrators of the LMS, software developers, tutors, and course designers are additional stakeholders.
Lastly, Srbecky [16] presented a prototype implementation of an educational game and demonstrated its integration into the QBLM. In the educational game—called PAGEL (a German abbreviation for “Psychologische Arbeitsgestaltung erleben”, or, in English, “Experiencing psychological work design”)—students experience a simulation of work tasks. The PAGEL simulation is hosted within the KM-EP and uses the QBLM features. Additionally, it makes use of two further software components. Firstly, the Didactical Structural Template Manager (DSTM) enables the author to define learning paths in a more abstract way before exporting them into the CAT. Secondly, the Learning Record Store (LRS) allows PAGEL to send all user interactions to a store for later analysis. Both extend the concept of QBLM but are not essential parts of its core. These two components offer additional functionalities that allow PAGEL to implement specific concepts required for further in-depth analysis. In addition, CQs are utilized to track adaptive play, enhancing the learning experience. Given the integration of these additional components, the primary stakeholders of PAGEL are educators, students, technology developers, and analysts who evaluate the collected data.
In summary, the KM-EP contains multiple software prototypes depending on various underlying libraries and frameworks. Any change, such as a software update, could have an impact on other software prototypes, as they may depend on the same library or framework. Furthermore, the software prototypes depend on each other’s databases and logic, resulting in tight coupling. Additionally, the software prototypes are hosted on the same virtual machine, which makes independent scaling and distribution impossible.

2.2. Observation on Learning Platforms Without QBL Support

After the observation of systems implementing QBL, common software platforms for learning are observed to derive further potential stakeholders. The following section examines, in an exemplary manner, the learning platforms Udemy, Udemy for Business, LinkedIn Learning, and Trailhead. Udemy, Udemy for Business, and LinkedIn Learning are selected based on their popularity and good reviews [17]. Trailhead is a specialized learning platform for Salesforce, with a focus on gamification [18]. Consequently, only a subset of platforms is examined here to give an overview of features.
Udemy [19] is a learning platform that provides a wide range of courses across various disciplines. It offers professionals the ability to enhance their skills and individuals opportunities for personal development. Courses are created by independent instructors, which results in variability in content quality. Users can purchase courses individually, while Udemy for Business offers enterprises a subscription-based model for employee training. Additional features include course ratings, personalized recommendations, progress tracking, and certificates of completion, which serve as participation confirmations [20].
LinkedIn Learning [21] targets professionals aiming to develop job-relevant skills. Courses are curated by industry experts, ensuring high-quality content. The platform operates on a subscription basis, granting users unlimited access to its course library. Features include course search, progress tracking, personalized learning paths, and integration with the LinkedIn network. Certificates earned can be displayed on user profiles, increasing professional visibility.
Trailhead [22] is a specialized platform designed for Salesforce professionals. It employs gamification elements, such as badges and super-badges, to incentivize learning. The platform offers structured learning paths that prepare users for Salesforce certifications. Community interaction, discussion forums, and integration with Salesforce’s ecosystem further enhance its usability. Unlike Udemy and LinkedIn Learning, Trailhead is free to use, with costs only incurred for official certification exams.
The three platforms share several functionalities, including progress tracking, certificate issuance, and mobile accessibility. Udemy and LinkedIn Learning allow personalized content recommendations, while Trailhead focuses on gamification and community-driven learning. Udemy for Business and LinkedIn Learning support corporate training through administrative tools and HR system integrations. Table 1 visualizes the comparison of the mentioned learning platforms based on the previously presented features and literature sources.
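The shared functionalities can be illustrated with a small feature matrix. The labels below are informal shorthand derived from the prose, not the exact wording of Table 1:

```python
# Illustrative feature matrix; labels are shorthand, not Table 1's exact terms.
features = {
    "Udemy": {"progress_tracking", "certificates", "mobile", "recommendations"},
    "LinkedIn Learning": {"progress_tracking", "certificates", "mobile",
                          "recommendations", "corporate_tools"},
    "Trailhead": {"progress_tracking", "certificates", "mobile",
                  "gamification", "community"},
}

# Functionalities offered by all three platforms
common = set.intersection(*features.values())
print(sorted(common))  # ['certificates', 'mobile', 'progress_tracking']
```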
In consequence, learners, content creators, corporate clients, and platform administrators are identified as potential stakeholders of the learning platforms. The definition of learners now also includes professionals seeking career advancement and hobbyists. Content creators develop courses to provide educational value. Corporate clients, such as enterprises and HR departments, utilize the learning platforms for employee training and workforce development. Finally, platform administrators ensure technical functionality, user experience, and compliance with industry standards.
This completes the initial observation of learning platforms with and without QBL support. Consequently, the potential stakeholders for both QBL-based and general learning platforms include learners, content creators, institutions, corporate clients, platform administrators, and regulatory bodies. Learners encompass students, professionals, and hobbyists seeking skill development. Content creators range from independent instructors to academic course designers or authors. Institutions, including universities and enterprises, utilize these platforms for structured learning and workforce training. Corporate clients, such as HR departments, use the learning platforms for their employee development programs. Platform administrators, including LMS managers and software developers, ensure technical functionality, user experience, and compliance. Lastly, regulatory bodies ensure reliability in digital learning.

2.3. Observation of a Research Approach to Evaluate Requirements

The results are utilized to design the study from which the requirements are derived. The design of the study requires an appropriate structure. Therefore, the commonly used approaches described in [15] are examined. Among them, the Convergent Mixed Methods Design was selected. This choice enables the use of data from both quantitative and qualitative methods, ensuring a comprehensive understanding of the requirements. By merging numerical trends with deeper contextual insights, this design enhances the validity and applicability of the findings, supporting a more holistic development of the QBLM Platform. In the Convergent Mixed Methods Design, both data sets (quantitative and qualitative) are collected independently, ensuring that each method retains its methodological rigor. After analysis, the findings are compared to determine convergence, divergence, or complementarity. The approach is useful when a research problem requires both broad statistical generalizability and in-depth contextual understanding.
Consequently, both quantitative and qualitative data collection methods must be designed. For the quantitative data collection, a questionnaire needs to be designed. The questionnaire uses standard approaches such as Likert scales, as well as single-choice and multiple-choice questions [23]. With regard to the qualitative data collection, expert interviews are used. To conduct them, a structured interview design, including a well-defined interview protocol, is required [15].
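For later analysis, Likert-scale answers are typically mapped to numeric scores. A minimal sketch follows; the five labels are illustrative, not the survey's actual wording:

```python
# Map a five-point Likert scale to integer scores (illustrative labels).
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Example answers to a single questionnaire item from four respondents
responses = ["agree", "strongly agree", "neutral", "agree"]
scores = [LIKERT[r] for r in responses]
mean = sum(scores) / len(scores)
print(round(mean, 2))  # 4.0
```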
The previous section created the baseline for the design of the quantitative and qualitative data collection by observing learning platforms with and without QBL support and the investigation in research methodologies. The next step is to design the study’s procedure.

3. Research Design

Section 3 outlines the study design, detailing the procedures chosen for data collection. It models the convergent mixed-methods design that was applied. Therefore, the section is structured into four parts. First, the user stereotypes are identified to define the target audience. Next, the general structure of the study is described. Thirdly, the design of the questionnaire is illustrated. Lastly, the design of the expert interview and its protocol is introduced.

3.1. Definition of Potential User Stereotypes

Derived from the identified stakeholders in the second section, five user stereotypes are specified, which cluster the users of the QBLM Platform. These user stereotypes are classified based on the identified potential stakeholders. In the following, they are described. Authors create and structure learning content, ensuring educational value. Learners engage with courses for skill development and progression. Operators/managers oversee learning initiatives, aligning them with organizational goals. System administrators handle technical infrastructure, user management, and security. Editorial staff focus on data protection, legal compliance, and content moderation. Table 2 shows the user stereotypes and the corresponding stakeholders.

3.2. General Design of the Study

The general structure consists of four sections. At the beginning is an introduction to frame the objectives and the idea of the QBLM Platform. Secondly, the requirements for a QBLM Platform are observed. This section consists of five subsections, which gather data on functionalities, use cases, technical modalities and supported devices, functional requirements, and non-functional requirements. Within the functionalities segment, the expectations regarding functionalities of the QBLM Platform are observed. In the use case subsection, relevant use cases are noted to understand the expected main user flows. Afterwards, the technical modalities and devices that users prefer when interacting with the QBLM Platform are examined. The last two subsections examine the functional and non-functional requirements of the QBLM Platform. While the functional requirements delineate the specific features and capabilities needed for user interaction and platform operation, the non-functional requirements focus on performance, security, scalability, and overall system reliability. The third section refers to the demands concerning the configuration of the QBLM Platform. This section includes four subsections: software components, standard features, individualization, and the hosting environment. Firstly, the study examines the necessary software components and applications—such as chat apps, forums, and messaging systems—which, although not core elements of the QBLM Platform, are potentially required on a learning platform. Next, the study derives the demands for standardization and individualization. The last part of the section determines the required hosting environment to derive the system’s limitations. In the fourth section, the perceived purpose of and obstacles to a QBLM Platform are researched. Therefore, the study participants have to provide their opinions and justify them. Figure 1 visualizes the described structure.

3.3. Design of the Quantitative Research

Based on the general structure, the questionnaire is designed to perform quantitative research. Before describing the questions, some preconditions must be defined. The survey is conducted as an online questionnaire, as this method allows wide accessibility, efficient data collection, and ease of analysis. To ensure inclusivity, the survey is provided in both German and English, making it accessible to a larger group of participants. The wording of both the survey text and the questions must be created carefully to ensure all respondents understand them, even if they are new to the topic of QBL. Additionally, demographic questions are included to gain insights into the participants’ backgrounds and ensure a meaningful segmentation of the results. Although the survey is primarily designed for quantitative research, it includes separate open-ended questions that allow participants to share additional ideas and comments. Furthermore, the open-ended responses are analyzed separately, ensuring that the quantitative results remain clear and consistent with the overall methodological framework. A consent section and explanatory text are provided at the beginning, informing participants about the purpose of the study and their rights. Finally, the survey length is kept concise to prevent dropouts and ensure a high completion rate.
With the preconditions established, the design of the questionnaire can follow. The questionnaire contains the four parts of the general structure, plus two extra sections to determine participants’ user profiles and collect demographic data. The questionnaire includes one of the additional sections after the introduction and another one at the end. Each section is represented with a page featuring multiple questions that are related to it. The complete structure of the questionnaire, including the questions (English version), is available in Appendix A and its subsections.

3.4. Design of the Qualitative Research

In order to perform the qualitative research, the interview needs to be designed as well. A structured expert interview is chosen as the qualitative research method because it offers in-depth insights. Engaging experts in a conversational format enables the exploration of complex issues and the collection of insights to shape the QBLM Platform’s requirements. The interviews are conducted virtually and are recorded to allow later transcription.
While the survey is structured to collect quantifiable data through fixed answer options, the expert interview follows a more open format. The participants are asked to provide detailed explanations of the underlying reasons behind their perspectives. The interview follows the general structure, too. Additionally, the interviewees are asked to provide a vita after the introduction and to sort themselves into user stereotypes. The interview concludes with thanks for the participant’s contributions. Overall, the interview is designed to last between 45 and 60 min.
The interview begins with an introductory phase where the expert is greeted, the purpose and process of the interview are explained, and permission to record the session is obtained. This phase also includes brief introductions by both the interviewer and the interviewee, during which the expert’s professional background and experience are outlined. After establishing understanding, the conversation moves into a warming-up phase in which the expert is asked to describe their affiliation with the QBLM Platform concept, clarifying which user stereotype they represent. This sets the context for the discussion that follows.
Afterwards, the main phase of the interview starts. Following the general structure, the experts are requested to discuss which functionalities are essential for the QBLM Platform. They then elaborate on relevant use cases, sharing insights into practical scenarios such as course creation, progress monitoring, and qualification assignment. The discussion continues with an exploration of the technical modalities and devices suited to interaction with the QBLM Platform. What follows is an examination of non-functional requirements like performance, usability, and security, as well as critical functional requirements. The interview then shifts towards configuration topics, during which the expert discusses customizable software components, the balance between standardization and individualization, and the optimal hosting environment for the QBLM Platform. Finally, the interview investigates the purpose of creating a QBLM Platform and potential obstacles related to its creation. The experts are asked to reflect on its value and the challenges that might block its successful implementation. In conclusion, the expert is given an opportunity to share any additional thoughts or insights not covered during the main discussion.
After defining the user stereotypes and designing the questionnaire and interview plan, the convergent mixed-methods design for the study is finalized. The questionnaire captures structured, statistical insights, while the interviews provide deeper perspectives. Both data sets will be produced independently and in parallel. For the comparison, the findings will be merged in the evaluation phase, identifying alignments, discrepancies, and complementary insights.
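The planned merging step can be sketched as a simple set comparison of coded findings. The requirement labels below are hypothetical examples, not results of this study:

```python
# Illustrative convergence check: requirements supported by the survey
# (quantitative) versus requirements named in the interviews (qualitative).
# All requirement labels are hypothetical.
survey_supported = {"progress_tracking", "mobile_access", "certificates"}
interview_named = {"progress_tracking", "certificates", "data_protection"}

convergent = survey_supported & interview_named   # found by both methods
quant_only = survey_supported - interview_named   # divergent or complementary
qual_only = interview_named - survey_supported

print(sorted(convergent))  # ['certificates', 'progress_tracking']
```

Convergent items strengthen a requirement's justification, while items found by only one method prompt closer inspection during the discussion.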

4. Research Execution and Results

This section describes the execution of the study. As the quantitative and qualitative research are performed separately, the section is also divided into two subsections. The first subsection describes the quantitative research, i.e., the implementation of the survey, and illustrates its technical and organizational aspects. The second subsection outlines the execution of the qualitative research, i.e., the implementation of the expert interviews.

4.1. Execution of Quantitative Research

To perform the survey, appropriate software tooling is required. Therefore, the open-source software LimeSurvey, provided by the University of Hagen [24], is used. Within the software, the introduction, legal texts, and the described pages, including the questionnaire, are implemented. To access the survey, a shareable link is generated. The software allows the survey to be completed on desktop computers and mobile devices; however, the visualization is optimized for desktop resolutions. The link to the survey was shared via multiple channels, such as social media (e.g., LinkedIn) and the survey pool of the University of Hagen. A total of 160 usable and complete survey responses were collected between 3 October 2024 and 3 December 2024.
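LimeSurvey can export collected responses, e.g., as CSV. The following sketch filters complete responses from such an export using only the Python standard library; the column names and values are hypothetical, as the real export format depends on the survey configuration:

```python
import csv
import io

# Hypothetical excerpt of a CSV response export; real column names
# depend on the survey configuration.
raw = io.StringIO(
    "id,submitdate,lastpage\n"
    "1,2024-10-05 10:12,6\n"
    "2,,3\n"  # empty submit date -> incomplete response
    "3,2024-11-20 14:01,6\n"
)

rows = list(csv.DictReader(raw))
# Keep only responses that were actually submitted
complete = [r for r in rows if r["submitdate"]]
print(len(complete))  # 2
```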

4.2. Execution of Qualitative Research

The expert interviews were conducted via video telephony software (Zoom). The selection of interview partners was based on the identified user stereotypes and the goal of assembling a heterogeneous set of interviewees. Table 3 gives an overview of the interviewees. The table provides identifiers (used for references in the continuous text), a concise description of each individual’s expertise and role, their professional context (HEI or enterprise), and the assigned user stereotype. The classification of the experts took place during the interview, together with the experts. Each interview was recorded, and a transcription was generated afterwards.
The data from both the quantitative and qualitative research were collected and processed for further analysis. With the completion of data gathering, the study now transitions to the evaluation phase, where insights will be extracted, patterns identified, and findings interpreted to answer the RQs.

5. Evaluation and Discussion

The evaluation and discussion phase focuses on analyzing both the quantitative and qualitative findings. This step intends to identify key insights, compare trends, and interpret the results to address the RQs. By integrating statistical analysis with qualitative insights, the study ensures a well-founded understanding of user needs and platform requirements. The section is structured into three subsections: first, the survey is evaluated, followed by an analysis of the expert interviews; finally, both sets of findings are combined for interpretation.

5.1. Evaluation of Quantitative Research

Two hundred and sixty-five participants started the survey. Out of these, 160 participants completed it. There were 68 (42.5%) female participants, 86 (53.75%) male participants, and 6 (3.75%) gender-diverse participants. With regard to educational qualifications, most participants held a master’s degree (29.38%), followed by a bachelor’s degree (25.63%). About 13.13% had a university entrance qualification, 10% had completed vocational training, and 7.50% held a doctorate. Regarding age, 5.63% were 18–24, 16.25% were 25–34, 22.50% each were 35–44 and 45–54, 25.63% were 55–64, and 7.50% were 65 or older. Most respondents were employees/workers (93/58.13%), followed by students (30/18.75%), pensioners (18/11.25%), and the self-employed (9/5.63%, obtained from the “other” section). Five teachers accounted for 3.13%, two unemployed participants for 1.25%, and 1.88% (three) selected “other.” No one was classified as a pupil, lecturer/professor, or trainee.
Compared to the German population (used as a reference since the study was conducted in Germany), the proportion of male participants is elevated [25], the educational level of participants is significantly higher than the average [26], and both young and old age groups are under-represented (compared to [27]: 18–24: 9%; 25–34: 15%; 35–44: 16%; 45–54: 15%; 55–64: 19%; 65+: 27%). This demographic distribution is likely due to the high number of participants affiliated with the distance-learning university in Hagen. Although the study participants do not fully represent the population, they reflect the assumed core user group, and the study is therefore relevant to the design of the system. This assumes that the QBLM Platform will be primarily used by adult learners in HEI and enterprise environments. Nevertheless, future studies should include a more demographically diverse sample to further validate broader usability and accessibility.
The participants were asked to classify their user stereotypes. They were allowed to choose multiple options, and 84% classified themselves as learners, which is by far the biggest group and underlines the main purpose as a learning platform. In addition to the given groups, six employees (3.75%) classified themselves as “System Developers” via the “other” text field. They were included as a separate category, and these participants were subtracted from the “other” group. As a result, the needs of system developers should also be considered in the system’s design and development. Since the “unemployed” (2) and “other” (3) categories each included very few participants, they were excluded from the final evaluation of user stereotypes. Figure 2 visualizes the distribution of user stereotypes. The x-axis represents different professions, while the y-axis represents the percentage of users within each profession who fit into different user stereotypes. In the legend, the total number of participants for each profession and user stereotype is shown in brackets. All students and most of the employees, self-employed professionals, and pensioners classified themselves as learners. Only teachers showed significantly lower identification with this stereotype. Teachers mainly classified themselves as authors and editorial staff, which suggests that teachers are more involved in content creation than in consuming learning content. Self-employed professionals also showed a higher classification rate as both authors and editorial staff. For other user stereotypes, no significant correlation with specific professions is evident.
Next, the main functions of the QBLM Platform were identified. Figure 3 shows the results of the Likert scales. For each functionality, one row shows the participants’ agreement with including that functionality in the QBLM Platform. The diagram represents the distribution of the 160 participants using percentages. A consensus can be observed for all functionalities except the ability to use the QBLM Platform for in-person learning.
Therefore, the QBLM Platform has to provide the expected management functionalities, such as maintaining users and CQs. Furthermore, a support functionality such as a help desk, as well as the creation of learning content, should be offered. The available QBLM Software LMS, CAT, CM, and CPM are confirmed to be relevant. In addition, the QBLM Platform should be extended or should allow interoperation with user management and support systems. Learning in person was not considered relevant, which likely does not reflect a general rejection of in-person learning but, rather, its misalignment with the implied digital and systemic nature of CBL and the QBLM Platform. Users perceive the QBLM Platform as a tool to support, manage, and track online learning. In consequence, the applicability of CBL for in-person learning has to be evaluated and outlined.
Additionally, the participants were asked to share additional ideas for relevant functionalities. One idea was a seamless LMS integration through APIs and external interfaces while maintaining system stability. Participants mentioned features including learning and content delivery, supporting self-assessments, progress tracking, and personalized recommendations. A further suggestion was customization, allowing learners to adapt the UI to their needs. Motivation and engagement were also highlighted, with features like learning strategies and interactive elements. Participants also highlighted the importance of feedback systems. Analytics was another key aspect, providing insights into learning progress. Some suggested community and collaboration features to facilitate peer exchange, while others proposed time management tools to support structured learning. These suggestions indicate a strong user demand for a more integrated, personalized, and engaging learning experience. In consequence, the future development of the QBLM Platform should prioritize interfaces and features that improve feedback and collaboration.
Next, the most relevant use cases were examined. Each respondent could select up to three preferred use cases. The most frequently chosen use case was participation in online learning modules, selected by 105 participants (65.63%). This was followed by monitoring learning progress, chosen by 78 participants (48.75%), and providing learning materials, which 71 participants (44.38%) found relevant. Additionally, 65 participants (40.63%) identified conducting examinations and tests as a key use case. Supporting and supervising learners was selected by 48 participants (30%), while creating and managing courses was considered important by 36 participants (22.5%). Less frequently mentioned use cases included assigning competencies and qualifications (28 participants, 17.5%), participation in in-person events (19 participants, 11.88%), and the integration of external content and tools (13 participants, 8.13%). The least selected use case was generating reports and analyses, with 12 participants (7.5%) considering it relevant. The results emphasize a strong focus on participation in online learning and progress tracking, while administrative and analytical functions were of lower priority to the respondents. In consequence, ensuring the seamless integration and high usability of the LMS or the platform component responsible for offering learning content should be prioritized.
Afterwards, the technical modalities and devices relevant for users to operate the QBLM Platform were examined. To this end, the participants were asked whether they would consider using the QBLM Platform on various technical devices. Figure 4 visualizes the results of the given Likert scale in percentages. The results indicate that desktop/notebook devices are the most favored option, with 92% of participants strongly agreeing with their use. Tablets also received strong support, with 59% strongly agreeing and 18% agreeing, making them the second most preferred device. Mobile devices/smartphones were widely accepted as well, with 36% strongly agreeing and 15% agreeing, though 17% disagreed and 10% strongly disagreed, indicating some hesitation.
Print/books and audiobooks show a balanced response, with 29% strongly agreeing in both cases, while 23% and 26% of participants, respectively, remained neutral. Videos were positively received, with 49% strongly agreeing and 21% agreeing, making them the most favored digital format. On-site/in-presence learning was associated with a mixed perception, with 16% strongly agreeing, while 30% remained neutral, and 18% disagreed. In contrast, virtual reality (VR) and smartwatches were met with high levels of rejection. While 39% strongly disagreed with VR, a significant 64% strongly disagreed with using smartwatches for the QBLM Platform. In consequence, traditional digital devices (desktops, tablets, and mobile devices) are the preferred modalities, while emerging technologies such as VR and smartwatches face resistance among users. This can be utilized to concentrate the development on these technical modalities.
Additionally, participants provided further suggestions regarding access to the QBLM Platform. One idea was offline usage. Another proposal was support for gaming consoles—such as the Nintendo Switch—as a potential learning device. Accessibility was also highlighted, with a suggestion to ensure usability for blind and deaf users. Health considerations were raised, recommending guidelines on ergonomic risks. Further technical enhancements included integrating video conferencing tools and a virtual whiteboard to support collaboration. Artificial intelligence-based learning was another key theme, with suggestions for adaptive training that adjusts to users’ demands. Participants also proposed e-book reader compatibility for longer texts. Furthermore, compliance for corporate computers, including data security policies, was requested. Therefore, the QBLM Platform should support diverse access options, ensure accessibility and corporate compliance, and incorporate AI-driven personalization and collaborative tools to better align with user-centric needs. These requirements could be addressed through additional software components extending the QBLM Platform, which confirms the demand for a modular and flexible approach.
In the next section, the participants were asked to select up to three non-functional and three functional requirements that they considered essential. The results highlight the key priorities for platform design and implementation.
Among the non-functional requirements, usability (71.88%) emerged as the most important factor. This was followed by performance (48.13%), availability (43.75%), and data security and privacy (35.63%). Other aspects include costs (usage costs as a learner) (32.50%), which were more relevant than costs related to operating expenses and investments (8.13%). Accessibility (15.00%), backup and recovery (13.75%), scalability (10.63%), and regulatory compliance (9.38%) were assigned lower overall importance. Consequently, the focus has to be placed on ensuring an intuitive and user-friendly interface, fast performance with minimal downtime, and robust data security measures, as these aspects are the primary concerns for users.
Regarding functional requirements, the most important feature was the integration of learning resources (54.38%). Creating and managing courses (40.63%), collaborative features (38.13%), and conducting exams and assessments (36.88%) were other key priorities. Analysis and reporting tools (26.25%) and qualification management (23.13%) were recognized as important for tracking learning outcomes and managing competencies. Integration with external systems (18.13%) and user and role management (15.63%) were less essential. Therefore, the QBLM Platform has to prioritize the integration of learning resources, course management, collaboration tools, and exam functionalities, as these are the most critical functional requirements identified by users.
In the next question, the ability to extend the QBLM platform was examined. Participants were asked to evaluate various extension features, including integration for external tools, notification systems, authorization systems, user analytics, educational games, forums, and chat functions. Figure 5 illustrates the results.
The highest level of agreement was observed for forums (77% agree), followed by notification systems (69% agree) and integration for external tools (64% agree), emphasizing the importance of communication and system integration features. Educational games (58% agree) and user analytics (53% agree) were also well received. Authorization systems received 54% agreement. Chat functions had the highest disagreement (22%), despite 53% agreement, reflecting divided opinions on their necessity. Consequently, the results indicate a strong demand for forums, notification systems, and external tool integration. While educational games, user analytics, and chat functions surpass 50% agreement, they remain important but should be prioritized after higher-demand features in the development process. However, these results show that user demands are heterogeneous. Therefore, the QBLM Platform has to support modular extensibility to allow the selective usage of features.
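To illustrate what selective feature usage could look like in practice, the following is a minimal sketch of a feature registry that enables or disables optional extensions per deployment. All names are illustrative assumptions and not part of the QBLM specification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a registry through which optional platform extensions
# (forums, notifications, chat, ...) can be enabled selectively per deployment.
@dataclass
class FeatureRegistry:
    enabled: set = field(default_factory=set)

    def enable(self, feature: str) -> None:
        """Activate an optional extension for this deployment."""
        self.enabled.add(feature)

    def is_enabled(self, feature: str) -> bool:
        """Check whether an extension is active before routing requests to it."""
        return feature in self.enabled

registry = FeatureRegistry()
registry.enable("notifications")
registry.enable("forums")

print(registry.is_enabled("forums"))  # enabled extension
print(registry.is_enabled("chat"))    # not enabled in this deployment
```

A configuration layer of this kind would let institutions with heterogeneous demands run the same platform core while exposing only the extensions their users require.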
As an additional input, the participants suggested focusing on QBL core topics while integrating established LMS platforms (e.g., Moodle). Platform independence, using modern programming languages and community-driven development, was also emphasized. Improving communication was a key theme, with calls for a unified platform channel, additional electronic communication methods, and supervised learning with qualified feedback and tailored tasks. Other proposals included a calendar with import/export functions, visible learning progress tracking, and documentation of learner activities.
Next, the participants were asked to identify which areas of the QBLM platform should be standardized and not individualized and which areas should allow for personalization. The participants were allowed to select up to three options.
In regards to standardization, the strongest consensus was observed for security and privacy policies (73.75%), followed by authentication and authorization (55.63%) and data management (storage and processing) (48.13%). User management (40.00%) and interfaces for external systems (33.13%) were also seen as areas requiring standardization. The course authoring tool (26.25%) had the lowest preference for standardization, suggesting users see value in customization.
The highest agreement for personalization was observed in learning content and methods (67.50%) and user interface and design (65.63%). The course authoring tool (43.75%) also received notable support for customization. Other areas, such as analytics and reporting (33.13%), user management (e.g., roles and permissions) (31.88%), and the integration of external tools and resources (30.63%), were seen as moderately important for personalization.
These results suggest that security, authentication, and data management should remain fixed and standardized, while content, interface design, and course authoring should be customizable to meet user demands. Therefore, the QBLM Platform has to enforce standards in core architecture to ensure reliability and compliance while enabling flexible customization in pedagogical and user-facing components to support user preferences. Figure 6 visualizes the results of the questions regarding the standardization and personalization. The figure visualizes the consensus on standardizing or personalizing the areas in percentages.
In the next section, the participants were asked to evaluate different IT infrastructures for hosting the QBLM Platform. Figure 7 illustrates the results. The organization’s own data center received the highest combined agreement, with 44% strongly agreeing and 19% agreeing, indicating a strong preference for in-house infrastructure and data control. Local servers (on-premise) also received 36% strong agreement and 19% agreement, emphasizing the preference for self-managed hosting solutions. Personal devices were widely accepted, with 49% strongly agreeing and 11% agreeing, highlighting the demand for accessibility across PCs, notebooks, and smartphones. For cloud deployment, opinions were more divided: while 36% strongly agreed and 16% agreed, disagreement was also high (17% disagree, 11% strongly disagree), indicating concerns regarding the cloud. Hybrid solutions received 21% strong agreement and 26% agreement, with a notable 24% neutral response, suggesting uncertainty. Virtual machines (VMs) received 19% strong agreement and 14% agreement but also the highest neutral (26%) and disagreement (27%) responses, indicating mixed opinions on their relevance for the QBLM Platform. These results suggest a strong preference for on-premise and data center-based solutions, while cloud and hybrid approaches should be considered.
Afterwards, the participants were asked whether they found it valuable to develop a QBLM Platform to make qualification-based learning accessible to a broader audience. The results, shown in Figure 8, indicate support for the implementation of such a platform. However, several concerns were raised regarding its practicality and feasibility. Some participants doubted whether the platform would be realistic and effectively implementable, questioning its practical benefits. In terms of future relevance, some respondents challenged the necessity of developing a new platform, arguing that the existing learning platforms already cover many needs. Additionally, there was criticism suggesting that the platform might overemphasize technical and subject-specific knowledge while neglecting important soft skills. Finally, concerns were expressed regarding implementation and resource allocation. Participants shared warnings about high costs, long development times, and potential resource waste, emphasizing the need for careful evaluation and large-scale testing before full implementation. While there is strong overall support (94%) for a QBLM platform, these concerns highlight the need for evaluation, differentiation from existing platforms, and resource planning.
Finally, the participants were asked to provide additional comments regarding the survey. They emphasized the need for realistic goals, prioritization, and leveraging existing LMS solutions (e.g., Moodle) for integration. A clear differentiation from other platforms and a unified, user-friendly system were highlighted as important for better navigation and accessibility. A well-defined didactic concept was requested, including personalized learning paths, non-digital learning resources (books, seminars), and real-time progress tracking without additional exams. Participants named simplified digital library access and intelligent writing assistance to detect common errors. Furthermore, expanding accessibility for older users, ensuring motivating and friendly communication, and actively involving users in development were key concerns. Lastly, participants requested clear explanations of the platform’s purpose and the specific problems it aims to solve. Some aspects were already mentioned in other segments of the questionnaire. However, they have to be utilized to enrich the research, too.

5.2. Evaluation of Qualitative Research

The expert interviews followed the same general structure as the quantitative research. To ensure readability and consistency, this evaluation follows a structure similar to the previous section. However, due to the extensive content of the interviews, a full transcription is not provided. Instead, this subsection presents a summary, highlighting the key aspects and main insights derived from the discussions. The experts and their vitae are already presented in Table 3. For better readability, expert identifiers are used instead of full names in the following text.
Part of the interview also discussed the classification of user stereotypes, which was included in the results. Derived from this classification, experts from HEI match roles such as author, learner, and system administrator. In contrast, professionals from enterprises, who typically manage operational and strategic responsibilities, are mostly classified by stereotypes like operator/responsible/manager, editorial staff, or system administrator.
Regarding core functionalities, experts proposed various features aligned with their respective user stereotypes. E1 proposed competency profiles, learning paths, and analytics tools to ensure data-driven learning progress tracking. E2 emphasized advanced search functions, intuitive UI, and resource management, enhancing platform usability and efficiency. E3 recommended modular architecture, event-driven processes, and microservices, enabling scalability and adaptability. E4 highlighted automated reports, simple navigation, and structured dashboards, optimizing managerial oversight. E5 suggested interactive dashboards and real-time progress visualization, reinforcing user engagement. E6 encouraged clearly structured competency profiles and self-directed learning tools, supporting learner autonomy. E7 stressed qualification tracking and fraud prevention mechanisms, ensuring data integrity and certification security. In consequence, the QBLM Platform should integrate intelligent competency tracking, advanced search capabilities, modular system design, and real-time analytics, ensuring a seamless user experience across different learning environments.
For the topic use cases, the experts outlined practical applications of the QBLM platform, aligning them with organizational and educational needs. E1 identified competency profile creation and analysis as key to structuring qualification pathways. E2 focused on employee training and resource planning, enhancing workforce development strategies. E3 stressed the importance of course and exam integration within university systems. E4 pointed out qualification assignment and tracking, ensuring compliance in professional settings. E5 highlighted learning path creation and certification management, particularly relevant in regulated industries. E6 advocated for course standardization and comparability, improving learning consistency. E7 emphasized fraud detection and adaptive algorithms, ensuring fairness and quality control. Therefore, the QBLM Platform should cater to both structured academic pathways and corporate training environments, with features supporting certification, regulatory compliance, and qualification assignment.
Next, the analysis of expert opinions on technical modalities is presented. E1 emphasized a focus on mobile devices (smartphones, tablets) to facilitate on-the-go learning. E2 preferred desktop environments with strict security and high processing capabilities. E3 emphasized LMS integration (e.g., Moodle) and containerized deployment, ensuring flexibility and scalability. E4 suggested web-based, cross-device compatibility, maximizing accessibility. E5 recommended integration into existing IT infrastructures, reducing implementation costs. E6 stressed the need for platform independence, enabling flexible learning environments. E7 emphasized mobile and browser compatibility, ensuring ease of access. The results indicate a preference for various technical modalities, including mobile, desktop, and web-based access, to ensure flexibility.
Regarding non-functional requirements, experts highlighted data security, data compliance, system maintainability, and accessibility as essential. E1 stressed the importance of data compliance, accessibility, and data security, while E2 emphasized IT security, backups, and controlled access to maintain platform reliability. E3 focused on sustainable system reliability and optimization of computational resources. E4 highlighted cost efficiency and minimal maintenance as critical for corporate environments. E5 pointed out a user-friendly interface and process stability. E6 highlighted a balance between system complexity and ease of use. Lastly, E7 emphasized the need for qualification accreditation and validation mechanisms, ensuring alignment with educational and industry standards.
For functional requirements, experts outlined necessary features to support adaptive learning and structured assessments. E1 proposed automated analyses and testing tools to enhance personalized learning feedback, while E2 emphasized customizable competency profiles with automated progress evaluation. E3 recommended a modular system structure that supports varied learning formats and adaptive workflows. E4 and E5 both highlighted the need for progress tracking, certification automation, and dynamic learning dashboards, ensuring a comprehensive learning experience. E6 focused on flexible course structures to accommodate different learning methodologies. E7 advocated for question banks and automated reminders to reinforce continuous learning habits.
In terms of system components, experts proposed architectural elements to enhance platform scalability and usability. E1 and E2 recommended competency management modules with performance tracking, while E3 advocated for a microservices-based modular architecture to ensure scalability and adaptability. E4 and E5 suggested intuitive administrative interfaces and adaptive learning features, maximizing accessibility for different user groups. E6 emphasized competency analysis and visualization tools, enabling a data-driven approach to learning progress tracking. E7 focused on structured content management to ensure updates and long-term sustainability. Standardization was another aspect discussed with the experts. E1 stressed the need for standardized data formats and API interfaces, allowing seamless integration with external systems. E2 and E3 emphasized cross-institutional interoperability, ensuring that universities and companies can share common learning structures. E4 and E5 pointed out the necessity of standardized certification and reporting mechanisms, facilitating qualification recognition across industries. E6 and E7 supported harmonized training structures and competency frameworks.
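The standardized data formats the experts call for can be sketched as a simple exchange record. The field names below are illustrative assumptions, not the actual QBLM data model; the point is that a fixed, serializable schema is what enables cross-institutional interoperability.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical sketch of a standardized qualification record exchanged
# between institutions via an API; field names are illustrative only.
@dataclass
class QualificationRecord:
    learner_id: str
    qualification: str
    issuer: str
    issued_on: str  # ISO 8601 date

def to_exchange_format(record: QualificationRecord) -> str:
    """Serialize a record into a stable JSON payload for cross-system exchange."""
    return json.dumps(asdict(record), sort_keys=True)

payload = to_exchange_format(
    QualificationRecord("l-001", "Data Literacy", "University of Hagen", "2024-12-03")
)
print(payload)
```

Any system implementing the same schema, regardless of whether it is an HEI or enterprise deployment, could then consume and validate such records.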
On the other hand, experts highlighted the need for personalization and customization to match individual learning needs. E1 suggested individualized learning paths based on skill levels and user preferences, while E2 proposed filtering options for targeted content access. E3 and E4 emphasized flexible course configurations and adaptive system components, ensuring an individual learning experience. E5 and E6 stressed customizable dashboards and user progress tracking, making learning more engaging and structured. E7 recommended reminder functions to reinforce continuous learning habits and motivation.
Experts emphasized the need for a cloud infrastructure to ensure scalability and reliability. E1 and E2 recommended cloud solutions with scalability and adaptability to different organizational needs. E3 highlighted the importance of cross-organizational cloud systems. E4 suggested lightweight, scalable solutions with minimal maintenance requirements, reducing technical overhead for administrators. E5 focused on optimized processes for pharmaceutical training and documentation, ensuring compliance with industry regulations. E6 stressed the need for universal accessibility, allowing all users to engage with the platform, regardless of location or device. Lastly, E7 emphasized adaptations for medical education, ensuring that the platform meets specialized training requirements. These findings indicate that the QBLM Platform should adopt a cloud approach with modular adaptability, ensuring scalability, accessibility, and industry-specific flexibility across diverse educational settings.
Finally, the experts were asked whether the implementation of the QBLM platform is necessary. All experts supported the idea, recognizing its potential for structured learning, competency tracking, and assessment automation. E1, E3, and E5 strongly advocated for the platform’s importance in education and research, while E2 and E6 agreed with the implementation but stressed the need for strategic planning and system integration. E4 and E7 stated concerns about cost efficiency and differentiation from existing learning platforms.

5.3. Evaluation of Combined Results from Quantitative and Qualitative Research

This section combines and compares the results of both the quantitative survey and the qualitative expert interviews, following the convergent mixed-methods design. In consequence, a holistic perspective on the requirements for the QBLM Platform is created. While both research streams largely converge on the platform’s core requirements, certain discrepancies appeared. Again, the section follows the general structure of the research.
First, functionalities and use cases are examined. The survey results indicate support for functionalities such as online learning modules, progress tracking, and the integration of learning resources. Experts reinforced these findings, stressing the importance of competency profiles, modular course management, and dynamic dashboards that facilitate adaptive learning. This convergence underscores the necessity of a user-friendly interface with robust tracking and management features. Therefore, the QBLM Platform has to provide a consistent, intuitive design and has to offer easy navigation, including the use of diverse resources.
With regard to the technical modalities and devices, survey respondents demonstrated a strong preference for traditional digital devices while expressing hesitation toward mobile devices. In contrast, experts emphasized the necessity of mobile and web-based access, referring to accessibility and the ability to learn anywhere. This divergence suggests that users currently prefer familiar modalities, but the QBLM Platform should incorporate a mobile-compatible interface and responsive design to be future-proof. In consequence, the QBLM Platform must be flexible enough to deliver its functionality across various technical modalities (e.g., using web technologies) while also interoperating with existing systems such as LMSs or other learning platforms (e.g., LinkedIn Learning or Udemy).
Next, functional and non-functional requirements were observed. Survey participants rated usability, performance, and data security as top non-functional requirements. Experts complemented these findings by emphasizing the need for standardized data formats, secure APIs, and cross-institutional interoperability. This alignment suggests that the QBLM Platform must prioritize an intuitive design and rigorous security protocols to ensure both ease of use and data integrity.
With regard to software components and extensions, both data sets advocate a scalable and flexible system architecture. The survey results highlight a demand for the integration of additional on-demand features. Experts elaborated on this, specifying the need for a modular, microservices-based design. This agreement confirms that modularity is central to accommodating diverse user needs; the QBLM Platform architecture has to adopt this finding as a foundational design principle.
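To illustrate the modularity principle described above, the following Python sketch shows how optional, on-demand features could be registered as independent services behind a common platform core. This is a simplified sketch, not part of the study; all service names and URLs are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each QBLM feature (competency management, analytics,
# notifications, ...) is an independently deployable service that registers
# itself with the platform core. Names and URLs are illustrative only.

@dataclass
class ServiceDescriptor:
    name: str               # unique service identifier
    base_url: str           # where the service is reachable
    optional: bool = True   # on-demand features may be absent in a deployment

@dataclass
class ServiceRegistry:
    _services: dict = field(default_factory=dict)

    def register(self, descriptor: ServiceDescriptor) -> None:
        self._services[descriptor.name] = descriptor

    def resolve(self, name: str):
        # The platform core never assumes an optional service exists;
        # a missing service simply yields None instead of an error.
        return self._services.get(name)

registry = ServiceRegistry()
registry.register(ServiceDescriptor("competency-manager", "http://cm.internal"))
registry.register(ServiceDescriptor("learning-analytics", "http://la.internal"))

assert registry.resolve("competency-manager") is not None
assert registry.resolve("discussion-forum") is None  # not deployed in this setup
```

The design point of such a registry is that new components can be added or removed per deployment without changing the core, which is exactly the kind of flexibility the survey and interview results call for.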
Concerning standardization and individualization, survey participants leaned toward standardizing core elements such as security, authentication, and data management, while preferring personalization in learning content and interface design. Experts also value standardization for interoperability, emphasizing individualized learning paths and flexible dashboards as critical for engagement. This difference shows the need for a balanced approach that ensures a consistent implementation while still enabling personalized user experiences. Therefore, the QBLM architecture must not only define how functionality is provided but also establish clear rules and interfaces.
The hosting environment is the next topic. Survey respondents exhibited a marked preference for on-premise and data center-based solutions for enhanced control and data privacy. Conversely, expert interviews leaned toward cloud solutions, citing scalability, maintenance efficiency, and the flexibility required to support a growing user base. This discrepancy implies that the final system design might benefit from a hybrid hosting strategy or has to be compatible with both on-premise and cloud deployments. In consequence, the solution must introduce an abstraction layer between the QBLM Platform and the underlying infrastructure to enable deployment across various scenarios.
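One way such an abstraction layer could look is sketched below: platform code is written against a small deployment interface, and on-premise or cloud back ends implement it interchangeably. The interface, class names, and the certificate example are assumptions for illustration only, not part of the study.

```python
from typing import Protocol

# Illustrative sketch: the QBLM Platform depends only on an abstract
# deployment interface, so the same code runs on-premise or in the cloud.

class DeploymentTarget(Protocol):
    def store_object(self, key: str, data: bytes) -> None: ...
    def load_object(self, key: str) -> bytes: ...

class LocalDiskTarget:
    """On-premise deployment: objects on local storage (simulated in memory)."""
    def __init__(self) -> None:
        self._blobs: dict = {}
    def store_object(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def load_object(self, key: str) -> bytes:
        return self._blobs[key]

class CloudBucketTarget:
    """Cloud deployment: same interface, backed by an object-store SDK
    in a real system (simulated in memory here)."""
    def __init__(self) -> None:
        self._bucket: dict = {}
    def store_object(self, key: str, data: bytes) -> None:
        self._bucket[key] = data
    def load_object(self, key: str) -> bytes:
        return self._bucket[key]

def archive_certificate(target: DeploymentTarget, learner_id: str, pdf: bytes) -> None:
    # Platform logic is unaware of the concrete hosting environment.
    target.store_object(f"certificates/{learner_id}", pdf)

# The same platform function works against both deployment targets.
for target in (LocalDiskTarget(), CloudBucketTarget()):
    archive_certificate(target, "learner-42", b"%PDF-...")
    assert target.load_object("certificates/learner-42") == b"%PDF-..."
```

Because only the concrete `DeploymentTarget` implementation changes between environments, a hybrid strategy (some institutions on-premise, others in the cloud) becomes a configuration decision rather than an architectural one.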
Both the quantitative and qualitative research participants largely agreed on the importance of developing a QBLM Platform. However, experts and survey participants expressed concerns regarding the practical implementation. Some participants doubted whether the platform would be realistic and effectively implementable, questioning its practical benefits. Others emphasized the necessity of clear differentiation from existing learning platforms. Concerns were also raised regarding high costs, long development times, and potential resource waste, emphasizing the need for careful evaluation before full implementation. Therefore, a prototypical implementation of the QBLM Platform should be developed and evaluated through pilot projects in both higher education and enterprise environments.
Overall, the combination of quantitative and qualitative results reveals strong convergence on the importance of core functionalities, user-centric design, and a modular system architecture for the QBLM Platform. At the same time, discrepancies in technical modality preferences and hosting environments highlight the need for a flexible QBLM Platform design. Unlike prior research that focused either on technical modeling or on prototypical implementations, this study examines stakeholder needs across diverse aspects. The combination of the qualitative and quantitative approaches offers a basis for future platform development, fostering alignment between pedagogical goals and technical implementation.

6. Conclusions and Future Work

The study presented in this paper has explored the requirements for a QBLM Platform using a convergent mixed methods design approach. By integrating qualitative and quantitative data, it has identified key stakeholders, user stereotypes, and functional, as well as non-functional, requirements necessary for developing a QBLM Platform suitable for both HEIs and enterprises. These findings provide a foundation to implement scalable systems facilitating QBL. Furthermore, the results can be used for learning systems in general.
The research was guided by three RQs. This section addresses each research question individually, providing answers based on the study’s findings.
RQ1 asked who the stakeholders of a QBLM Platform are. The study identified a range of stakeholders who would interact with or benefit from the QBLM Platform. Stakeholders were identified throughout the study, helping to expand the initial observations. Table 4 presents an overview of the identified stakeholders; the columns “Stakeholder Category” and “Role/Impact” were added for better readability and understanding.
Therefore, the identification of stakeholders is provisionally complete. However, the list is certainly not exhaustive and can be extended. A remaining challenge is the classification of stakeholders: they can be classified as external or internal, or mapped using the Portfolio Matrix [28], which positions stakeholders based on their interest and influence within a coordinate system. This visualization helps to systematically assess their importance, making it easier to identify the stakeholders who have the greatest impact and require the most attention.
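As a minimal illustration of how such an interest/influence mapping could be operationalized, the following sketch assigns stakeholders to the four classic quadrants of the Portfolio Matrix. The scores used here are invented placeholders for demonstration, not empirical results from the study.

```python
# Illustrative sketch of the interest/influence Portfolio Matrix:
# stakeholders are placed in one of four quadrants based on scores in [0, 1].
# All scores below are hypothetical placeholders.

def classify(interest: float, influence: float, threshold: float = 0.5) -> str:
    if influence >= threshold and interest >= threshold:
        return "manage closely"    # key players requiring the most attention
    if influence >= threshold:
        return "keep satisfied"    # high influence, low interest
    if interest >= threshold:
        return "keep informed"     # high interest, low influence
    return "monitor"               # low interest, low influence

stakeholders = {
    "Learners": (0.9, 0.4),
    "Institutional Authorities": (0.8, 0.9),
    "Cloud Service Providers": (0.3, 0.6),
}
for name, (interest, influence) in stakeholders.items():
    print(f"{name}: {classify(interest, influence)}")
```

Such a classification would make the pending prioritization of stakeholders reproducible once actual interest and influence ratings have been collected.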
User stereotypes were also identified through the study to address RQ2. They not only categorize stakeholders but also define distinct user classes based on their characteristics, behaviors, and interactions with the system. The identified user stereotypes help derive functionalities and guide the system design, ensuring that the QBLM Platform implements the expected features. After conducting the survey, the study identified six primary user stereotypes: system developer, editorial staff, system administrator, operator/responsible/manager, author, and learner. Through quantitative and qualitative research, these stereotypes were validated as a preliminary set. A remaining challenge is the further refinement of the user stereotypes, including the definition of specific use cases for each.
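The refinement challenge noted above could start from a simple machine-readable model of the six stereotypes, as in the hypothetical sketch below; the example use cases attached to each stereotype are illustrative placeholders, not the validated ones from the study.

```python
from dataclasses import dataclass

# Hypothetical sketch: the six user stereotypes identified in the study,
# modeled as data so each can later be linked to concrete use cases.
# The use cases listed here are illustrative examples only.

@dataclass(frozen=True)
class UserStereotype:
    name: str
    example_use_cases: tuple

STEREOTYPES = (
    UserStereotype("System Developer", ("extend platform services",)),
    UserStereotype("Editorial Staff", ("review content for compliance",)),
    UserStereotype("System Administrator", ("manage users and roles",)),
    UserStereotype("Operator/Responsible/Manager", ("monitor learning progress",)),
    UserStereotype("Author", ("create courses and learning paths",)),
    UserStereotype("Learner", ("complete modules", "view acquired qualifications")),
)

assert len(STEREOTYPES) == 6
```

Keeping the stereotypes in such a structured form would allow use cases, permissions, and interface variants to be attached incrementally as the refinement progresses.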
RQ3 asked about the requirements for the QBLM Platform. During the quantitative and qualitative research, multiple aspects were examined. A key aspect of the research focuses on the requirements of the QBLM Platform, including its role in facilitating learning and managing learning experiences. Additionally, the study investigated the operation and configuration of the platform, ensuring that it can be effectively implemented and maintained.
The research identified a demand for online learning, tracking learning progress, and integrating learning resources. The findings revealed that learners require a user-friendly interface. Experts emphasized these insights, highlighting the need for competency profiles, adaptive learning paths, and modular course management. In terms of use cases, the need for qualification tracking, certification management, and structured assessment processes was identified. Regarding supported technical modalities, the study results show a preference for desktops and tablets, while mobile compatibility remains important for future-proofing. Web applications and integration with existing IT infrastructures were also considered essential. Functional requirements identified by survey participants and experts include the integration of learning resources, progress tracking, collaborative tools, and adaptive assessments. As non-functional requirements, usability, performance, and data security were rated as the most critical issues.
With regard to the configuration of a QBLM Platform, the study explored the structural and configurational needs. Software components identified as essential include competency management modules, learning analytics, and intuitive administrative dashboards. Furthermore, a demand for notification systems, discussion forums, and external tool integration was formulated. Experts pointed out the importance of a microservices-based architecture to ensure modularity and scalability. The need for a balance between standardization and individualization was observed: survey participants emphasized standardizing security, authentication, and data management while keeping the user interface customizable. Experts supported this, stressing standardization for competency frameworks alongside customizable dashboards and personalized learning paths. Hosting preferences varied throughout the study. While survey participants showed a strong preference for on-premise solutions, experts tended towards cloud solutions. In consequence, a flexible approach is required to serve both demands.
In contrast to the learning platforms investigated in Section 2.2, the QBLM Platform offers an individualized feature set that can be tailored to specific organizational needs. Furthermore, the integration of QBL enables the use of standardized and comparable CQs, which enables the comparability of learner qualifications across organizations. However, rather than replacing existing systems, the QBLM Platform should be designed to integrate with them. It can serve as a central hub that connects diverse learning platforms, incorporates their data, and facilitates the comparison of their learning content.
Finally, there was strong consensus supporting the implementation of the QBLM Platform. However, obstacles and potential issues were observed and need further discussion.
The remaining challenges include balancing standardization and individualization, defining a hosting strategy, integrating existing systems, enabling adaptive learning and personalization, and ensuring usability. Therefore, the observations have to be translated into concrete requirements. This process includes the creation of a software architecture, a user interface design, and operational concepts for the QBLM Platform.
In conclusion, all RQs have been addressed. The study has identified user demands for a QBLM Platform. Many findings are also applicable to other learning platforms.
However, the study had a limited scope, as the participants primarily represented a specific target group of adult learners. The correlation of the data suggests that the findings are valid for the assumed purpose of the QBLM Platform. Nevertheless, it remains uncertain whether the sample size is fully robust for all questions at a generalized population level. Despite this, the results should be considered valuable for guiding further development. While further investigation is needed to ensure the completeness of the captured requirements, the results provide a foundation for planning and designing the QBLM Platform. To validate the results, a prototypical implementation is necessary to test core functionalities, usability, and system integration. Additionally, prototypical implementations in real-world scenarios will provide further insights, ensuring the QBLM Platform meets actual user needs.

Author Contributions

Conceptualization, S.W.; methodology, S.W.; software, S.W.; validation, S.W. and J.R.; formal analysis, S.W.; investigation, S.W. and J.R.; resources, M.H.; data curation, S.W.; writing—original draft preparation, S.W.; writing—review and editing, S.W., J.R., A.V. and M.H.; visualization, S.W.; supervision, M.H.; project administration, M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study in accordance with the Ethics Compliance Checklist of the DGPs (German Psychological Society) guidelines. The checklist is based on the official procedure of the Faculty of Psychology of the FernUniversität in Hagen (https://www.fernuni-hagen.de/psychologie/fakultaet/gremien/ethikkommission.shtml) and indicates that no further approval was required.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data generated and analyzed during this study are not publicly available, but are available from the corresponding author upon reasonable request.

Acknowledgments

The authors gratefully acknowledge all interviewees for sharing their expertise during the interviews. We also thank all participants in the quantitative survey for their time and valuable contributions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CAT: Course Authoring Tool
CBL: Competence-Based Learning
CM: Competence Manager
CPM: Competence Profile Manager
CQ: Competency Qualifications
DSTM: Didactical Structural Template Manager
EU: European Union
EQF: European Qualifications Framework
HEI: Higher Educational Institute
KM-EP: Knowledge Management Ecosystem Portal
LMS: Learning Management System
LO: Learning Object
LRS: Learning Record Store
LTI: Learning Tools Interoperability
PAGEL: Psychologische Arbeitsgestaltung erleben (English: experiencing psychological work design)
QBL: Qualification-Based Learning
QBLM: Qualifications-Based Learning Model
RQ: Research Question
VR: Virtual Reality

Appendix A. Quantitative Research Questionnaire

This section presents an overview of the questionnaire introduced in Section 3.3. Each subsection begins with a description of the respective content, followed by a table that outlines the questions and their corresponding types. In addition to the English version presented, a corresponding German version of the questionnaire was made available to participants.

Appendix A.1. Structure of the Questionnaire Part 1—Introduction

The questionnaire begins by outlining the objectives and contextual background of the study, framing the concept of QBL as an approach that is the baseline of the QBLM Platform. Additionally, the legally required statements and a statement about gender neutrality are provided. This introduction sets the stage for the questionnaire by underlining the significance of both technical and social dimensions of learning to enhance overall education.
Table A1. Structure of the questionnaire 1.
Section: 1. Introduction
  • Overview of Qualifications-Based Learning (QBL) and the QBLM Platform. (Informative text)
  • Study objectives, confidentiality, and estimated completion time (15–20 min). (Informative text)

Appendix A.2. Structure of the Questionnaire Part 2—User Profiles

In the user profiles section, participants first answer a single-choice question asking, “What is your current employment status?” with options such as unemployed, pupil, student, teacher, lecturer/professor, employee/worker, trainee, pensioner, and other. This is followed by a multiple-choice question, “Which user group(s) do you belong to?” for which respondents can select all that apply from options including authors, learners, operator/responsible/manager, system administrators, and editorial staff (data protection, legal and organizational issues), along with an option for Other. These questions categorize the respondents and provide a context for interpreting their views on the platform. It also provides an overview of the expected audience professions.
Table A2. Structure of the questionnaire 2.
Section: 2. User Profiles
  • What is your current employment status? (Single-Choice)
  • Which user group(s) do you belong to? (Multiple-Choice)

Appendix A.3. Structure of the Questionnaire Part 3—QBLM Platform Requirements

Next, the requirements section examines the specific expectations for a QBLM Platform. It begins with a Likert-scale question asking, “I consider the following functionality to be an essential expectation of the QBLM Platform,” where respondents rate the items learning (online), learning (in-person), creating learning content, creation of learning paths (modules/courses), manage qualifications, assign qualifications to learners, support for learners, view acquired qualifications, and user management using response options from strongly disagree to strongly agree, plus a no-response option. An open-ended follow-up question, “Are there any other features you expect from the QBLM Platform?” allows for additional input. Next, a multiple-choice question requires participants to select up to three relevant use cases—options include create and manage courses, monitor learning progress, assign competencies and qualifications, provide learning materials, support and supervise learners, the integration of external content and tools, participation in in-person events, participation in online learning modules, conduct examinations and tests, generate reports and analyses, and other. Another Likert-scale question then asks, “I would consider the following device (respectively technical modality) for using the QBLM Platform,” listing options such as mobile devices/smartphones, desktop/notebook, tablet, smartwatch, virtual reality (VR), video, on-site/in presence, audiobook, and print/a book. An additional open-ended question invites suggestions for any other devices or modalities respondents might consider. 
The section concludes with two multiple-choice questions: one on non-functional requirements—asking respondents to choose up to three from options like performance, availability, usability, data security and privacy, backup and recovery, costs (operating and usage), accessibility, regulatory compliance, scalability, and other—and another on functional requirements, where respondents select up to three essential functions from options such as create and manage courses, user and role management, qualification management, the integration of learning resources, conduct exams and assessments, analysis and reporting tools, collaborative features, the connection of external systems, and other.
Table A3. Structure of the questionnaire 3.
Section: 3. Requirements on a QBLM Platform
Functionalities:
  • I consider the following functionality to be an essential expectation of the QBLM Platform. (Likert Scale)
  • Are there any other features you expect from the QBLM Platform? (Open-Ended)
Use Cases:
  • Which of the following use cases are most relevant to you when using the QBLM Platform? (Multiple-Choice, select up to 3)
Modalities and Devices:
  • I would consider the following device (technical modality) for using the QBLM Platform. (Likert Scale)
  • Are there other devices (or modalities) you would consider for using the QBLM Platform? (Open-Ended)
Non-functional Requirements:
  • What non-functional requirements are most important to you when using the QBLM Platform? (Multiple-Choice, select 1–3)
Functional Requirements:
  • Which functional requirements are essential for you? (Multiple-Choice, select 1–3)

Appendix A.4. Structure of the Questionnaire Part 4—QBLM Platform Configuration

The configuration section focuses on how the QBLM Platform should be set up. It starts with a Likert-scale question: “It is important to me to be able to add and customize the following components to the QBLM Platform,” with items like chat functions, forum, educational games, user analytics, authorization system, notification systems, and integration for external tools, rated from strongly disagree to strongly agree with a no response option. An open-ended question follows, asking, “Are there any other components you would add to the QBLM Platform?” The section continues with two multiple-choice questions: one asking, “Which of the following areas should be standardized and not individualized?” with options such as user management, course authoring tool, security and privacy policies, data management (storage and processing), interfaces for external systems, authentication and authorization, and other; and another asking, “In which areas do you expect the possibility for personalization?” with options including user interface and design, user management (e.g., roles and permissions), course authoring tool, learning content and methods, analytics and reporting, the integration of external tools and resources, and other. Finally, a Likert-scale question examines the hosting environment by asking, “The QBLM Platform should be able to be deployed on the following IT infrastructure,” with response items such as the cloud (e.g., AWS, Azure, Google Cloud), an organization’s own data center, local servers (on-premise), Personal devices (e.g., PC, notebook, smartphone), virtual machines (VMs), and a hybrid solution (combination of cloud and on-premise), followed by an open-ended question for any additional suggestions or comments.
Table A4. Structure of the questionnaire 4.
Section: 4. Configuration of a QBLM Platform
Software Components:
  • It is important to me to be able to add and customize the following components to the QBLM Platform. (Likert Scale)
  • Are there any other components you would add to the QBLM Platform? (Open-Ended)
Standard Features:
  • Which of the following areas should be standardized and not individualized? (Multiple-Choice)
Individualization:
  • In which areas do you expect the possibility for personalization? (Multiple-Choice)
Hosting Environment:
  • The QBLM Platform should be deployed on the following IT infrastructure. (Likert Scale)
  • Do you have any further suggestions or comments? (Open-Ended)

Appendix A.5. Structure of the Questionnaire Part 5—Sense and Obstacles of a QBLM Platform

In the sense and obstacles section, respondents first answer a single-choice question, “Do you find it valuable to develop a QBLM Platform to make qualification-based learning accessible to a broader audience?” with the options yes and no. For those who answer no, a conditional open-ended question prompts them to explain their reasoning. This section captures overall support for the platform, as well as concerns that may present obstacles to its implementation.
Table A5. Structure of the questionnaire 5.
Section: 5. Sense and Obstacles
  • Do you find it valuable to develop a QBLM Platform? (Single-Choice)
  • If not, please explain why. (Open-Ended)

Appendix A.6. Structure of the Questionnaire Part 6—Demographic Questions

The final demography section collects essential background information through single-choice questions. Participants indicate “Which gender do you identify with?” by selecting female, male, or non-binary. They then answer, “What is your highest level of education?” choosing from options including no school leaving certificate, secondary school, intermediate school leaving certificate, university of applied sciences entrance qualification, general university entrance qualification, completed vocational training, technician, bachelor’s degree, master’s degree, doctorate (PhD), and other. The section concludes with the question, “How old are you?” offering age ranges from 0–13 years old up to 65 years old and older. These demographic questions facilitate a detailed analysis by linking participant characteristics to their responses on the QBLM Platform.
Table A6. Structure of the questionnaire 6.
Section: 6. Demography
  • Which gender do you identify with? (Single-Choice)
  • What is your highest level of education? (Single-Choice)
  • How old are you? (Single-Choice)

References

  1. European Ministers of Education. The Bologna Declaration of 19 June 1999. Available online: http://www.ehea.info/Upload/document/ministerial_declarations/1999_Bologna_Declaration_English_553028.pdf (accessed on 22 January 2023).
  2. European Commission: Directorate-General for Employment, Social Affairs and Inclusion. The European Qualifications Framework–Supporting Learning, Work and Cross-Border Mobility–10th Anniversary; Publications Office of the European Union: Luxembourg, 2018; Available online: https://data.europa.eu/doi/10.2767/385613 (accessed on 23 June 2025).
  3. European Commission. European Skills Agenda for Sustainable Competitiveness, Social Fairness and Resilience. Available online: https://ec.europa.eu/social/main.jsp?catId=1223&langId=en (accessed on 26 June 2025).
  4. OECD. OECD Skills Strategy 2019: Skills to Shape a Better Future; OECD Publishing: Paris, France, 2019. [Google Scholar] [CrossRef]
  5. OECD. Getting Skills Right: Future-Ready Adult Learning Systems, Getting Skills Right; OECD Publishing: Paris, France, 2019. [Google Scholar] [CrossRef]
  6. Koper, R.; Schoonenboom, J.; Manderveld, J.; Kluijfhout, E.; Arjona, M.; Griffiths, D.; Van Rosmalen, P. Updated Use Case Models and Underlying Vision Documents and Pedagogical Model Definitions. Educational Cybernetics: Reports. 2008. Available online: https://ub-ir.bolton.ac.uk/esploro/outputs/report/Updated-use-case-models-and-underlying/999905708841 (accessed on 7 July 2025).
  7. 1EdTech Consortium, Inc. 1EdTech Learning Design Information Model. Available online: https://www.imsglobal.org/learningdesign/ldv1p0/imsld_infov1p0.html (accessed on 17 July 2024).
  8. Then, M.; Wallenborn, B.; Fuchs, M.; Hemmje, M. Towards a Domain Model for Integrating Competence Frameworks into Learning Platforms. Formamente Int. Res. J. Digit. Future 2016, 3. [Google Scholar]
  9. Then, M.; Hoang, M.D.; Hemmje, M. A Moodle-Based Software Solution for Qualifications-Based Learning (QBL). Available online: https://ub-deposit.fernuni-hagen.de/servlets/MCRFileNodeServlet/mir_derivate_00001796/Then_QBL_Moodle_Plugin_2019.pdf (accessed on 7 July 2025).
  10. Wallenborn, B. Entwicklung Einer Innovativen Autorenumgebung; FernUniversität in Hagen: Hagen, Germany, 2018. [Google Scholar] [CrossRef]
  11. Then, M. Supporting Qualifications-Based Learning (QBL) in a Higher Education Institution’s IT-Infrastructure; FernUniversität in Hagen: Hagen, Germany, 2019. [Google Scholar] [CrossRef]
  12. Vu, B. A Taxonomy Management System Supporting Crowd-Based Taxonomy Generation, Evolution, and Management; FernUniversität in Hagen: Hagen, Germany, 2019. [Google Scholar] [CrossRef]
  13. Wangen, E.N. What Is a Software Platform & How Is It Different From a Product? Available online: https://blog.hubspot.com/marketing/software-platform (accessed on 30 October 2023).
  14. Gottesdiener, E. The Software Requirements Memory Jogger; Goal/QPC: Salem, NH, USA, 2005. [Google Scholar]
  15. Creswell, J.W.; Creswell, J.D. Research Design—Qualitative, Quantitative, and Mixed Methods Approaches, 5th ed.; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2018. [Google Scholar]
  16. Srbecky, R.; Winterhagen, M.; Wetzel, S.-A.; Ochsendorf, I.; Hedderoth, A.; Then, M.; Wallenborn, B.; Fischman, F.; Vu, B.; Fraas, W.; et al. Dynamic and Adaptive Playout of Competency-Based Learning Games Based on Data in Learners’ Competency Profile Considering Didactical Structural Templates. In Gamification; Deliyannis, I., Ed.; IntechOpen: Rijeka, Croatia, 2022; Chapter 9. [Google Scholar] [CrossRef]
  17. Gererstorfer, J. Kurs-Profi. Available online: https://kursprofi.com/lernplattform-vergleich/ (accessed on 14 March 2025).
  18. Tovote, A. Salesforce Trailhead. Available online: https://comselect.de/salesforce-trailhead/ (accessed on 1 November 2024).
  19. Udemy. About, Udemy. Available online: https://about.udemy.com/de/ (accessed on 9 February 2025).
  20. Schulze-Jägle, E. Digitale Weiterbildung mit Hilfe des Betriebsrats. Available online: https://www.checkpoint-elearning.de/corporate-elearning/udemy-fuehrt-integrated-skills-framework-ein (accessed on 9 February 2025).
  21. LinkedIn Learning. Empowering Careers. Propelling Companies. Available online: https://learning.linkedin.com/product-overview (accessed on 9 February 2025).
  22. Salesforce. Skill Up for the Agentforce Era with Trailhead. Available online: https://trailhead.salesforce.com/de (accessed on 9 February 2025).
  23. Sauro, J. 15 Common Rating Scales Explained. Available online: https://measuringu.com/rating-scales (accessed on 15 February 2025).
  24. FernUniversität Hagen. Umfrageverwaltung. Available online: https://umfrage.fernuni-hagen.de/ (accessed on 16 February 2025).
  25. Destatis (German Federal Statistical Office). Current Population. Population. Available online: https://www.destatis.de/EN/Themes/Society-Environment/Population/Current-Population/_node.html (accessed on 24 June 2025).
  26. Destatis (German Federal Statistical Office). Educational Attainment of the Population in Germany. Education Level. 2025. Available online: https://www.destatis.de/EN/Themes/Society-Environment/Education-Research-Culture/Educational-Level/Tables/educational-attainment-population-germany.html (accessed on 24 June 2025).
  27. Destatis (German Federal Statistical Office). Population: Germany, Reference Date, Age. 31 December 2024. Available online: https://www-genesis.destatis.de/datenbank/online/statistic/12411/table/12411-0005 (accessed on 24 June 2025).
  28. Cheshmberah, M. Projects portfolio determination based on key stakeholders’ expectations and requirements: Evidence from public university projects. J. Proj. Manag. 2020, 5, 139–150. [Google Scholar] [CrossRef]
Figure 1. General structure of the study.
Figure 2. User stereotypes by profession.
Figure 3. Opinion on main functionalities of QBLM Platform.
Figure 4. Relevance of technical modalities for the QBLM Platform.
Figure 5. Relevance of extensions for the QBLM Platform.
Figure 6. Overview of standardized and personalizable areas within the QBLM Platform.
Figure 7. Hosting required for the QBLM Platform.
Figure 8. Opinion on QBLM Platform demand.
Table 1. Feature comparison of Udemy, LinkedIn Learning, and Trailhead.
Feature | Udemy | Udemy for Business | LinkedIn Learning | Trailhead
Recognition of Learning ProgressX
Subscription Model for Unlimited AccessXX
Personalized Learning Paths
HR Integration/Profile SynchronizationXXX
Course ReviewsXX
Adaptive Content DeliveryX
Course Search Functionality
Bookmark/Save Learning ContentX
Visual Learning Progress Tracking
Certificate Issuance
Web-Based Access
Mobile App Support
Interactive Community Features
Gamification ElementsXXX
Table 2. User stereotype definition.
User Stereotype: Corresponding Stakeholders
  • Authors: Content creators, instructors, course designers, corporate trainers
  • Learners: Students, professionals, hobbyists, corporate trainees
  • Operator/Responsible/Manager: Institutional representatives, corporate learning managers, HR departments
  • System Administrators: LMS administrators, platform administrators, IT managers
  • Editorial Staff (Data and Legal Compliance): Data protection officers, legal experts, accreditation bodies, compliance authorities
Table 3. Interview experts and their user stereotypes.
Identifier | Expertise & Role | Context | User Stereotype
E1 | Researcher in gaming and learning analytics and PhD candidate, specializing in data-driven approaches in educational technologies | HEI | Author, learner
E2 | Researcher and product owner in the university system since 2006, with expertise in system development | HEI | Operator/responsible/manager, system administrator
E3 | Co-founder of the QBLM concept and software architect specializing in modular systems | HEI | Author, system administrator
E4 | Continuing education specialist with eight years of experience in the financial services sector | Enterprises | Operator/responsible/manager
E5 | Pharmacist with experience in LMS and documentation systems, working in the pharmaceutical industry for more than 10 years | Enterprises | System administrator, learner, editorial staff
E6 | Training manager with 15 years of experience in commercial education | Enterprises | Learner, operator/responsible/manager
E7 | Head of a university clinic with extensive expertise in medical education | HEI | Operator/responsible/manager, learner, editorial staff
Table 4. Identified stakeholders for a QBLM Platform.
Stakeholder Category | Stakeholders | Role/Impact
Learners
  • Students
  • Professionals
  • Corporate trainees
  • Hobbyists
Primary users performing courses and acquiring qualifications
Content Creators
  • Course authors
  • Instructional designers
  • Corporate trainers
  • University professors
  • Independent instructors
Create and structure learning content, ensuring educational value
Institutional Authorities
  • HEIs
  • Corporate learning and development departments
Provide learning environments, enforce qualification standards
IT Providers
  • System administrators
  • IT managers
  • System developers
  • Software architects
Maintain infrastructure, security, and integration
Regulatory and Compliance Authorities
  • Data protection officers
  • Legal experts
  • Industry certification organizations
Ensure compliance with data security, privacy, and accreditation standards
Technology and Infrastructure Providers
  • Cloud service providers
  • Hosting providers
  • Developers of external tools
  • Game developers
Support hosting, infrastructure, and external tool integration
Research and Analytics Experts
  • Educational technology researchers
  • Gaming analytics professionals
  • Learning analytics professionals
  • Data scientists
Analyse user data and improve platform learning strategies
Enterprise Decision-Makers
  • Executives
  • Human Resource professionals
Define corporate learning strategies, fund implementation and development
End-User Support
  • Technical support teams
  • Education specialists
Manage user support, discussions, and advice on learning paths
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wetzel, S.; Roeder, J.; Vogler, A.; Hemmje, M. Requirement Analysis for a Qualifications-Based Learning Model Platform Using Quantitative and Qualitative Methods. Information 2025, 16, 594. https://doi.org/10.3390/info16070594

