1. Introduction
A pivotal role of universities is to connect with society (Ministerio de Ciencia, Tecnología e Innovación de Colombia, 2022). Through technology transfer, universities can serve as engines of social and economic transformation by converting research outcomes into products, processes, and services that benefit industry, the public sector, and the broader community. This transformation bridges academia’s innovative potential with societal needs, positioning universities as key agents for sustainable development and helping them maintain their leadership in innovation-based competitive environments (Yuan et al., 2016). However, technology transfer is inherently complex: inventions produced in academic settings often do not automatically mature into commercially viable products (Rueda et al., 2020), because many universities lack a comprehensive roadmap that integrates the technological, business, and market elements necessary to transform research into marketable products (Bazan, 2019).
The technology transfer issue has been addressed from various perspectives, including product transfer (Arenas and González, 2018; El-Ferik and Al-Naser, 2021; Landi and Wei, 2018), maturity measurement (Bruno et al., 2020; Lavoie and Daim, 2017), and innovation development methodologies (Martin, 2023; Rutitis and Volkova, 2021; Varl et al., 2022). In particular, Arenas and González (2018), El-Ferik and Al-Naser (2021), and Landi and Wei (2018) reviewed collaborative frameworks among stakeholders and evaluated the benefits, impacts, challenges, and strategies for the successful commercialization of products generated by academia. Bruno et al. (2020) and Lavoie and Daim (2017) introduced metrics and models to assess technological, market, business, and manufacturing maturity, as well as various aspects that affect the development of a marketable product. Martin (2023), Rutitis and Volkova (2021), and Varl et al. (2022) proposed scalable, user-centric models and agile approaches to create minimum viable products that enable early validation. Despite the contributions made by the aforementioned research, none of these studies integrates continuous maturity measurement with a systematic, step-by-step roadmap for guiding and scaling academic inventions and prototypes toward development, maturity, and successful transfer to the productive sector.
This paper proposes a novel approach, comprising a Framework and a Method, for developing and maturing ICT products to facilitate the transfer of technology and prototypes from academic inventions to the productive and social sectors. The novelty of our approach lies in the explicit coupling of iterative development with maturity gates, the treatment of setbacks, and a tool-chain mindset for academia-to-industry transfer. In this regard, the Framework outlines the phases of an innovation process while emphasizing continuous market engagement and maturity assessment. The Method demonstrates how academia can conduct these phases to realize an invention by combining Scrum and Technology Readiness Levels (TRLs), enabling timely adjustments and guiding the product toward the desired maturity level. The approach was evaluated by Colombian innovation experts using Content Validity Indices (CVIs) and Content Analysis. From a holistic perspective, according to the CVI assessment, the experts considered the Framework and Method to be relevant, suggesting the need to clarify certain concepts to achieve complete consensus among experts. In turn, the Content Analysis corroborated the Framework’s relevance and the Method’s clarity, while providing practical recommendations for improving the tool’s presentation and comprehensibility. The results show that our approach is a promising solution to guide academic discoveries from initial concept through maturity to effective future transfer to the productive sector.
The key contributions of this paper are as follows:
A framework outlining the phases that an ICT invention must traverse to evolve into an innovation.
A method that combines Scrum and TRL to develop and mature academic ICT products for commercialization.
A CVI and Content Analysis-based evaluation of the proposed approach conducted by Colombian innovation experts.
The remainder of this paper is organized as follows: Section 2 reviews the related work. Section 3 introduces the Framework and its components for the development and maturity of ICT products. Section 4 outlines the Method for developing and maturing ICT products. Section 5 evaluates the approach and discusses the main findings. Section 6 presents conclusions and outlines future work.
2. Related Work
The related work was collected by using a Systematic Literature Review, in which a structured search was conducted in Scopus and Web of Science for articles published between 2000 and 2024. The objective was to answer the following research question: What methods, methodologies, frameworks, guidelines, or techniques have been applied to enable the development, maturation, and transfer of research results to the productive or social environment? The search terms included “technology transfer,” “maturity assessment,” “maturity model,” “university-industry transfer,” “TRL,” “technological development,” “academic inventions,” and “development methodologies.” The inclusion criteria were peer-reviewed articles published in English or Spanish that addressed technological development, product maturity, academic inventions, or technology transfer. Titles and abstracts were examined, and the full texts of eligible studies were retrieved. The following paragraphs present the most relevant articles identified through this process.
Du et al. (2012) introduce a comprehensive Product Maturity Level (PML) model to analyze high-tech products. This model combines TRL, MRL, and ARL approaches, establishing a general regularity in maturity and laying the foundation for advancing new product research, optimizing supply, and fostering continuous improvement.
De Fontaines et al. (2013) propose a framework for scaling TRL, facilitating technology transfer between research and development projects through technological maturity. Their approach is based on usability tests that assess each TRL level in specific scenarios, illustrating the practical application of the products on a small scale and making it possible to anticipate their future use. By anticipating the context and conditions of use, the requirements of technology products are refined and validated.
de Carvalho et al. (2021) present the Design Process for Innovative Products (dIP), a model for designing mechatronic products that integrates Design Thinking techniques, concurrent engineering, and agile methodologies with intellectual property management activities. The goal is to transform abstract ideas into operational products by seamlessly combining today’s three most influential product design philosophies and guiding intellectual asset management throughout the product lifecycle. The dIP model delineates a lifecycle into three primary phases and six design phases. These are implemented as tasks within a workflow incorporating five intellectual property management activities. At the core of the process are the Initial Requirements and Development Cycle phases. These stages apply Design Thinking principles to stimulate creativity and concurrent engineering with agile methodologies to address emerging requirements and shorten development timelines. Throughout the dIP, intellectual asset management activities accompany the evolution of prototypes and their components until the final product is reached. It is important to note that this model focuses on agile methods for innovation development without addressing technology transfer, i.e., it assumes that the same organization is responsible for researching, developing, and commercializing the innovation.
El-Ferik and Al-Naser (2021) introduce an approach that includes an overview of the key aspects of trilateral collaboration between the academic research institution, the technology provider, and the end user, which enables analyzing the associated benefits, impacts, and challenges of technological transfer. The approach categorizes various cases and scenarios of trilateral collaboration according to the product’s technological maturity when incorporating a collaboration partner. Furthermore, it allows for a more precise definition of the project’s scope and requirements, facilitating the satisfaction of market needs and the commercialization of the results.
Martin (2023) proposes a Scrum-based agile framework tailored to the academic research environment. This approach divides the project plan into independent work packages to manage deliverables in discrete cycles. The framework is structured into three phases, each defined by the nature of the work and the time required to achieve its objectives. This phased approach mitigates risks and facilitates a staggered execution that adheres to the original plan. This proposal is intended as an alternative tool for managing technology transfer projects, applicable both in technology transfer offices and the biotechnology industry, to meet the stipulated deliverables and release the next tranche of funding. However, while the framework streamlines the fulfillment of objectives, it does not incorporate a gradual maturity assessment at each stage of the product lifecycle, which could limit the comprehensive development of the innovation.
Rutitis and Volkova (2021) present a conceptual framework for creating innovative ICT products in startups. The framework describes the product development pathway through four stages, from problem discovery to business process management. It includes a range of innovation tools tailored to each stage that facilitate rapid business development changes and combine internal and external environments to generate innovation within the company through iterative validation of hypotheses with customers.
Kazakevich and Joiner (2024) propose another framework based on the Minimum Viable Product (MVP) and agile methodologies that is intended to accelerate development and mitigate risks during product creation. The framework enables organizations to adopt agile practices without compromising engineering rigor thanks to comprehensive requirements management and the integration of tests and evaluations that validate customer and end-user needs. Remarkably, this framework includes high-level integration testing in all phases. The two previous frameworks for product creation are designed for business or industrial organizations, leaving aside academia and the emergence of innovations from applied research and knowledge management. Note that since academia is a critical source of emerging innovations, it is essential to consider the influence of research and knowledge on the transfer and innovation process.
Numprasertchai and Igel (2004) present a platform that analyzes the role of knowledge management in improving and sustaining research activities. The platform integrates knowledge management practices supported by internal development, collaborative networks, and information technologies, enabling an efficient and complete basis for starting new product and service development. Remarkably, this platform helps university research groups to optimize their projects in terms of time, cost, quality, and commercialization, ultimately producing high-quality results that can be transformed into significant innovations.
Table 1 compares our approach to the frameworks and models identified in the literature review. In summary, the above-cited works propose frameworks, models, and methodologies for product development using agile methods to address the dynamic nature of innovation. Some works focus exclusively on maturity assessment, technology transfer specifics, or knowledge management in research centers. However, none of these studies have addressed the above-mentioned elements in a joint and interconnected manner. In this regard, our approach supports the advancement and maturity of technological products from academia to industry by integrating a continuous and standardized maturity assessment (TRL-based) and agile product development (Scrum-aided).
3. Framework for Developing and Maturing Information and Communication Technology Products
Figure 1 illustrates the technological and business-centered framework designed for the development and maturation of ICT products, comprising key stakeholders; Stage Components and Transversal Components that academic solutions must have to become market-ready innovations; the components’ relationships; maturity-validation gates to ensure research outcomes address societal needs and appeal to the productive sector; and a workflow to guide systematic progression. The difference between component types lies in their scope: Stage Components are activities tied to specific phases of the product lifecycle, from research to commercialization, whereas Transversal Components are elements intended to achieve continuous monitoring and iterative refinement across the product’s evolution.
The Stage Components support product creation and maturation and operate linearly, with feedback loops defined to guide progression from academic research to successful industrial commercialization. They comprise the Fuzzy Front End (FFE) Stage Component, which covers research, ideation, and concept definition; the Development Stage Component, where technological, business, and intellectual property construction takes place; the Production Stage Component, which focuses on large-scale manufacturing; and the Commercialization Stage Component, which is responsible for market introduction and lifecycle management.
The Transversal Components maintain an ongoing relationship with the Stage Components to oversee the product’s evolution. They comprise the Market Transversal Component, which continuously validates product-market fit and engages stakeholders throughout the innovation journey, and the Maturity Assessment Transversal Component, which systematically measures and manages innovation maturity from concept validation until the innovation is released to the Market. Furthermore, Transversal Components provide feedback to Stage Components by continuously analyzing product attributes, environmental factors, and performance metrics, enabling iterative refinements that respond to evolving market needs. For instance, when a concept lacks sufficient readiness, it must return to the FFE for further research. The Maturity Validation Gates represent checkpoints for the transition between Stages, which is managed by the Maturity Assessment Component. The Academic Institutions and Industry are the principal actors (stakeholders) of the proposed framework. Universities predominantly drive research, ideation, concept design, and early-stage development, whereas Industry assumes leadership in large-scale production and market commercialization. Dynamic relationships between actors and between Components create a coordinated, iterative pathway from invention to industrial application.
3.1. Stage Components
This section introduces the Stage Components: FFE, Development, Production, and Commercialization. These components support product creation and maturation and operate linearly, with feedback loops defined to guide progression from academic research to successful industrial commercialization.
3.1.1. Fuzzy Front End
FFE, the first Stage conducted by Universities or research centers, marks the beginning of the innovation process in academia and represents the initial phase of new product development, characterized by high uncertainty (Gómez Vargas and Trujillo Arbeláez, 2010). This Stage encompasses activities ranging from opportunity identification and problem framing to concept definition (Achiche et al., 2013). In these activities, researchers explore technological and market prospects, generate and screen multiple ideas, and conduct targeted applied research to underpin proposed solutions. Here, they also design conceptual prototypes and perform early validation to assess feasibility and alignment with market needs. The FFE’s result is a refined product concept that represents a solution for the identified problem or opportunity, laying the groundwork for subsequent development and eventual commercialization.
FFE continuously and bidirectionally relates to other Components. The relationship with the Market Transversal Component represents an input through which relevant information is obtained from the commercial and social environment. This relation is essential for identifying opportunities and validating the product idea and concept, which must emerge from the convergence of technology research results and market momentum (Val Jauregi et al., 2006). Conversely, the relationship with the Stage Components reflects the sequence in the innovation construction, so the FFE’s output (the approved concept) becomes the input for the Development Component. Notably, when setbacks occur during the process, FFE may receive input from other Stage Components. The setbacks allow returns to FFE whenever later stages identify gaps or technological shifts, preserving flexibility and aligning with evolving market demands.
3.1.2. Development
The Development Stage Component transforms concepts received from FFE into detailed technical and business specifications and constructs prototypes iteratively until the final product is built. Development is initially driven by Academic Institutions, with industry involvement increasing as prototypes mature and evolve. The development team employs iterative processes guided by agile practices and continuous maturity evaluations (e.g., TRL-based assessments) to advance development that aligns with project requirements and evolving market needs. Although each process must be tailored to the context and the specific needs of each organization or project (Bhuiyan, 2011), standard development methodologies and activities (requirements gathering, system design, iterative testing, and product validation) provide a structured pathway toward delivering a marketable ICT solution.
The Development Stage Component encompasses technology development, business development, and Intellectual Property (IP) management. Technology development includes designing and constructing a novel product. In this path, the development team is responsible for defining the technical specifications, creating plans, determining resources, assigning tasks, and building prototypes (Ulrich et al., 2020). Business development involves conceiving the product’s business model and marketing strategies, as well as carrying out a financial evaluation (Bhuiyan, 2011). As the new product evolves, both sub-stages collaborate to design a product with high market potential (Ernst et al., 2010). In parallel, IP mechanisms need to be defined and managed during development to protect innovations throughout the product lifecycle.
The Development Stage Component continuously and bidirectionally relates to other Components as follows. With the Market Transversal Component, it engages in an iterative cycle that allows prototypes to be assessed based on customer and stakeholder acceptance. The relationship with Maturity Assessment enables the enforcement of maturity metrics to gauge prototype readiness against defined requirements. The Development Stage Component interacts with FFE when it receives the approved concept to be built, and with the Production Component when it delivers the final prototype for transformation into a fully functional product. When a setback occurs, the Development Stage Component may receive information from the Production or Commercialization Stage Components or serve as an input for the FFE Component. These setback interactions facilitate responsive adjustments and minimize development risks, ensuring alignment across the framework.
3.1.3. Production
The Production Stage Component is responsible for transforming the final prototype delivered by the Development Stage Component into a fully functional product that satisfies customer requirements and is ready for market launch. The Production Stage Component encompasses various tasks and phases related to transitioning from small-scale to large-scale production under industry leadership, which can differ based on the specific organization and product type (Huang et al., 2005). For hardware products, some tasks involve acquiring raw materials, designing the product industrially, manufacturing components, assembling parts and final products, and delivering finished products to distribution centers. For software products, the tasks may include debugging and optimizing the software, verifying regulatory compliance and security, managing licensing and dependencies, ensuring interoperability, and packaging and deploying the product.
The Production Stage Component continuously and bidirectionally relates to the Stage and Transversal Components. The relationship with the Transversal Components represents the continuous evaluation during production that is intended to assess the final product’s market performance and the maturity of the production process. The relationship with the Stage Components reflects the evolution of the product. The Development Component delivers the final solution to the Production Stage Component, which, in turn, provides a product ready for market launch to the Commercialization Stage Component. When setbacks occur, the Production Stage Component may receive input from the Commercialization Stage Component or serve as input for the Development and FFE Stage Components. These setbacks permit agile adjustments and alignment across the framework.
3.1.4. Commercialization
The Commercialization Stage Component encompasses all the activities needed to bring a product to market. Industry leads this stage, advancing the product from initial adoption to widespread acceptance (Datta et al., 2013). Several key tasks are essential for effectively introducing a product to the market, encompassing commercialization, marketing, and market monitoring. Pivotal commercialization activities are sales, licensing, pricing, and distribution. Marketing activities include product demonstrations, advertising, and branding (Datta et al., 2013). Continuous market monitoring is performed via post-sales support, customer feedback collection, and competitor analysis (Ernst et al., 2010). This monitoring activity is crucial for gathering feedback that enables the company to adapt to shifting market needs and maintain competitiveness.
The Commercialization Stage Component is continuously and bidirectionally related to various Components, including FFE, Development, Production, the Market, and Maturity Assessment. The relationship with the Transversal Components represents the evaluation of commercial maturity and the continuous assessment of market response. In turn, association with the Stage Components traces the product’s roadmap: the Production Stage Component delivers market-ready output, which serves as input for the Commercialization Stage Component. Adverse market performance or strategic changes can cause setbacks, enabling the Commercialization Stage Component to transmit insights or requirements back to the previous Stage Components. The aforementioned relationships aim to ensure that, via commercialization, the market introduction aligns with technological readiness and stakeholder expectations.
3.2. Transversal Components
This section introduces the Market and Maturity Assessment Transversal Components. These components enable the validation of product–market fit and the systematic measurement and management of innovation maturity, allowing for iterative product refinements to respond to evolving market needs.
3.2.1. Market
The Market Transversal Component ensures alignment between product development and the real-world conditions. Led by a cross-functional team of Academic and Industry actors, this Transversal Component supplies each Stage Component with critical insights on customer needs, competitor activities, and broader market dynamics. Therefore, in the Market Transversal Component, the organizations must carry out activities such as market condition monitoring, trend analysis, and market testing (Fakhreddin and Foroudi, 2022) to acquire, understand, and benefit from knowledge about customers, competitors, and markets (Ashrafi and Ravasan, 2018). Remarkably, organizations must utilize the insights gained from these activities to respond effectively to market changes and align their products with customer needs.
The Market Transversal Component relates to the FFE, Development, Production, and Commercialization Stage Components. The inputs of the Market Transversal Component represent the product at each stage, from concept to market-ready product. The component’s outputs represent the information gathered from the market, which is used to make informed decisions throughout the product development process.
3.2.2. Maturity Assessment
The Maturity Assessment Transversal Component is primarily responsible for continuously assessing product maturity from conception to commercialization, ensuring that product evolution aligns with strategic objectives and responds effectively to market needs. In this path, maturity is measured against predefined requirements using structured models composed of hierarchical levels and thematic dimensions, such as technological, commercial, and manufacturing criteria (Alcácer et al., 2022). This assessment ensures that the product evolves effectively according to established guidelines set by a cross-disciplinary team of academic and industry experts, comprising assessment plans, periodic evaluations, and design actions to advance product maturity. Progression between maturity levels depends on meeting specific benchmarks of each level and is based on development-generated data, including test results and performance metrics (Olechowski et al., 2020).
The Maturity Assessment Transversal Component continuously relates to the Development, Production, and Commercialization Stage Components, enabling the monitoring and guidance of product maturity throughout its evolution. In turn, the relationship with the FFE Stage Component primarily serves as a validation checkpoint, confirming whether a concept meets the criteria to advance to the next stage. By systematically evaluating progress against predefined benchmarks, the Maturity Assessment Transversal Component ensures that the product aligns with strategic objectives, maintains quality standards, and is ready for subsequent phases in the innovation process. Its inputs are primarily the concept or product to be evaluated, along with associated documentation, such as test results and performance metrics. Its outputs are detailed documentation containing the assessment results and the plans for reaching the next maturity level.
4. Method for the Development and Maturity of Information and Communication Technology Products
This section presents the method proposed to develop and mature academic ICT products for commercialization, which is based on agile development principles, continuous maturity evaluations using the TRL framework, and ongoing market engagement. The successful commercialization of products that come from academia requires development strategies that align with current market needs while considering business feasibility and strategic interests (Ashrafi and Ravasan, 2018). Therefore, keeping an ongoing connection with the market is fundamental. Continuous maturity assessment is necessary for informed decision-making, effective risk management, and improved process efficiency and quality, requiring the integration of validation mechanisms and persistent monitoring of the product’s maturity (Schumacher et al., 2016). Agile methodologies thrive in high-uncertainty environments by fostering transparent stakeholder collaboration and rapid adaptation to dynamic market conditions and emerging technologies (Meso and Jain, 2006). Note, first, that orchestrating iterative prototyping, evaluation, and stakeholder feedback cycles ensures academic innovations remain responsive to evolving market demands and adhere to industrial standards. Second, the tools and methods suggested for the method’s realization can be found in Appendix A.
Figure 2 presents the proposed method via a component diagram that details the FFE and Development Stage Components as well as the Maturity Assessment Transversal Component, aiming to enable effective technology transfer from Academia to Industry. In this regard, the FFE Stage Component can be performed by integrating academic research with market opportunity analysis, ensuring that concept generation aligns with real-world needs and requirements. The Development Stage Component can be performed using the Agile Scrum methodology, combined with a continuous market orientation and maturity assessment based on TRL. Scrum is an agile approach that fosters responsiveness to evolving requirements, enhances stakeholder communication, and promotes cross-functional collaboration (Meso and Jain, 2006). The Maturity Assessment Transversal Component can be realized through the TRL framework, which offers a standardized and straightforward method for assessing and communicating innovation maturity (Olechowski et al., 2020). The following sections outline how to conduct FFE, Development, and Maturity Assessment.
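To make the coupling of Scrum iterations with TRL gates concrete, the following Python sketch models the method’s core loop under assumed conventions: the `assess_trl` gate logic, the 0.7/0.4 readiness thresholds, and the evidence dimensions are our own illustrative choices, not prescriptions of the Method.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Increment:
    sprint: int
    # Readiness evidence per assessment dimension, scored 0.0-1.0
    # (dimension names follow the four-dimension plan of Section 4.3).
    evidence: Dict[str, float]

@dataclass
class ProductState:
    trl: int = 1
    history: List[Tuple[int, int, str]] = field(default_factory=list)

def assess_trl(evidence: Dict[str, float], current_trl: int) -> int:
    """Toy gate logic: advance one TRL only when every dimension clears
    an assumed 0.7 readiness score; regress one level (a Maturity
    Setback) when any dimension falls below 0.4."""
    if all(score >= 0.7 for score in evidence.values()):
        return min(current_trl + 1, 9)
    if any(score < 0.4 for score in evidence.values()):
        return max(current_trl - 1, 1)
    return current_trl

def development_loop(product: ProductState,
                     increments: List[Increment]) -> ProductState:
    """Each Sprint yields an increment; the Maturity Assessment component
    re-evaluates the TRL and records advances, holds, or setbacks."""
    for inc in increments:
        new_trl = assess_trl(inc.evidence, product.trl)
        event = ("setback" if new_trl < product.trl
                 else "advance" if new_trl > product.trl else "hold")
        product.history.append((inc.sprint, new_trl, event))
        product.trl = new_trl
    return product

state = development_loop(ProductState(trl=4), [
    Increment(1, {"technical": 0.8, "ip": 0.9, "market": 0.75, "business": 0.7}),
    Increment(2, {"technical": 0.3, "ip": 0.9, "market": 0.8, "business": 0.7}),
])
print(state.trl, state.history)  # 4 [(1, 5, 'advance'), (2, 4, 'setback')]
```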
4.1. The FFE Process
Figure 3 presents the process proposed to perform the FFE Stage Component, including the Idea Generation, Research, and Concept Definition subcomponents. The Idea Generation subcomponent comprises the activities Identify Opportunities and Identify, Select, and Define Ideas. In the first activity, opportunities are identified by exploring current market demands and needs provided by the Market Transversal Component, which are supported by academic data. Outcome-Driven Innovation, Delphi (Okoli and Pawlowski, 2004), and Crowdsourcing (Afuah and Tucci, 2012) are approaches that can help realize this activity. The Outcome-Driven Innovation framework systematically uncovers and prioritizes customer needs. The Delphi method aggregates expert consensus through a series of iterative surveys. Crowdsourcing complements these approaches by collecting information from anonymous crowds to enrich the perspective on the problem. In the second activity, ideas are identified, selected, and defined through a creative process intended to generate a wide array of potential solutions. Based on the Market Transversal Component information and knowledge derived from research results, each idea is evaluated against market viability, technical feasibility, and strategic fit criteria. During this activity, the Theory of Inventive Problem Solving (TRIZ) approach (Aguilar-Zambrano et al., 2012) enables the application of patent-derived patterns to generate inventive solutions, whereas Design Thinking (Liedtka and Ogilvie, 2011) guides empathy-driven exploration, concept prototyping, and iterative testing, incorporating techniques such as brainstorming (Paulus and Kenworthy, 2019). A Political, Economic, Social, Technological, Environmental, and Legal (PESTEL) analysis (Yüksel, 2012) can also be used in these activities to comprehend the diverse factors that could affect the project.
The Research subcomponent comprises two activities: Research Planning and Research Development. In the first activity, a detailed research plan is designed to ensure rigor and replicability by defining objectives, establishing the scope, selecting a research model, specifying data collection and analysis methods, scheduling, and budgeting. During the second activity, researchers generate new knowledge to understand and resolve the problems, opportunities, and ideas identified. Through iterative cycles, proposals are progressively refined with new perspectives, focusing and deepening the research in each iteration until the final idea and all related information are selected. The final output of these activities is a consolidated and validated body of knowledge, along with detailed documentation, on the chosen idea. Research activities benefit from platforms and protocols that standardize evidence gathering and synthesis to ensure transparency, reproducibility, and efficiency. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology (Sánchez-Serrano et al., 2022) provides a structured checklist for planning and reporting systematic literature reviews and analyses. The EPPI-Reviewer tool utilizes machine learning modules to support data extraction, synthesis, and meta-analysis. For collaborative screening, tools like Rayyan and Covidence (Kellermeyer et al., 2018) enable reviewers to independently evaluate studies, resolve discrepancies, and track inclusion criteria within shared, cloud-based environments.
The Concept Definition subcomponent comprises the activities Concept Designing and Concept Validation. During the first activity, the selected idea is elaborated into a detailed, coherent blueprint representing the optimal solution for the identified opportunity. This activity involves refining product features, user scenarios, and technical specifications to create a clear, actionable concept model. In the second activity, the viability of the product concept is evaluated using technical, market (e.g., end-users’ feedback), and business criteria based on insights from the Market Transversal Component. As mentioned earlier, the output of the FFE Stage Component is the validated product concept that interfaces with the Development Stage Component. Remarkably, these activities leverage user-centered and foresight tools, such as Customer Journey Mapping, Concept Boards, and Backcasting, to refine product proposals before starting the Development Stage Component. Customer Journey Mapping (D’Arco et al., 2019) enables outlining user touchpoints, expectations, and pain points, clarifying key functionalities and experience flows. Concept Boards (Munk et al., 2020) facilitate the articulation of the emotional “look and feel” by combining images, typography, and metaphors. Backcasting (Quist and Vergragt, 2006) exercises involve projecting desired future scenarios and then working backward to identify the necessary steps, testing solution feasibility against long-term objectives.
4.2. The Development Process
Figure 4 presents the Scrum-based process proposed to perform the Development Stage Component, including the Backlog, Sprint, and Test and Review subcomponents; we based this process on Scrum since it facilitates the product’s connection with the needs of a constantly evolving market by providing a framework for iterative and incremental development, which is functional under high uncertainty, emphasizes open stakeholder interactions for continuous feedback, and manages requirement changes at any phase (Meso and Jain, 2006). The aforementioned subcomponents are used to build product increments, resulting in product versions that represent the output of the Development Component.
The Backlog subcomponent comprises the Product Backlog Construction and Backlog Refinement activities. The first activity transforms the product requirements (technical specifications, business goals, maturity criteria, and intellectual property considerations) defined in the FFE Stage Component into an initial Backlog that organizes Epics, User Stories, and Tasks into a prioritized list covering features, bug fixes, research needs, quality tests, data preparation, and verification deliverables. (Epics are large work items that encompass multiple User Stories; a User Story is a brief, user-centric description of a feature to be developed within a single Sprint; and Tasks are the minor work units outlining the specific actions required to implement a User Story.) In the second activity, which begins each Sprint, the team continuously refines the Backlog by updating, re-prioritizing, splitting, removing, or adding user stories in response to market inputs and maturity assessments. In this way, a refined product Backlog is obtained for each Sprint, providing a clear and up-to-date direction for product development. Diverse tools can support these activities. For example, the Roadmap (López-Franco, 2013) serves as a backlog construction tool that outlines product evolution over a defined timeframe. ProductPlan enables the visual design and breakdown into epics and user stories. The Priority Diagrams tool facilitates the objective ranking of epics and stories by plotting them based on relative business value and implementation effort. The Jira Product Backlog enables the capture, organization, and prioritization of all user stories and tasks. User Story Mapping (Patton, 2014) provides a structured visualization of development journeys, exposing gaps in functionality.
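As a minimal illustration of how a Backlog can organize Epics, User Stories, and Tasks into a prioritized list, the following Python sketch ranks stories by a value-to-effort ratio; the class names and the ratio heuristic are hypothetical, echoing the priority-diagram idea rather than any specific tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    description: str
    done: bool = False

@dataclass
class UserStory:
    title: str
    business_value: int                   # relative value, e.g., 1-10
    effort: int                           # relative effort, e.g., story points
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Epic:
    title: str
    stories: List[UserStory] = field(default_factory=list)

def prioritize(stories: List[UserStory]) -> List[UserStory]:
    """Rank stories by value-to-effort ratio, echoing the plotting idea
    behind priority diagrams (high value, low effort first)."""
    return sorted(stories,
                  key=lambda s: s.business_value / max(s.effort, 1),
                  reverse=True)

stories = [UserStory("Automatic report export", business_value=8, effort=5),
           UserStory("User login", business_value=5, effort=2)]
print([s.title for s in prioritize(stories)])
# ['User login', 'Automatic report export']
```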
The Sprint subcomponent comprises the Sprint Planning and Sprint Execution activities. In the first activity, a cross-functional team conducts a planning meeting to define a Sprint agreed upon by all the entities involved in the innovation’s development, involving the corresponding Backlog (selecting user stories and tasks based on team capacity, resource availability, and project priorities), effort estimation, and duration (typically ranging from one to four weeks). In the second activity, the aforementioned team collaborates to carry out the planned Backlog, holding daily stand-ups to review progress, address impediments, and adjust priorities as needed. Remarkably, not every increment in a Sprint necessarily translates into a product version. Therefore, it is recommended that a limit be established on the number of Sprints required to release a product version, tailored to each project’s specific needs. The aforementioned activities can be aided by tools such as Azure Boards and the Business Model Canvas. Azure Boards offers Scrum-style boards, burn-down charts, and capacity metrics that allow development teams to monitor workflow phases in real time, identify bottlenecks, and adjust Sprints dynamically. The Business Model Canvas (Murray and Scuotto, 2016) enables the articulation and validation of the value proposition, customer segments, channels, key resources, cost structures, and revenue streams; by embedding the business perspective within Sprints, teams align technical deliverables with broader strategic objectives and ensure that every increment advances functionality and market viability.
The Test and Review subcomponent comprises the Quality Test and Sprint Review activities that close each development cycle. During the Quality Test, an internal quality evaluation of the product increment is conducted to assess whether the Sprint requirements have been met. In this path, a Quality Report is generated based on the internal evaluation tests. For Quality Testing, User Testing sessions (Rubin and Chisnell, 2008) allow observing real users interacting with the product, meaning usability issues can be detected and the effectiveness of features can be validated. Furthermore, tools like qTest and TestRail facilitate test case management and provide unified environments for authoring, organizing, and executing test plans linked directly to user stories, systematically capturing results and metrics. In the second activity, in a meeting, the team presents the product increment to stakeholders (product owners, end users, and partner representatives) to discuss the goals achieved, the obstacles faced, the changes needed, and the subsequent priorities, and to gather actionable feedback. In this meeting, participants assess whether the increment is approved or not; when approved, it becomes a product version that constitutes the official output of the Development Stage Component. During the Sprint Review, collaborative whiteboards like Miro and Mural can facilitate the collection of feedback, voting on ideas, and the planning of visual adjustments. In addition, tools such as Retrium and agile kits help structure discussions and record actionable improvement items, facilitating the establishment of improvement actions for the next development cycle.
4.3. Maturity Assessment Process
Figure 5 presents the TRL-based process proposed to perform the Maturity Assessment Transversal Component, including the Technology Readiness Assessment (TRA) and Technology Maturation Plan (TMP) subcomponents. We based this process on TRL since it provides a standard and straightforward model, widely adopted by the productive sector and research institutions, for monitoring innovation and communicating the state of development with stakeholders. Other complementary models can be considered to enhance this process, such as the Business Readiness Level (BRL), which measures organizational and operational capabilities required for commercialization; the Commercial Readiness Level (CRL) (Khofiyah et al., 2021), which assesses the degree of alignment with market demand and customer adoption potential; and the Manufacturing Readiness Level (MRL) (Office of the Secretary of Defense Manufacturing Technology Program Joint Service/Industry MRL Working Group, 2022), which focuses on the viability and maturity of manufacturing processes. Remarkably, selecting or adapting these models depends on product characteristics, institutional strategy, and regulatory context. Furthermore, by combining them, stakeholders can gain multidimensional insights into business, market, and manufacturing facets and may develop more precise maturation plans.
The TRA subcomponent involves the Assessment Planning and TRA Evaluation activities. During the first activity, the scope and objectives of the maturity evaluation are defined, the required resources are identified, and all relevant documentation (artifacts, test data, and reports) is organized for review. For this activity, the TRA Best Practices Guide (Kimmel et al., 2020), developed by NASA, provides structured guidelines and detailed protocols for applying the TRL framework, including scope definition, evidence requirements, and documentation standards. In the second activity, the testing team compares elements obtained from a maturity plan against the criteria of each TRL level to determine the appropriate level for the current product version. The maturity plan can be defined as proposed by the project to strengthen the agricultural sector in the department of Cauca (Agroprototipos), including validation parameters across the nine TRL levels, structured into four dimensions: technical, intellectual property, market, and business. The technical dimension examines technological viability and performance, the intellectual property dimension assesses the protection of innovations, the market dimension evaluates user engagement, and the business dimension measures financial strength, the business model, and regulatory compliance. The result of the TRA Evaluation activity is a comprehensive TRA Report that details the assigned TRL, supporting evidence, identified gaps, and potential risk areas. A valuable tool for this second activity is the Delphi method (Steinmüller, 2023), which gathers iterative expert judgments during evaluation until interdisciplinary consensus confirms TRL assignments and identifies potential risks. The Stage Gate Review approach (Cooper, 2011) is another helpful tool for formalizing level transitions, involving convening cross-functional reviews that assess performance metrics, cost projections, and risk profiles before approving progression.
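A minimal sketch of the TRA Evaluation logic follows, assuming a hypothetical checklist per TRL level across the four dimensions of an Agroprototipos-style maturity plan; the criteria strings and the `tra_evaluate` helper are illustrative and not part of any official TRL guide.

```python
# Hypothetical checklist: for each TRL level, the evidence items that
# must all be satisfied in each of the four assessment dimensions.
TRL_CRITERIA = {
    4: {"technical": ["lab-validated components"],
        "ip": ["prior-art search completed"],
        "market": ["early-adopter interviews"],
        "business": ["cost model drafted"]},
    5: {"technical": ["relevant-environment validation"],
        "ip": ["protection strategy defined"],
        "market": ["pilot-user engagement"],
        "business": ["business model validated"]},
}

def tra_evaluate(satisfied: dict, current_trl: int) -> int:
    """Assign the highest consecutive TRL whose criteria are fully met,
    starting from the level above the current one."""
    level = current_trl
    while (level + 1) in TRL_CRITERIA:
        criteria = TRL_CRITERIA[level + 1]
        met = all(item in satisfied.get(dim, [])
                  for dim, items in criteria.items()
                  for item in items)
        if not met:
            break
        level += 1
    return level

evidence = {"technical": ["lab-validated components",
                          "relevant-environment validation"],
            "ip": ["prior-art search completed"],
            "market": ["early-adopter interviews"],
            "business": ["cost model drafted"]}
print(tra_evaluate(evidence, current_trl=3))  # 4: TRL 5 criteria not yet met
```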
The TMP subcomponent focuses on constructing and executing a technology maturation plan. The plan identifies existing maturity gaps, establishes target maturity levels, and details the tasks, deliverables, and resources required for progression. It also formulates actions with a timeline and assigns responsibilities to achieve the defined targets. It is noteworthy that the U.S. Department of Energy’s TRA/TMP Process Guide (Office of the Executive Director for Systems Engineering and Architecture, 2023) provides a systematic methodology for planning TMP development tasks, milestones, and resource allocations. The plan execution also integrates TRA and TMP reporting, comprising key performance indicators and technology roadmaps, to ensure standardized communication between academic researchers and industry partners. TRA and TMP reports detail the current TRL positioning, outline the process for advancing the product’s maturity, and identify potential obstacles, associated risks, and regressions in maturity. When regressions or obstacles to progress occur, the reports must detail the situation identified and estimate the effort required to return to the desired maturity level.
As product versions progress, an increase in TRL levels can be expected. However, stagnation and setbacks in maturity can also occur. A Maturity Setback is a regression in maturity from one period to another (Szajnfarber, 2014) that can occur due to new revisions and discoveries or in response to a development strategy (Szajnfarber, 2014); it does not represent a negative aspect of the development but rather a natural and necessary part of the development process of complex systems, obeying innovation’s cyclical and interconnected nature. Managing Maturity Setbacks is essential for innovation to mature correctly.
Figure 6 presents an activity diagram of the process proposed to handle a Setback, involving: (i) the Market Transversal Component or any Stage Component that originates the Setback; by initiating a formal notification of a change from one of these components, potential obstacles in the innovation are identified early, and when a change requirement is rejected, the reasons are communicated to the requesting Transversal or Stage Component so that the change proposal can be refined; (ii) the Development Stage Component, which, once a change requirement is accepted, is responsible for managing the Setback by performing Backlog Refinement, Backlog Construction, and Sprint execution, which could involve adding or re-prioritizing user stories and tasks; and (iii) the Maturity Assessment Component, which identifies the Setback via the above-mentioned TRA and TMP reports. It is essential to highlight, first, that the Development and Maturity Assessment teams must work collaboratively to validate technical feasibility and long-term maturity benefits before advancing any product change. Second, the Maturity Assessment Transversal Component prepares a change report, trying to forecast potential maturity setbacks.
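The following Python sketch mirrors the setback-handling flow of Figure 6 under assumed interfaces: `ChangeRequest`, `assess_feasibility`, and `notify` are hypothetical stand-ins for the joint Development/Maturity Assessment screening and the notifications described above, not a definitive implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List, Tuple

class ChangeStatus(Enum):
    ACCEPTED = "accepted"
    REJECTED = "rejected"

@dataclass
class ChangeRequest:
    origin: str          # e.g., "Market" or "Commercialization"
    description: str

def handle_change(request: ChangeRequest,
                  backlog: List[str],
                  assess_feasibility: Callable[[ChangeRequest], Tuple[bool, str]],
                  notify: Callable[[str, ChangeStatus, str], None]) -> ChangeStatus:
    """Sketch of the Figure 6 flow: the Development and Maturity Assessment
    teams jointly screen the change; accepted changes are folded into the
    Backlog for refinement and Sprint planning, and rejections are reported
    back to the originating component with reasons."""
    feasible, reasons = assess_feasibility(request)
    if not feasible:
        notify(request.origin, ChangeStatus.REJECTED, reasons)
        return ChangeStatus.REJECTED
    backlog.append(request.description)   # Backlog Refinement/Construction
    notify(request.origin, ChangeStatus.ACCEPTED, reasons)
    return ChangeStatus.ACCEPTED
```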
As an illustrative example, let us consider a prototype for detecting coffee diseases, developed in collaboration between a Colombian university, an IT company focused on that sector, and social organizations of producers who had identified the need. After initial field tests and expressions of interest from companies and communities, the prototype reached TRL 5 through Sprints and validations. Market feedback gathered during this phase revealed a demand for a new feature: automatic report generation for integration with agricultural management platforms. The Market Transversal Component formalized the requirement and, after a preliminary feasibility assessment coordinated between the Development and Maturity Assessment teams, its implementation was approved. In the following Sprint, the corresponding user story was incorporated, and over the next two Sprints, the feature was developed and reviewed. However, the Maturity Assessment Component, which had already identified the risk of a Setback, applied the predefined criteria and concluded that the functionality had not yet achieved the reliability required for field use. Consequently, the product underwent a formal regression to TRL 4. Corrective Sprints were then planned to strengthen the function and integrate interoperability. Once the improvements were made and the relevant field tests were passed, the Maturity Assessment Component recorded a recovery to TRL 5. In short, this case illustrates how Setbacks induced by market feedback are necessary, expected, and manageable within the maturity cycle and can be resolved through iteration, joint validation, and evidence-based maturity decisions.
5. Evaluation
The evaluation of the proposed approach for developing and maturing ICT products aims to validate its relevance and practicability in specific contexts and identify opportunities for improvement. In particular, the evaluation seeks to address the following questions: (i) Are the elements included in the tool relevant to the innovation process, particularly within academic environments? (ii) Are the elements of the tool comprehensible to the intended target audience? (iii) Are there components that should be added or removed? To address these questions, we conducted a study based on expert judgment that involved a questionnaire structured into four domains, each comprising items in the form of statements and an open-ended question per item intended to allow experts to provide qualitative comments and suggestions.
Experts assessed each item using a four-point Likert scale, following the methodology proposed by Lynn (1986). The scale was defined as follows: (1) the item is not relevant/adequate/clear, (2) the item is somewhat relevant/adequate/clear, (3) the item is quite relevant/adequate/clear, and (4) the item is highly relevant/adequate/clear. Remarkably, the assessment questionnaire was preliminarily validated by two researchers external to our investigation, who provided feedback on the clarity and format, resulting in refinements to item phrasing and instructions. The structure of the validation questionnaire is summarized in Table 2, while Table 3 details the items by domain.
Following Yusoff’s (2019) recommendations, we assembled a panel of seven professionals to evaluate the content validity of the proposed approach; remarkably, according to Polit and Beck (2006), panels of 5–10 experts are adequate for preliminary studies. We selected the experts through purposive sampling, seeking diversity in profiles and roles to cover the heterogeneity of the phenomenon and ensure a balanced perspective. The panel included representatives from Industry (three experts with a combined experience of over 45 years) and Academia (four experts with a combined experience of over 60 years). The experts were selected based on criteria such as a minimum of five years of relevant professional experience in R&D, technology transfer, or ICT product development; documented participation in transfer projects or publications in the field; and availability to participate in anonymous evaluations and provide qualitative feedback.
After initial contact, the questionnaire was distributed via email and answered using a Google Form. Upon receipt of the responses, the data were checked for completeness, and any minor inconsistencies were clarified through follow-up communication with each expert. The validation process was initially designed to allow for up to two rounds of assessment (initial round + re-assessment round for substantially modified items), with the second round occurring if any item fell below the pre-specified threshold and had to be re-assessed.
5.1. Metrics
Our quantitative evaluation is based on CVI (widely endorsed in the literature for its straightforward computation and interpretive clarity), an inter-rater agreement metric that assesses content validity by having a panel of experts rate the relevance of each item and then computing agreement percentages based on their judgments (Gerst et al., 2025; Polit et al., 2007). For CVI computation, Likert-scale scores were dichotomized: values of 3 and 4 were coded as “relevant” (value = 1), while values of 1 and 2 were coded as “not relevant” (value = 0).
We measured Item-level CVI and Scale-level CVI (Polit and Beck, 2006). Item-level CVI measures the proportion of experts who consider a specific item to be relevant, as given by Equation (1):

$$\text{I-CVI} = \frac{C}{N} \tag{1}$$

where C is the number of experts rating the item as “quite” or “highly” relevant, and N is the total number of experts. An I-CVI ≥ 0.83 is considered acceptable; items with values marginally below this threshold are slated for revision, whereas items with very low values must be removed (Hakim et al., 2025). To complement Item-level CVI, we calculated the modified kappa (κ*) (Polit and Beck, 2006), which measures the portion of agreement that exceeds what would be expected by chance per item. κ* is given by Equation (2):

$$\kappa^{*} = \frac{\text{I-CVI} - P_c}{1 - P_c} \tag{2}$$

where P_c is the probability of chance agreement, given by Equation (3):

$$P_c = \binom{N}{C}\left(\frac{1}{2}\right)^{N} \tag{3}$$

A value of κ* > 0.74 is interpreted as excellent agreement.
The Scale-level CVI assesses the content validity of the tool as a whole, reflecting the overall representativeness of its items (Yusoff, 2019). Scale-level CVI/Ave, the arithmetic mean of all Item-level CVI values, is given by Equation (4):

$$\text{S-CVI/Ave} = \frac{\sum_{i=1}^{A} \text{I-CVI}_i}{A} \tag{4}$$

where A is the number of items. A value of Scale-level CVI/Ave ≥ 0.90 is considered acceptable. Scale-level CVI/UA, the proportion of items that all experts unanimously (Universal Agreement) rate as relevant, is given by Equation (5):

$$\text{S-CVI/UA} = \frac{U}{A} \tag{5}$$

where U is the count of items with unanimous expert agreement. A Scale-level CVI/UA ≥ 0.80 is regarded as excellent, although achieving this level becomes more challenging as the expert panel grows (Polit and Beck, 2006).
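As a worked illustration of Equations (1)–(5), the following Python snippet computes the indices from raw Likert ratings; the function names are our own, and the sample item reproduces the 0.86/0.85 values reported in Section 5.2.

```python
from math import comb

def dichotomize(score: int) -> int:
    """Likert 3-4 -> relevant (1); 1-2 -> not relevant (0)."""
    return 1 if score >= 3 else 0

def item_cvi(ratings) -> float:
    """Equation (1): proportion of experts rating the item as relevant."""
    return sum(dichotomize(r) for r in ratings) / len(ratings)

def modified_kappa(ratings) -> float:
    """Equations (2)-(3): agreement beyond chance for a single item."""
    n = len(ratings)
    c = sum(dichotomize(r) for r in ratings)
    p_chance = comb(n, c) * 0.5 ** n
    return (item_cvi(ratings) - p_chance) / (1 - p_chance)

def scale_cvi(all_ratings):
    """Equations (4)-(5): S-CVI/Ave and S-CVI/UA over all items."""
    i_cvis = [item_cvi(r) for r in all_ratings]
    ave = sum(i_cvis) / len(i_cvis)
    ua = sum(1 for v in i_cvis if v == 1.0) / len(i_cvis)
    return ave, ua

# One item rated by 7 experts with a single dissenting score of 2:
item = [4, 4, 3, 4, 2, 4, 3]
print(round(item_cvi(item), 2))        # 0.86, above the 0.83 threshold
print(round(modified_kappa(item), 2))  # 0.85
```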
We used Content Analysis (Bardin, 1991; Vaismoradi et al., 2013) to study the open-ended questions, including the following stages: (1) pre-analysis of expert comments to gain an initial overview; (2) coding of the observations to synthesize and identify relevant meanings; (3) thematic categorization to group related insights; (4) interpretation of the findings; and (5) triangulation with quantitative results. These stages allowed for a more comprehensive understanding of expert evaluations and supported more informed decision-making on the relevance, clarity, and improvement of each item, enhancing the depth and reliability of the analysis.
Next, Section 5.2 presents the results of item- and scale-level validation for the various domains assessed, as well as the qualitative findings from the experts’ observations. The results are interpreted to identify the required thresholds, strengths, areas for improvement, and limitations of our approach.
5.2. Results and Analysis
The panel of seven experts, consisting of both academics and industry professionals, evaluated the instrument using a structured questionnaire containing 28 closed-ended items for CVI computation. All responses were reviewed for completeness and consistency before analysis.
Table 4 summarizes the results of the Item-level CVI and κ* for all items, and Figure 7 shows them by domain. The results revealed that the Scale-level CVI/UA was 79% (unanimity in 22/28 items, narrowly below the desired threshold of 0.80). Complete agreement was observed in Domains 1 and 4, confirming their relevance for the instrument, while Domain 2 exhibited a perfect score in 6 of 10 items (60%), evidencing divergent expert views regarding this domain. The items without unanimity attained an Item-level CVI of 0.86, exceeding the minimum threshold of 0.83, indicating that only one of the seven experts rated those items as insufficiently relevant and confirming high item validity. In Domain 3, two items scored 0.86, while the remainder scored 1.00, indicating that most of them were unanimously judged as relevant. Overall, items with unanimous agreement yielded κ* = 1.00, and the remaining items produced κ* = 0.85. At the domain level, the minimum kappa was obtained by Domain 2 and Domain 3, both with 0.85. The acceptability threshold is κ* ≥ 0.74; therefore, all items exceed the recommended cut-off, indicating substantial to excellent agreement. These kappa results corroborate the Item-level CVI findings and strengthen the conclusion that the instrument items were judged as relevant.
Figure 8 compares the scale-level index results for each domain. The Scale-level CVI/Ave was 0.97, exceeding the required threshold of 0.90. Domains 1 and 4 attained perfect Scale-level CVI/UA and Scale-level CVI/Ave equal to 1, underscoring the solidity of the approach’s structure, presentation, and content. In contrast, Domain 2 (Framework Evaluation) and Domain 3 (Method Evaluation) attained low Scale-level CVI/UA values of 0.60 and 0.75, respectively, despite acceptable Scale-level CVI/Ave values of 0.94 and 0.96 (still above the defined threshold), confirming the relevance of the domain and indicating that some aspects of the framework could benefit from refinement. In each case, two evaluators considered different items as somewhat relevant, which reduced the percentage of universal agreement; the discrepancies appear to be primarily attributable to individual interpretation, stemming from limited clarity, insufficient context, or particular perspectives, rather than from substantive deficiencies in the items themselves. These findings suggest that refining item instructions and integrating illustrative examples could bolster methodological precision and foster stronger consensus.
To assess the robustness of the content validity results, a sensitivity analysis was performed by recalculating the indices after excluding (separately) a very lenient evaluator and a relatively strict evaluator.
Table 5 shows the distribution of the ratings assigned by the seven experts before dichotomizing the data, considering only those items that received at least one rating below 3. It can be seen that the lowest ratings are concentrated among experts 3 and 7, while item Q2 in Domain 2 has the lowest ratings overall. From these results,
Table 6 summarizes the variations observed in the Item-level CVI, Scale-level CVI/Ave, and Scale-level CVI/UA. With all seven evaluators, the Scale-level CVI/Ave was 0.97 and the Scale-level CVI/UA was 0.79. Excluding the most lenient evaluator (E2) produced minimal changes, reflected in a slight reduction of the minimum Item-level CVI to 0.83, which still met the established threshold. In contrast, excluding the strictest evaluator (E3) increased the overall values, reaching a Scale-level CVI/Ave of 0.98 and a Scale-level CVI/UA of 0.89. These results demonstrate the stability of the validity estimates under exclusion scenarios and confirm the instrument’s content validity.
We also conducted a content analysis of the open-ended questionnaire responses. In the pre-analysis stage, the comments were tabulated and organized by expert and item. During the coding and thematic categorization stage, each observation was synthesized and classified into three main categories: Format, Clarity, and Depth. In the interpretation stage, comments were grouped by item and category and the experts’ agreement on each observation was quantified. Based on this analysis, specific decisions were made regarding each item. Observations that were deemed pertinent and actionable were incorporated to improve the approach as a whole, while those considered beyond the scope of our investigation or arising from misunderstandings or lack of contextual knowledge on the part of the expert were not adopted. In the triangulation stage, the qualitative insights were integrated with the Item-level CVI scores, enriching the evaluation and offering a more contextualized understanding.
Table 7 summarizes the analysis for selected items, especially those that did not achieve an Item-level CVI of 1.00 or that received significant qualitative feedback; of note, a second formal round was not conducted, as all items exceeded the pre-specified validity threshold (≥0.83). Analysis of the open-ended feedback provided deeper insight into the sources of disagreement and guided targeted enhancements; notably, we maintained communication with some experts who had expressed reservations or provided qualitative comments, both to clarify doubts about the wording of certain items and to validate minor changes that improved presentation and clarity. Suggestions included refining the framework diagram to depict the relationships between steps more intuitively, and adding concrete examples and explanatory annotations to clarify the essential concepts of the method. The experts also proposed stylistic improvements (simplified writing, consistent terminology, and optimized formatting) to increase readability and engagement. Each recommendation was collaboratively reviewed, and those deemed appropriate were incorporated into the approach, thereby aligning conceptual content and presentation with expert expectations.
These results reflect the approach’s phased structure, iterative logic, and alignment with TRL standards, which together give researchers a practical tool for identifying maturity gaps and refocusing their efforts toward commercial and social value. Consequently, the proposed approach not only supports the transformation of research results into viable, transferable products but also contributes to fostering a more structured and innovation-driven culture within universities.
5.3. Final Remarks
The previous literature on technological product development and knowledge transfer has addressed specific dimensions in isolation, such as agile project management, technology maturity assessment, or the integration of design methodologies, including Design Thinking and Concurrent Engineering. However, these approaches tend to emphasize a single aspect of the innovation process—whether that be early-stage validation, business model generation, or the transition to manufacturing—without providing a coherent framework for the progressive maturation of products within academic environments. For instance, MVP-based models and startup-oriented frameworks have demonstrated effectiveness in dynamic business contexts; however, they fall short in university settings, where timelines, resources, and research logics differ significantly.
Regarding maturity evaluation, works that rely exclusively on scales such as TRL or MRL offer rigorous measurement but lack adaptive mechanisms for iterative feedback and continuous adjustment. This limitation restricts their applicability in environments where setbacks and reconfigurations are recurrent. Similarly, university–industry collaboration frameworks highlight governance and stakeholder relations but overlook the necessity of integrating maturity measurement as a central axis of the innovation process.
In contrast, the approach presented in this paper provides distinctive contributions. First, it introduces a holistic framework that connects traditionally fragmented phases—from ideation to commercialization—through transversal components of market engagement and maturity assessment, ensuring coherence and continuity. Second, it proposes a hybrid method that combines Scrum with TRL, preserving the flexibility of agile cycles while maintaining traceability across technological, business, and commercial maturity levels. Third, it explicitly incorporates the management of maturity setbacks, which represents a novel contribution compared to the existing literature, as most prior models assume a linear and unidirectional progression of innovation and fail to recognize the iterative and non-linear nature of academic product development. In doing so, the proposed approach overcomes the limitations of fragmented frameworks and offers a structured pathway for strengthening technology transfer from universities to society.
6. Conclusions and Future Work
In this paper, we introduced an integrated approach, comprising a Framework and a Method, to guide the development and progressive maturation of ICT products in academia and facilitate their transfer to the productive and social sectors. The Framework describes the phases of innovation, from ideation to commercialization, detailing the relationships between phases, routine tasks, and the roles of key stakeholders. The Method focuses on the stages specific to academia (FFE and Development), combining Scrum principles with continuous maturity assessment based on TRLs. In this way, it offers a roadmap that aligns research progress with market expectations, driving the conversion of prototypes into transferable solutions. A content validity evaluation was performed with a panel of innovation experts. The Scale-level CVI/Ave of 0.97 substantially exceeds the 0.90 benchmark, and 22 of the 28 items achieved a perfect Item-level CVI of 1.00, reflecting unanimous agreement on their relevance. The remaining six items each attained an Item-level CVI of 0.86, still above the 0.83 threshold, indicating that only one of the seven experts rated them as insufficiently relevant. However, the overall Scale-level CVI/UA of 0.79 falls just below the 0.80 criterion, suggesting that certain concepts might benefit from further clarification to secure complete expert agreement. In turn, the qualitative validation confirmed the relevance of the Framework’s components and the clarity of the Method. Overall, the results are promising and corroborate the relevance of maturing products for enabling technology transfer from academia to industry, the coherence of the approach’s organization, and the importance of its components. Furthermore, they suggest adjustments to the presentation to enhance practicability in different academic environments.
In future work, we intend to pilot the approach in various innovation projects across different universities to assess its practical performance under operational conditions. These pilots will enable the specification of aspects such as stakeholder engagement, communication strategies, and support mechanisms for technology transfer. Also, we plan to develop a digital platform to support and encourage the use of our approach; such a platform would provide researchers and institutions with a scalable tool for managing innovation and monitoring product maturity in dynamic academic ecosystems.
Author Contributions
Conceptualization, A.S.-H. and O.M.C.R.; Methodology, A.S.-H. and O.M.C.R.; Validation, W.R.M.; Formal analysis, A.S.-H. and O.M.C.R.; Investigation, A.S.-H. and O.M.C.R.; Resources, O.M.C.R. and W.R.M.; Writing—original draft, A.S.-H.; Writing—review & editing, A.S.-H., O.M.C.R. and W.R.M.; Visualization, A.S.-H.; Supervision, O.M.C.R. and W.R.M.; Project administration, O.M.C.R.; Funding acquisition, O.M.C.R. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Universidad del Cauca through the Master’s scholarship of the student Angélica Serna-Herrera.
Institutional Review Board Statement
Ethical review and approval were not required for this study, as it consisted only of anonymized expert evaluations, with no personal or sensitive data collected and no interventions performed. The authors confirm that participants were informed about the study’s purpose and the use of their data, and provided verbal consent. The study complied with national data protection regulations (Law 1581 of 2012 and Law 1266 of 2008, Colombia) and was conducted in accordance with the principles of the Declaration of Helsinki.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
Abbreviation | Definition
---|---
ICT | Information and Communication Technology
IP | Intellectual Property
FFE | Fuzzy Front End
MVP | Minimum Viable Product
TRL | Technology Readiness Level
TMP | Technology Maturation Plan
TRA | Technology Readiness Assessment
CVI | Content Validity Index
Appendix A. Tools and Platforms
Table A1.
Tools, web platforms, and documental resources mentioned in the manuscript. Each entry includes vendor, typical license/format, and purpose; URL and access date are provided for traceability.
Tool/Vendor (Location) | Version/License | Purpose/Notes (URL, accessed)
---|---|---
ProductPlan (ProductPlan, Inc.; Denver, CO, USA) | Trial: Web subscription (SaaS) | Road-mapping and product-planning boards to design and communicate product roadmaps and timelines. URL: https://www.productplan.com. Accessed: 18 September 2025. |
Jira (Atlassian; Sydney, Australia) | Trial: Cloud/Server (subscription) | Issue tracking, sprint planning, and backlog management (Jira Software). URL: https://www.atlassian.com/software/jira. Accessed: 18 September 2025. |
Azure Boards (Microsoft; Redmond, WA, USA) | Trial: Cloud (part of Azure DevOps; subscription/free tiers) | Work-item tracking and project/board management integrated with Azure DevOps pipelines. URL: https://azure.microsoft.com/services/devops/boards/. Accessed: 18 September 2025. |
TestRail (Gurock Software GmbH; Leverkusen, Germany/U.S. office: Austin, TX, USA) | Trial: Cloud/On-prem | Test-case management and test execution reporting for QA teams. URL: https://www.gurock.com/testrail/. Accessed: 18 September 2025. |
qTest (Tricentis; Vienna, Austria/US offices) | Trial: Cloud/Enterprise | Test management platform to author, organize, and execute test plans and track results. URL: https://www.tricentis.com/products/qtest/. Accessed: 18 September 2025. |
Miro (Miro Ltd.; Amsterdam, The Netherlands/San Francisco, CA, USA) | Freemium. Typical license: Web app | Collaborative online whiteboard for remote co-design, ideation, and sprint planning. URL: https://miro.com. Accessed: 18 September 2025. |
Mural (MURAL; San Francisco, CA, USA) | Freemium: Web app | Collaborative digital workspace/whiteboard for workshops, brainstorming, and collecting stakeholder feedback. URL: https://www.mural.co. Accessed: 18 September 2025. |
Retrium (Retrium, Inc.; Silver Spring, MD, USA) | Trial: Web app/Subscription | Facilitation platform for retrospectives and structured team improvement workshops. URL: https://www.retrium.com. Accessed: 18 September 2025. |
What is Outcome-Driven Innovation? (Anthony W. Ulwick; Strategyn, LLC) | White paper (non-peer-reviewed) | White paper introducing the Outcome-Driven Innovation framework for defining and prioritizing customer outcomes; often referenced in practitioner literature on product strategy. URL: https://innovationroundtable.com/summit/wp-content/uploads/2014/05/Strategyn_what_is_Outcome_Driven_Innovation.pdf. Accessed: 18 September 2025. |
EPPI-Reviewer (EPPI Centre, UCL Social Research Institute; London, UK) | Trial: Version 6.x (web app) | Advanced software for systematic reviews, maps, and evidence synthesis; web application used to manage screening, coding, and extraction in systematic reviews. URL: https://eppi.ioe.ac.uk/cms/About/AboutEPPIReviewer/tabid/2967/Default.aspx. Accessed: 18 September 2025. |
KTH Innovation Readiness Level (KTH Royal Institute of Technology) | Online resource | KTH’s Innovation Readiness Level (IRL) web resource describes a maturity/readiness approach and has an interactive tour. URL: https://kthinnovationreadinesslevel.com/take-a-tour/. Accessed: 18 September 2025. |
References
- Achiche, S., Appio, F. P., McAloone, T. C., & Minin, A. D. (2013). Fuzzy decision support for tools selection in the core front end activities of new product development. Research in Engineering Design, 24(1), 1–18. [Google Scholar] [CrossRef]
- Afuah, A., & Tucci, C. L. (2012). Crowdsourcing as a solution to distant search. Academy of Management Review, 37(3), 355–375. [Google Scholar] [CrossRef]
- Aguilar-Zambrano, J. A., Valencia, M. V., Martínez, M. F., Quiceno, C. A., & Sandoval, C. M. (2012). Uso de la Teoría de Solución de Problemas Inventivos (TRIZ) en el análisis de productos de apoyo a la movilidad para detectar oportunidades de innovación. Ingeniería y Competitividad, 14(1), 137–151. [Google Scholar] [CrossRef]
- Alcácer, V., Rodrigues, J., Carvalho, H., & Cruz-Machado, V. (2022). Industry 4.0 maturity follow-up inside an internal value chain: A case study. The International Journal of Advanced Manufacturing Technology, 119(7), 5035–5046. [Google Scholar] [CrossRef]
- Arenas, J. J., & González, D. (2018). Technology transfer models and elements in the university-industry collaboration. Administrative Sciences, 8(2), 19. [Google Scholar] [CrossRef]
- Ashrafi, A., & Ravasan, A. Z. (2018). How market orientation contributes to innovation and market performance: The roles of business analytics and flexible IT infrastructure. Journal of Business & Industrial Marketing, 33(7), 970–983. [Google Scholar] [CrossRef]
- Bardin, L. (1991). Análisis de contenido (1st ed.). Ediciones Akal. [Google Scholar]
- Bazan, C. (2019). “From lab bench to store shelves:” A translational research and development framework for linking university science and engineering research to commercial outcomes. Journal of Engineering and Technology Management—JET-M, 53, 1–18. [Google Scholar] [CrossRef]
- Bhuiyan, N. (2011). A Framework for successful new product development. Journal of Industrial Engineering and Management, 4(4), 746–770. [Google Scholar] [CrossRef]
- Bruno, I., Lobo, G., Covino, B. V., Donarelli, A., Marchetti, V., Panni, A. S., & Molinari, F. (2020, September 23–25). Technology readiness revisited: A proposal for extending the scope of impact assessment of European public services [Conference paper]. 13th International Conference on Theory and Practice of Electronic Governance (pp. 369–380), Athens, Greece. Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85095978980&doi=10.1145%2f3428502.3428552&partnerID=40&md5=06bf8c0633371b6396fd3ec6f4d82279 (accessed on 30 August 2025). [CrossRef]
- Cooper, R. G. (2011). Winning at new products: Creating value through innovation (4th ed.). Basic Books. [Google Scholar]
- D’Arco, M., Lo Presti, L., Marino, V., & Resciniti, R. (2019). Embracing AI and Big Data in customer journey mapping: From literature review to a theoretical framework. Innovative Marketing, 15(4), 54–66. [Google Scholar] [CrossRef]
- Datta, A., Reed, R., & Jessup, L. (2013). Commercialization of innovations: An overarching framework and research agenda. American Journal of Business, 28(2), 147–191. [Google Scholar] [CrossRef]
- de Carvalho, R. A., da Hora, H., & Fernandes, R. (2021). A process for designing innovative mechatronic products. International Journal of Production Economics, 231, 107887. [Google Scholar] [CrossRef]
- De Fontaines, I., Lefeuve, D., Prudhomme, G., & Tollenaere, M. (2013, August 19–22). New key success factors for engineering technology transfer between research and development: Technology maturity and proof of usage [Conference paper]. DS 75-3: 19th International Conference on Engineering Design (ICED13) Design For Harmonies, Vol. 3: Design Organisation and Management (pp. 61–70), Seoul, Republic of Korea. Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84897634293&partnerID=40&md5=e06b62d90c0181c7601f288af925ab58 (accessed on 30 August 2025).
- Du, G., Huang, Q., & Sun, L. (2012). Research of synergy product maturity based on maturity cycle. In Materials processing technology ii (Vol. 538, pp. 2813–2821). Trans Tech Publications Ltd. [Google Scholar] [CrossRef]
- El-Ferik, S., & Al-Naser, M. (2021). University industry collaboration: A promising trilateral co-innovation approach. IEEE Access, 9, 112761–112769. [Google Scholar] [CrossRef]
- Ernst, H., Hoyer, W. D., & Rübsaamen, C. (2010). Sales, marketing, and research-and-development cooperation across new product development stages: Implications for success. Journal of Marketing, 74(5), 80–92. [Google Scholar] [CrossRef]
- Fakhreddin, F., & Foroudi, P. (2022). The impact of market orientation on new product performance through product launch quality: A resource-based view. Cogent Business & Management, 9(1), 2108220. [Google Scholar] [CrossRef]
- Gerst, M. D., Dillard, M., & Loerzel, J. (2025). Methodological recommendations for content validation of community resilience indicators. Natural Hazards Review, 26(2), 04025010. [Google Scholar] [CrossRef]
- Gómez Vargas, V., & Trujillo Arbeláez, C. (2010). Co-branding como herramienta en la etapa de fuzzy front end (ffe) para la identificación de oportunidades y generación de un concepto de producto entre dos empresas antioqueñas [Master’s Thesis, Universidad Eafit, Medellin, Colombia]. [Google Scholar]
- Hakim, N. A. M. L., Pairan, M. R., & Zakaria, M. I. (2025). Step-by-step guide to calculating content validity index (CVI) for single constructs using excel. International Journal of Research and Innovation in Social Science, 9(3), 1717–1726. [Google Scholar] [CrossRef]
- Huang, G. Q., Zhang, X., & Liang, L. (2005). Towards integrated optimal configuration of platform products, manufacturing processes, and supply chains. Journal of Operations Management, 23(3), 267–290. [Google Scholar] [CrossRef]
- Kazakevich, B., & Joiner, K. (2024). Agile approach to accelerate product development using an MVP framework. Australian Journal of Multi-Disciplinary Engineering, 20(1), 1–12. [Google Scholar] [CrossRef]
- Kellermeyer, L., Harnke, B., & Knight, S. (2018). Covidence and Rayyan. Journal of the Medical Library Association, 106(4), 580–583. [Google Scholar] [CrossRef]
- Khofiyah, N. A., Sutopo, W., Hisjam, M., & Ma’Aram, A. (2021, March 7–11). A framework of performance efficiency measurement in technology transfer office (TTO) for acceleration of commercialization technology [Conference paper]. 11th Annual International Conference on Industrial Engineering and Operations Management (IEOM 2021) (pp. 2137–2148), Singapore. Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85114217005&partnerID=40&md5=792197e21e0312996b3c322eb86b15a7 (accessed on 30 August 2025).
- Kimmel, W. M., Beauchamp, P. M., Frerking, M. A., Kline, T. R., Vassigh, K. K., Willard, D. E., Johnson, M. A., & Trenkle, T. G. (2020, June). Technology readiness assessment best practices guide (Special Publication No. SP-20205003605). National Aeronautics and Space Administration, Office of the Chief Technologist. Available online: https://ntrs.nasa.gov/citations/20205003605 (accessed on 30 August 2025).
- Landi, W., & Wei, Z. (2018, November 13–16). Basic modes and reform strategies of university-industry collaboration for made in China 2025 [Conference paper]. 2017 7th World Engineering Education Forum (WEEF) (pp. 140–144), Kuala Lumpur, Malaysia. Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85055423589&doi=10.1109%2fWEEF.2017.8467076&partnerID=40&md5=faa5384b25aa3119118f0c67c5ca2e1d (accessed on 30 August 2025). [CrossRef]
- Lavoie, J. R., & Daim, T. U. (2017, July 9–13). Technology readiness levels improving R&D management: A grounded theory analysis [Conference paper]. Portland International Conference on Management of Engineering and Technology (Vol. 2017-July, pp. 1–9), Portland, OR, USA. Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85043455404&doi=10.23919%2fPICMET.2017.8125383&partnerID=40&md5=6b03ffb3110e094eb2b2437409d16c0d (accessed on 30 August 2025). [CrossRef]
- Liedtka, J., & Ogilvie, T. (2011). Designing for growth: A design thinking tool kit for managers (1st ed.). Illustrated. Columbia University Press. [Google Scholar]
- López-Franco, A. (2013). Roadmaps o ruta de itinerario como herramienta de planeación tecnológica [Master’s Thesis, ITESO, Tlaquepaque, Mexico]. [Google Scholar]
- Lynn, M. R. (1986). Determination and quantification of content validity. Nursing research, 35(6), 382–385. [Google Scholar]
- Martin, A. (2023). Introduction to an agile framework for the management of technology transfer projects. Procedia Computer Science, 219, 1963–1968. [Google Scholar] [CrossRef]
- Meso, P., & Jain, R. (2006). Agile software development: Adaptive systems principles and best practices. Information Systems Management, 23(3), 19–30. [Google Scholar] [CrossRef]
- Ministerio de Ciencia, Tecnología e Innovación de Colombia. (2022). Guía para la transferencia de tecnología (Guía No. 271022). Ministerio de Ciencia, Tecnología e Innovación. Available online: https://minciencias.gov.co/sites/default/files/271022_guia_para_la_transferencia_de_tecnologia.pdf (accessed on 30 August 2025).
- Munk, J. E., Sorensen, J. S., & Laursen, L. N. (2020, September 10–11). Visual boards: Mood board, style board or concept board? [Conference paper]. DS 104: 22nd International Conference on Engineering and Product Design Education (E&PDE 2020), Herning, Denmark. Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85095446385&partnerID=40&md5=28993b01c60a85f5aeb24cb7aaf76e17 (accessed on 30 August 2025).
- Murray, A., & Scuotto, V. (2016). The business model canvas. Symphonya. Emerging Issues in Management, 3(1), 94–109. [Google Scholar] [CrossRef]
- Numprasertchai, S., & Igel, B. (2004). Managing knowledge in new product and service development: A new management approach for innovative research organisations. International Journal of Technology Management, 28(7–8), 667–684. [Google Scholar] [CrossRef]
- Office of the Executive Director for Systems Engineering and Architecture. (2023). Technology readiness assessment guidebook. Office of the Executive Director for Systems Engineering and Architecture. [Google Scholar]
- Office of the Secretary of Defense Manufacturing Technology Program Joint Service/Industry MRL Working Group. (2022). Manufacturing readiness level (MRL) deskbook. Office of the Secretary of Defense Manufacturing Technology Program Joint Service/Industry MRL Working Group. Available online: https://www.dodmrl.com/MRL_Deskbook_2022__20221001_Final.pdf (accessed on 22 September 2025).
- Okoli, C., & Pawlowski, S. D. (2004). The Delphi method as a research tool: An example, design considerations and applications. Information & Management, 42(1), 15–29. [Google Scholar] [CrossRef]
- Olechowski, A. L., Eppinger, S. D., Joglekar, N., & Tomaschek, K. (2020). Technology readiness levels: Shortcomings and improvement opportunities. Systems Engineering, 23(4), 395–408. [Google Scholar] [CrossRef]
- Patton, J. (2014). User story mapping: Discover the whole story, build the right product. O’Reilly Media. [Google Scholar]
- Paulus, P. B., & Kenworthy, J. B. (2019). Effective brainstorming. In P. B. Paulus, & B. A. Nijstad (Eds.), The oxford handbook of group creativity and innovation (pp. 287–306). Oxford University Press. [Google Scholar] [CrossRef]
- Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what’s being reported? Critique and recommendations. Research in Nursing & Health, 29(5), 489–497. [Google Scholar] [CrossRef]
- Polit, D. F., Beck, C. T., & Owen, S. V. (2007). Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Research in Nursing & Health, 30(4), 459–467. [Google Scholar] [CrossRef]
- Quist, J., & Vergragt, P. (2006). Past and future of backcasting: The shift to stakeholder participation and a proposal for a methodological framework. Futures, 38(9), 1027–1045. [Google Scholar] [CrossRef]
- Rubin, J., & Chisnell, D. (2008). Handbook of usability testing: How to plan, design, and conduct effective tests (2nd ed.). Wiley. Available online: https://www.wiley.com/en-us/Handbook+of+Usability+Testing%3A+How+to+Plan%2C+Design%2C+and+Conduct+Effective+Tests%2C+2nd+Edition-p-9780470185483 (accessed on 30 August 2025).
- Rueda, I., Acosta, B., & Cueva, F. (2020). Las universidades y sus prácticas de vinculación con la sociedad. Educação & Sociedade, 41, e218154. [Google Scholar] [CrossRef]
- Rutitis, D., & Volkova, T. (2021, January). Model for development of innovative ICT products at high-growth potential startups. In M. H. Bilgin, H. Danis, E. Demir, & C. D. García-Gómez (Eds.), Eurasian business and economics perspectives (pp. 229–241). Springer. Available online: https://ideas.repec.org/h/spr/eurchp/978-3-030-77438-7_14.html (accessed on 30 August 2025). [CrossRef]
- Sánchez-Serrano, S., Pedraza-Navarro, I., & Donoso-González, M. (2022). ¿Cómo hacer una revisión sistemática siguiendo el protocolo PRISMA? Usos y estrategias fundamentales para su aplicación en el ámbito educativo a través de un caso práctico. Bordón. Revista de Pedagogía, 74(3), 51–66. [Google Scholar] [CrossRef]
- Schumacher, A., Erol, S., & Sihn, W. (2016). A maturity model for assessing Industry 4.0 readiness and maturity of manufacturing enterprises. Procedia CIRP, 52, 161–166. [Google Scholar] [CrossRef]
- Steinmüller, K. (2023). The “Classic” Delphi. Practical challenges from the perspective of foresight. In M. Niederberger, & O. Renn (Eds.), Delphi methods in the social and health sciences: Concepts, applications and case studies (pp. 29–49). Springer Fachmedien Wiesbaden. [Google Scholar] [CrossRef]
- Szajnfarber, Z. (2014). Managing innovation in architecturally hierarchical systems: Three switchback mechanisms that impact practice. IEEE Transactions on Engineering Management, 61(4), 633–645. [Google Scholar] [CrossRef]
- Ulrich, K., Eppinger, S., & Yang, M. C. (2020). Product design and development (7th ed.). McGraw Hill Education. [Google Scholar]
- Vaismoradi, M., Turunen, H., & Bondas, T. (2013). Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing and Health Sciences, 15(3), 398–405. [Google Scholar] [CrossRef]
- Val Jauregi, E., Justel Lozano, D., & Cornes Larrauri, U. (2006). Fuzzy Front End de la innovación y el pensamiento divergente y convergente. In X congreso internacional de ingeniería de proyectos: Valencia, 13–15 septiembre 2006. actas (pp. 941–952). edUPV, Editorial Universitat Politècnica de València. [Google Scholar]
- Varl, M., Duhovnik, J., & Tavčar, J. (2022). Customized product development supported by integrated information. Journal of Industrial Information Integration, 25, 100248. [Google Scholar] [CrossRef]
- Yuan, C., Li, Y., Vlas, C. O., & Peng, M. W. (2016). Dynamic capabilities, subnational environment, and university technology transfer. Strategic Organization, 16(1), 35–60. [Google Scholar] [CrossRef]
- Yüksel, I. (2012). Developing a multi-criteria decision making model for PESTEL Analysis. International Journal of Business and Management, 7, 52. [Google Scholar] [CrossRef]
- Yusoff, M. S. B. (2019). ABC of content validation and content validity index calculation. Education in Medicine Journal, 11(2), 49–54. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).