Systematic Review

A Systematic Review of Cyber Range Taxonomies: Trends, Gaps, and a Proposed Taxonomy

DTU Compute, Technical University of Denmark, 2800 Kongens Lyngby, Denmark
* Authors to whom correspondence should be addressed.
Future Internet 2025, 17(6), 259; https://doi.org/10.3390/fi17060259
Submission received: 5 May 2025 / Revised: 4 June 2025 / Accepted: 9 June 2025 / Published: 12 June 2025
(This article belongs to the Special Issue Security of Computer System and Network)

Abstract

Cyber ranges have become essential platforms for realistic cybersecurity training, research, and development. Existing taxonomies often describe the functional aspects of cyber ranges—scenario design, team configurations, and evaluation metrics—while focusing less on the underlying technologies that enable modern training. In this paper, we highlight current trends and persistent gaps in the existing literature and propose a taxonomy that decouples functional capabilities from the enabling infrastructure by adding a dedicated Technology dimension. We derived and refined this taxonomy through an iterative literature mapping process, culminating in a proposal that highlights key emerging trends such as cyber–physical integration, federation across multiple sites, and AI-driven orchestration. Key findings include the identification of clear convergences and divergences in existing taxonomies, along with concrete recommendations for future research directions, such as integrating socio-technical considerations and conducting systematic empirical validation. Our framework aims to guide researchers, developers, and practitioners in designing, implementing, and comparing cyber range solutions. An online Taxonomy Mapping Toolkit was developed to allow the cyber range research community to take advantage of the proposed taxonomy and build upon it as new advancements emerge.

1. Introduction

Cyber ranges are technological platforms designed to simulate organizational network environments [1]. They provide interactive representations of digital systems, tools, and applications and serve as practical training and research environments, mainly for cybersecurity and defense professionals. When implemented appropriately, these platforms can become integral components of the infrastructure used for developing organizational and national digital resilience, enabling curated training against emerging cyber threats.
This introduction first contextualizes cyber ranges within contemporary cybersecurity practices, then identifies gaps and limitations in existing taxonomies, and subsequently defines the novel contributions of this research, as well as the structure of this paper.

1.1. Contextualizing Cyber Ranges

The uses of cyber ranges span from awareness building and employee training to vulnerability discovery and the development of security measures, as well as hosting realistic cyber scenarios for hands-on educational courses that incorporate both physical and virtual components [2,3]. Because cyber ranges offer a structured and adaptable approach to cybersecurity training, the increasing complexity of these digital ecosystems demands continuous research and methodological development in their application [4,5,6,7].
Over time, these environments have been referred to by different names, for example, testbeds or digital training platforms, highlighting the breadth of solutions that fall under the “cyber range” umbrella. In 2013, Davis and Magrath [8] surveyed early platforms and identified various labels such as attack lab and computer network operation (CNO) testbeds. Although these early platforms primarily focused on simulating isolated attack scenarios, they laid crucial groundwork for what has become an increasingly complex and comprehensive area. As the literature has expanded, the need for the categorization and classification of these environments has become apparent.

1.2. Overview and Limitations of Existing Taxonomies

Taxonomies help researchers and practitioners understand and analyze a domain by reducing complexity and identifying similarities and differences among objects [9,10]. Over the past decade, multiple taxonomies have been proposed to categorize and structure the functions and components of cyber ranges.
Among the most influential is the work by Yamin et al. [11], who delivered the first functional-layer taxonomy of cyber ranges. Their framework introduced six foundational dimensions: Scenario, Monitoring, Learning, Management, Teaming, and Environment. This taxonomy offered an organized examination of various capabilities, such as scenario design and user roles (e.g., Red vs. Blue teams), while treating underlying technologies like virtualization and orchestration as subcomponents rather than primary focus areas.
Building on Yamin et al.’s classification, Ukwandu et al. [12] extended the scope with two further dimensions—Econometrics and Recovery—thereby foregrounding cost–benefit analysis and post-incident processes as critical evaluation criteria. These additional dimensions began to tackle the broader question of how organizations can leverage cyber ranges for strategic cost–benefit analyses and system resilience.
Since then, several methods and frameworks have been proposed to categorize and visualize cyber ranges, reflecting the rapid development of the domain. A brief review of publications, as shown in Figure 1, reveals a substantial increase in cyber range studies over the past decade, signalling rising academic interest.
A series of systematic reviews have further shaped the research community’s understanding of cyber ranges. Studies by Chouliaras et al. [2], Russo et al. [13], and Ukwandu et al. [12], for instance, concentrated on scenario design and architecture, each recognizing that modern cyber ranges often require flexible infrastructures capable of replicating complex network conditions. Moreover, a handful of works have highlighted federated or distributed approaches, in which multiple cyber ranges are interconnected to support large-scale or cross-organizational training exercises [14,15,16,17,18,19,20,21,22,23,24,25]. Such trends underscore the increasingly collaborative and scalable nature of cyber ranges, while earlier frameworks tended to focus on single-site or self-contained environments.
Taken as a whole, these existing bodies of work illustrate the evolving landscape of cyber range research. Earlier research provided limited but foundational insights, while contemporary literature broadens the conversation to include advanced virtualization, orchestration, and economic considerations. Despite this expansion, many of the taxonomies in use today still lack a dedicated focus on technology as a separate, high-level dimension, an oversight that risks neglecting the fundamental infrastructure elements that shape each range’s capabilities. The present study builds on these identified strengths and gaps, employing a multiphase approach to refine and merge existing taxonomies into a comprehensive framework that captures both functional and technological aspects. To achieve the stated aims, we defined explicit research objectives (Section 2) to guide the systematic review and taxonomy development process.

1.3. Contribution of This Paper

The contribution of this paper is two-fold. First, this paper contributes to the body of literature by reviewing existing taxonomies, identifying their convergences and divergences, and proposing a refined framework. The goal is to ensure comprehensive coverage of the domain, encompassing both established dimensions—such as Scenario and Monitoring—and evolving ones, such as Technology. As a result, we propose a taxonomy that both maps current research and reflects the field’s ongoing expansion.
Second, although we have tried to improve clarity and reduce redundancy, we recognize that this taxonomy is not exhaustive. Alongside the proposed taxonomy, this paper addresses the challenge of capturing ongoing advancements in cyber range technologies. As part of the research process, a Taxonomy Mapping Toolkit was developed to allow the taxonomy to be expanded and modified, enabling the cyber range research community to take advantage of the proposed taxonomy and build upon it as new advancements emerge. In summary, our study consolidates existing schemes by synthesizing their convergences and divergences and introduces a unified taxonomy that covers both functional and technological dimensions, accompanied by an open-source Taxonomy Mapping Toolkit to facilitate its application and future extension.

1.4. Outline of This Paper

The remainder of this article is structured as follows. Section 2 outlines the materials and methods, detailing the systematic approach used to review the existing taxonomies and develop the proposed framework. Section 3 presents the results of this process, including the proposed taxonomy and its dimensions. Complementing this, Section 4 introduces the supporting software toolkit developed to facilitate taxonomy exploration and application. Section 5 offers a critical discussion, situating the proposed taxonomy within the broader context of cyber range research. Finally, Section 6 concludes the article, summarizing the contributions and outlining directions for future work.

2. Materials and Methods

2.1. Research Objectives and Research Questions

The primary research objectives of this study were as follows:
  • RO1: Systematically identify and analyze existing taxonomies and systematic literature reviews (SLRs) on cyber ranges published between 2014 and 2024.
  • RO2: Evaluate convergences and divergences within existing cyber range taxonomies.
  • RO3: Assess the impact of the recent literature (2019 to 2024) on existing cyber range taxonomies, determining whether an updated taxonomy is needed.
These objectives guided the formulation of the following research questions:
  • RQ1: What existing taxonomies and systematic literature reviews (SLRs) on cyber ranges were conducted between 2014 and 2024?
  • RQ2: Where do existing cyber range taxonomies converge and diverge?
  • RQ3: What influence have recent papers (2019 to 2024) had on current cyber range taxonomies, and is there a need for an updated taxonomy?

2.2. Methodological Framework

To address the research questions, we adopted a three-phase approach inspired by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) [26] guidelines. Each phase was iterative, allowing insights and gaps identified in earlier phases to directly shape subsequent searches and analyses. The choice of PRISMA stemmed from its rigor and capacity to structure both quantitative and qualitative insights [27]. In Phase I, we searched for literature featuring either a novel taxonomy of cyber ranges or a systematic review that examined cyber range characteristics. Building on the findings of Phase I, Phase II focused the search on the literature discussing cyber range architectures, scenarios, and functions from 2019 onwards, thereby covering newly emerging considerations such as advanced technologies or novel applications. Phase III served as the verification stage, during which the refined taxonomy was tested against the Phase II literature to confirm its coverage, resolve any contradictions, and ensure consistency. Each phase informed the subsequent one, ensuring that newly identified gaps or inconsistencies from Phase I were critically re-examined in Phase II and then systematically verified in Phase III. This iterative design allowed for greater confidence in the final proposed taxonomy, as it was repeatedly tested and updated at each step.
In developing our taxonomy, we relied on both an empirical inductive and a theoretical deductive approach. We began by examining existing taxonomies—observing empirical and factual cases to identify patterns. Through discussion and statistical techniques, we gathered the dimensions and characteristics agreed on by subject-matter experts and then reconciled any divergences that arose. Next, we drew on conceptual or theoretical foundations, guided by existing frameworks and the input of experts. In this step, we refined or modified the taxonomy derived in the inductive phase to ensure theoretical soundness. Combining these methods followed Bailey’s [10] three-level indicator model. It incorporated both “empirical to deductive” and “deductive to empirical” reasoning, ensuring that the final taxonomy remained grounded in real-world usage yet was also supported by established theory.
To ensure that our taxonomy was both valid and practical [9,10], we imposed two restrictions on every dimension:
  • Mutually exclusive: No single entity could be assigned two different characteristics within the same dimension;
  • Collectively exhaustive: Every entity needed to fit at least one characteristic in each dimension.
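The two restrictions above can be checked mechanically once a classification is written down. The following is a minimal sketch in Python; the dimension and characteristic names are invented examples for illustration, not drawn from the reviewed taxonomies.

```python
# Hypothetical illustration of the two restrictions; dimension and
# characteristic names are invented examples, not drawn from the
# reviewed taxonomies.

taxonomy = {
    "Environment": {"physical", "virtual", "hybrid"},
    "Teaming": {"red", "blue", "purple"},
}

# Each classified object maps every dimension to the set of
# characteristics assigned to it.
objects = {
    "range_a": {"Environment": {"virtual"}, "Teaming": {"red"}},
    "range_b": {"Environment": {"physical", "virtual"}, "Teaming": {"blue"}},
}

def violations(taxonomy, objects):
    """Return (object, dimension, reason) tuples for restriction breaches."""
    found = []
    for name, assignment in objects.items():
        for dim, allowed in taxonomy.items():
            assigned = assignment.get(dim, set())
            if len(assigned) > 1:       # mutual exclusivity violated
                found.append((name, dim, "more than one characteristic"))
            if not assigned & allowed:  # collective exhaustiveness violated
                found.append((name, dim, "no valid characteristic"))
    return found

# range_b breaks mutual exclusivity in the Environment dimension
print(violations(taxonomy, objects))
```

Together, the two checks enforce that every object carries exactly one characteristic per dimension, which is the combined effect of the restrictions stated above.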
Beyond those restrictions, we focused on three guiding principles while conducting the research. Our taxonomy needed to be concise, including a manageable number of dimensions and characteristics. This was because overly complex schemes can exceed cognitive limits [9]. It needed to remain robust; each dimension needed to be sufficiently detailed to differentiate the entities of interest. A taxonomy with too few dimensions or characteristics risked failing to distinguish objects meaningfully. Finally, the taxonomy needed to be comprehensive, aiming for both empirical completeness—able to classify all known objects in the domain [28]—and conceptual completeness—including all dimensions relevant to the objects of interest, consistent with the idea that “typologies must provide complete descriptions of each ideal type” [28].
Additionally, the taxonomy was designed to be extensible, meaning that new dimensions or characteristics can be added as technology and research on cyber ranges evolve. As Bailey [10] notes, static taxonomies quickly become obsolete. Through accompanying explanations, we ensured that the taxonomy can help identify where an object belongs, even if not every property is fully known [9,10].

2.3. Search Strategy and Information Sources

This study was based on two main databases, DTU Findit (findit.dtu.dk) and Google Scholar (scholar.google.com). (All queries on DTU Findit were conducted after logging in to the database as DTU students; the number of returned papers can vary significantly depending on whether the user has a valid student account.) DTU Findit was selected for its comprehensive access to relevant academic journals and institutional subscriptions pertinent to cybersecurity research, while Google Scholar was included to broaden coverage, ensuring the identification of literature and scholarly works that might not be indexed in traditional academic databases. The Boolean search expressions evolved slightly from Phase I to Phase II to capture all of the aforementioned aspects. In Phase I, where the primary emphasis was on identifying papers that either introduced a cyber range taxonomy or conducted a systematic review of existing taxonomies, the Boolean expression used was “cyber range” in title AND (“review” OR “taxonomy” OR “survey” in text). In Phase II, the scope included “technology”, “architecture”, and “infrastructure”, recognizing, based on the literature reviewed in Phase I, that these aspects would best help maintain the research focus and capture relevant recent developments, especially regarding how cyber range platforms are built, deployed, or integrated. The Boolean expression for Phase II was the following: “cyber range” in title AND (“technology” OR “architecture” OR “infrastructure” in text). In Phase III, the same keyword search was conducted as in Phase II to specifically map the technology, architecture, and infrastructure aspects once again.
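As a rough sketch, the phase-specific Boolean expressions can be composed programmatically, which helps keep queries consistent across reviewers and search dates. The generic `intitle:` form below is an assumption for illustration; DTU Findit and Google Scholar each have their own syntax for title- and full-text-restricted searches.

```python
# Sketch of composing the phase-specific Boolean queries. The query
# form is an illustrative assumption; each database uses its own
# syntax for restricting terms to the title or full text.

PHASE_TERMS = {
    "I": ["review", "taxonomy", "survey"],
    "II": ["technology", "architecture", "infrastructure"],
    "III": ["technology", "architecture", "infrastructure"],
}

def build_query(phase):
    """Compose a '"cyber range" in title AND (term OR ...)' query string."""
    terms = " OR ".join(f'"{t}"' for t in PHASE_TERMS[phase])
    return f'intitle:"cyber range" AND ({terms})'

print(build_query("I"))
# intitle:"cyber range" AND ("review" OR "taxonomy" OR "survey")
```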
In addition to database queries, backward snowballing [29] was employed at each phase to identify foundational works referenced by the most relevant articles, while forward snowballing helped to find newly published material citing key texts uncovered in earlier iterations. Expert recommendations also supplemented our search with one additional article, ensuring that this study incorporated seminal or high-impact works that might not have surfaced through database searches alone. (Step 1 was last searched and consulted on 25 September 2024, and Steps 2 and 3 on 31 December 2024. No other papers outside those information sources were searched or consulted.)

2.4. Eligibility Criteria

Explicit inclusion and exclusion criteria were defined at each phase, inspired by the PRISMA framework, based on both study characteristics (content relevance, language, publication date) and report characteristics (availability of full text, empirical or theoretical grounding). The initial phase (Phase I) focused on foundational taxonomy-oriented studies and systematic literature reviews (SLRs) published between 2014 and 2024; its inclusion and exclusion criteria are shown in Table 1. In Phase I, only studies published in English from 2014 onward that presented a novel taxonomy or a systematic literature review on cyber ranges were admitted. Phases II and III broadened the focus to incorporate studies with an explicit emphasis on cyber range architectures, scenarios, or technological underpinnings, while limiting the time frame to articles published between 2019 and 2024 (Table 1). Throughout all phases, papers that did not contribute substantially to taxonomy-related discussions, or whose full texts remained inaccessible due to paywalls even after institutional login, were excluded.

2.5. Selection Process

Following PRISMA 2020 guidelines [26], we began by merging all records from our database searches, after which duplicates were identified and removed. Titles and abstracts were then examined to ascertain eligibility, guided by the criteria established for each phase. Throughout the selection process, team members independently screened subsets of records, followed by consensus meetings to resolve discrepancies. To enhance consistency and reduce subjective bias, two custom-built tools were introduced in Phases II and III, supporting automated keyword mapping and iterative taxonomy refinement. Detailed descriptions of these tools are provided in Section 4. Where abstracts provided insufficient clarity, the research team consulted the full text to resolve uncertainties.
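The merging and de-duplication step described above can be sketched as a small routine. The record fields and the normalization rule (matching on whitespace- and case-normalized titles) are illustrative assumptions, not the actual extraction schema used in this study.

```python
# Minimal sketch of merging search results and removing duplicates.
# Field names and the title-based matching rule are illustrative
# assumptions, not the study's actual extraction schema.

def merge_and_dedupe(*sources):
    """Merge record lists, dropping duplicates by normalized title."""
    seen, merged = set(), []
    for records in sources:
        for rec in records:
            key = " ".join(rec["title"].casefold().split())
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    return merged

findit = [{"title": "Cyber Range Survey", "source": "DTU Findit"}]
scholar = [
    {"title": "cyber range survey ", "source": "Google Scholar"},  # duplicate
    {"title": "Another Paper", "source": "Google Scholar"},
]
print(len(merge_and_dedupe(findit, scholar)))  # → 2
```

In practice, reference managers also match on DOIs and fuzzy title similarity, since titles alone can differ in punctuation across databases.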
Throughout Phase I, a standardized data extraction form was used to capture key information from each included paper, such as publication details (year, authors, venue), study design (quantitative, qualitative, or mixed methods), and explicit or implicit references to any cyber range dimensions. Particular attention was paid to whether a paper introduced new taxonomy elements or reinterpreted existing ones.
Figure 2 details the Phase I identification and screening process that yielded the core corpus for our scoping review. We located a total of 242 records through bibliographic databases (2 records), DTU Findit (24 records), and Google Scholar (216 records). Prior to screening, 28 duplicates and 18 pay-walled items were removed, leaving 194 titles and abstracts for initial relevance screening. Of these, 144 were excluded, primarily for lacking an explicit cyber range focus, so 50 full texts were sought and retrieved without loss. During eligibility assessment, 33 papers were excluded because they presented neither a cyber range systematic review nor a cyber range taxonomy, reducing the pool to 17. Backward snowballing contributed one additional eligible study, bringing the final Phase I inclusion count to 18.
Phase II deepened this understanding by mapping each newly identified study onto the baseline taxonomy, refining certain dimensions or merging overlapping concepts.
Figure 3 follows the PRISMA 2020 convention and makes our Phase II screening process transparent. Two bibliographic sources—DTU Findit (159 records) and Google Scholar (196 records)—delivered the bulk of the material, while expert input and backward snowballing supplied 39 additional papers. After 91 duplicates were eliminated, 36 pay-walled items and 36 records that did not include a taxonomy or literature review were excluded, leaving 196 titles and abstracts to be screened. A further 14 items were removed for irrelevance at this stage, leaving 182 full texts for eligibility assessment; none failed retrieval. Of the supplementary 39 papers, 7 could not be obtained and 23 proceeded to eligibility checks. Ultimately, 206 studies satisfied all inclusion criteria and fed directly into the refinement of our taxonomy.
By Phase III, the research team had assembled a set of dimensions and sub-characteristics that reflected both well-established and emerging features in the cyber range literature. In synthesizing the findings, we maintained close alignment with the overarching research questions. RQ1 guided the compilation of existing taxonomies and SLRs, RQ2 oriented our assessment of convergent and divergent features, and RQ3 illuminated how new research could require further updates. The iterative process of data collection and mapping ensured that any gaps identified at an earlier stage were addressed by the time we reached the final proposed taxonomy.

2.6. Risk and Bias Assessment

Since many of the included studies themselves used systematic literature review methods, the research team classified them as having a lower risk of bias. Studies lacking an explicit methodology were assigned a moderate risk classification and underwent extra scrutiny to confirm the originality or validity of their taxonomy contributions. In Phases II and III, the keyword mapping tool served as an additional safeguard, systematically identifying and highlighting relevant text segments. This approach reduced the likelihood of relevant material being overlooked or subjectively excluded and supported a more uniform application of our inclusion criteria, especially across multiple reviewers.
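The kind of keyword mapping used as a safeguard here can be sketched as follows. The keyword list and the sentence-level matching are invented assumptions for illustration; they are not the actual vocabulary or logic of the tool described in Section 4.

```python
import re

# Hypothetical sketch of keyword mapping: flag the sentences of a
# record that mention taxonomy-relevant terms so every reviewer sees
# the same candidate passages. The keyword list is an invented example.

KEYWORDS = ["orchestration", "virtualization", "federation", "scenario"]
PATTERN = re.compile("|".join(map(re.escape, KEYWORDS)), re.IGNORECASE)

def map_keywords(text):
    """Return (sentence, matched keywords) pairs for relevant sentences."""
    hits = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        found = sorted({m.group(0).lower() for m in PATTERN.finditer(sentence)})
        if found:
            hits.append((sentence, found))
    return hits

sample = "The range uses KVM virtualization. Scoring is manual."
print(map_keywords(sample))
# [('The range uses KVM virtualization.', ['virtualization'])]
```

Because every reviewer sees the same highlighted passages, disagreements shift from "what did you notice?" to "how should this passage be classified?", which is easier to resolve in consensus meetings.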
Throughout each phase, we adhered to two central restrictions inspired by Nickerson et al. [9] and Bailey [10]: (1) mutual exclusivity, ensuring that no object could be assigned two different characteristics within the same dimension, and (2) collective exhaustiveness, requiring that every identified object fit naturally into at least one characteristic in each dimension. Furthermore, we adhered to conciseness, robustness, and comprehensiveness [28] to maintain a practical but thorough taxonomy.

2.7. Use of Generative AI

Generative AI tools (specifically, ChatGPT powered by OpenAI GPT-4o) were used during the writing process to assist with language editing, figure caption refinement, and the formatting of selected tables. No content was generated autonomously; all outputs were critically reviewed, edited, and verified by the authors to ensure accuracy and academic integrity.

3. Proposed Taxonomy

To identify prior and current research on cyber ranges, we started by focusing on papers that either (1) developed a cyber range taxonomy or (2) performed a systematic literature review (SLR) of cyber ranges. By doing so, we aimed to create a baseline understanding of how other researchers had categorized cyber ranges and leverage these insights when developing our own baseline taxonomy.
This section is organized into three main parts. The first, Phase I: Taxonomy Reviews, Systematic Literature Reviews, and Baseline Taxonomy, explores existing cyber range taxonomies and systematic literature reviews and highlights where they converge or diverge, emphasizing their findings and contributions. These findings formed our baseline taxonomy. The second part, Phase II: Refined Taxonomy, describes the targeted refinement and expansion of the baseline taxonomy after the second round of the literature review. Finally, Phase III: Proposed Taxonomy presents the resulting taxonomy we propose.

3.1. Phase I: Taxonomy Reviews, Systematic Literature Reviews, and Baseline Taxonomy

Following the methodology outlined in Section 2, we utilized two online research databases—DTU Findit and Google Scholar—to identify and collect relevant publications. In total, we included 18 studies in the final set for Phase I. In the sections below, we discuss these core studies—some focused on proposing or refining taxonomies while others offered systematic reviews. Where a paper had features of both a taxonomy and an SLR, we placed it in the section that best reflected its primary contribution.

3.1.1. Taxonomy Reviews

This section describes the key works that primarily proposed a cyber range taxonomy. Although some papers also had review elements, we classified them here because their central outcome was a taxonomy or framework for organizing cyber range elements.
Yamin et al.’s [11] paper, Cyber Ranges and Security Testbeds: Scenarios, Functions, Tools and Architecture, was one of the earliest comprehensive taxonomies we identified. The authors analyzed works from 2002 to 2018, focusing on both cyber ranges and cybersecurity testbeds (e.g., IoT, cyber–physical systems (CPSs), and SCADA). They initially presented a baseline taxonomy with five dimensions—Monitoring, Scoring, Management, Teaming, and Scenario—but eventually concluded that Environment deserved to be promoted from a characteristic to a stand-alone dimension, given its critical role in shaping cyber ranges’ functionalities. They also created a new dimension called Learning, placing Scoring under it as a characteristic. The final taxonomy by Yamin et al. [11] had these dimensions:
  • Scenario: Describes the life-cycle of the cybersecurity scenario (creation, generation, editing, and execution).
  • Monitoring: Covers the methods, dashboards, and tools used to track cyber range data or overall system health.
  • Learning: Defines how a cyber range supports tutoring and after-action analysis (including scoring).
  • Management: Concerns range management roles, interfaces, command and control, etc.
  • Teaming: Describes different teams (e.g., Red, Blue) that might participate in an exercise.
  • Environment: Focuses on the type of environment (e.g., physical, virtual) in which a cyber range is run, along with any event-generation tools.
As this paper combined cyber ranges and testbeds in a single taxonomy, much of the subsequent research built upon or refined this foundational work.
Ukwandu et al.’s [12] paper, A Review of Cyber-Ranges and Test-Beds: Current and Future Trends, was another foundational taxonomy found in our review. It covered studies from 2015 to 2020, arguing that the surge in recent publications may have affected Yamin et al.’s [11] earlier taxonomy. Ukwandu et al.’s [12] taxonomy was more granular, but it aligned with the work of Yamin et al. [11] on many core ideas. Key modifications included the following:
  • Type: Elevated to a dimension (Yamin et al. [11] treated it as a characteristic of Environment).
  • Econometrics: Added as a new dimension, focusing on the economic consequences of participant actions (e.g., cost impacts during scenarios).
  • Recovery: Introduced as its own dimension, ensuring that policies, patches, and post-incident measures were up to date.
  • Testbeds: Added as a distinct dimension, effectively creating a mini-taxonomy within the broader taxonomy to capture how testbeds fit into a cyber range.
Knüpfer et al.’s [30] paper, Cyber Taxi: A Taxonomy of Interactive Cyber Training and Education Systems, took a different route. Rather than a systematic review, it relied heavily on the authors’ practical experiences, summarized by the following quote: “Theoretical knowledge is good, practical proficiency is better.” ([30] p. 2). While they acknowledged Yamin et al.’s [11] taxonomy, they argued that it lacked audience-specific considerations such as target audience, proficiency level, and scoring. Despite its different origin, Knüpfer et al.’s [30] final taxonomy resembled the earlier ones, but highlighted the following:
  • Audience: Split into Sector, Purpose, Proficiency Level, and Target Audience.
  • Training Environment: Defined the type of cybersecurity training. This was where Scenario was categorized, contrasting with its dimension status in the work of Yamin et al. [11] and Ukwandu et al. [12].
Russo et al.’s [13] paper, Enabling Next-Generation Cyber Ranges with Mobile Security Components, addressed the lack of realistic mobile-device support in most cyber ranges. Their taxonomy built on Yamin et al.’s [11] work, but enhanced it to handle mobile devices. They divided the taxonomy into two main categories—Management and Training Environment.
  • Management
    • User Interface/API Gateway: Interfaces or APIs for easier interaction with management functions.
    • Exercise Design: Tools that facilitate building complex, heterogeneous infrastructures and produce key scenario events.
    • Competency Management: Conducts skill-gap analyses and user profiling and designs learning paths.
  • Training Environment
    • Scenario: Emphasizes the software/hardware needed to run a scenario; includes physical systems like IoT devices.
    • Support Services: Enhances realism by interacting directly with the Scenario dimension.
    • Toolset: Attack and defence tools for both Red and Blue teams.
In addition, they added Orchestration, which receives inputs from Exercise Design and creates a sandbox matching its specifications, and Data Collection, which extracts data (service status, logs, etc.) from the Training Environment.
The work by Glas et al. [31] was not strictly a taxonomy paper, but presented the TARGET framework for the Cyber Range Exercise (CRX) design. This framework built on Yamin et al.’s [11] taxonomy, so we include it here. Their systematic review of the CRX design literature (2016–2022) underpinned the framework, making it a hybrid approach using a partial SLR and partial taxonomy-based evaluation tool. Glas et al. [31] aimed to support evaluating CRX design more systematically, ultimately enriching the practical applications of earlier taxonomy work.
In summary, research in this domain is notably diverse; some taxonomies are built on comprehensive systematic reviews, others rely on practical or domain-specific insights, and still others combine both approaches. Yamin et al. [11] laid a broad foundation, covering major areas like Scenario, Monitoring, and Learning, while Ukwandu et al. [12] refined it by adding dimensions for Econometrics, Recovery, and Testbeds. From the rest of the reviewed works, Knüpfer et al. [30] emphasized target-audience factors and proficiency levels, while Russo et al. [13] enhanced Yamin et al.’s [11] model to better integrate mobile devices, adding dimensions like Orchestration and Data Collection. Finally, Glas et al. [31] created a framework for CRX design, informed by Yamin et al.’s [11] taxonomy, though it was not strictly a taxonomy in itself. Table 2 provides a structured overview, illustrating clearly how these key dimensions are represented across the selected taxonomies.

3.1.2. Systematic Literature Reviews

This section focuses on works whose main contribution was a systematic literature review (SLR) of cyber ranges. SLRs contributed by summarizing existing knowledge, clarifying established concepts, and identifying open research questions. A few of these SLR-focused papers also proposed high-level frameworks; however, they are categorized here because their primary contribution was reviewing literature rather than creating a new taxonomy. A summary of our findings is provided in Table 3.

3.1.3. Observed Trends and Gaps

Over the past few years, research has moved toward integrating cyber–physical elements (e.g., specialized hardware, IoT, mobile devices) to offer more realistic training scenarios. This integration of cyber–physical systems introduces new vectors of vulnerability, including side-channel attacks via wireless charging, the manipulation of sensors and voice assistants, and electromagnetic side-channel exploitation. Consequently, emerging defence mechanisms now utilize advanced approaches such as dynamic magnetic authentication, unsupervised anomaly detection, and continuous biometric verification (e.g., vibration or voice liveness detection) to counteract these threats. Recent studies [42,43,44,45,46,47,48,49] highlight these advancements, emphasizing the complexity and evolving nature of realistic cyber–physical scenarios relevant to cyber range implementations. Exercise design—especially the automation of attack/defence lifecycles—has also emerged as a key theme. However, some authors note a lack of comprehensive support for smaller or mobile devices. This gap is being addressed through enhancements like those discussed by Russo et al. [13]. The SLRs illustrate a growing sophistication in cyber range conceptualization, from purely virtual simulations to robust, hardware-integrated platforms capable of emulating real-world attacks and defence strategies.
In the final step of Phase I, we formulated our own baseline taxonomy, incorporating findings from both the taxonomy-oriented studies and the SLR-focused research.

3.1.4. Baseline Taxonomy

Based on our research purpose, we aimed to capture the essential capabilities and structures of a cyber range. Drawing on the reviewed taxonomies and literature reviews, we identified two highly relevant, well-cited cyber range taxonomies: those of Yamin et al. [11] and Ukwandu et al. [12]. Due to its wide academic recognition, we started with the framework of Yamin et al. [11] and enriched it with relevant elements from Ukwandu et al.’s work [12]. In the process, we merged or simplified overlapping items to reduce redundancy. As a result, our baseline taxonomy comprised eight core dimensions:
  • Scenario;
  • Environment;
  • Econometrics;
  • Teaming;
  • Learning;
  • Monitoring;
  • Management;
  • Technology.
Below, we briefly describe each dimension in our baseline taxonomy and discuss how they appear in the literature. We highlight how certain dimensions—such as Scenario and Environment—and their sub-characteristics are used by different authors, ensuring clarity and reducing overlap among dimensions. Where certain sub-characteristics or “nodes” appeared as separate dimensions in other works, we clarify whether we (a) merged them into one of our core dimensions or (b) omitted them entirely because they overlapped significantly with existing dimensions.

Scenario

All the reviewed works included a Scenario dimension (although some authors used slightly different names or descriptions). Examples from the SLRs include those by Katsantonis et al. [50], Lieskovan and Hajný [51], and Yamin and Katt [52]. In Katsantonis et al. [50], scenarios were concrete examples of attacks. In Lieskovan and Hajný [51], the authors discussed various scenario types, such as “Blue vs. Red” exercises or “Capture the Flag,” typically chosen based on the exercise’s purpose.
Unlike in some previous papers, in our baseline taxonomy, Exercise Design, Support Services, and Toolset were not stand-alone dimensions but formed characteristics of Scenario, based on how previous research characterized them. Because these sub-elements all related to how a scenario was planned, executed, and equipped, we omitted them as top-level dimensions and instead incorporated them into Scenario. The content of our baseline scenario dimension is shown in Table 4.

Environment

The Environment dimension refers to the technical and operational context in which scenarios are executed. Across the literature, this was variously labeled as Training Environment [13], Technical Setup, or simply Type [12]. Despite this terminological variation, the underlying concept consistently focused on replicating realistic conditions for cyber exercises.
We followed Yamin et al. [11] in subsuming Attack Type under the Environment dimension, since attacks are configured and enacted as part of the scenario’s technical context. As such, we did not include Attack Type as a separate top-level dimension in our taxonomy.
The Environment dimension encompassed the components that enable a scenario to run and behave realistically. We organized this dimension into two main characteristics, as shown in Table 5.
While the Management dimension governed when and how these elements were triggered, Environment addressed the what—the underlying technical means that made scenario execution possible.

Econometrics

The Econometrics dimension addresses the financial and economic aspects of cyber-attacks and defensive measures. While less frequently cited in the cyber range literature, it was explicitly discussed in works by Ukwandu et al. [12] and Tizio et al. [62]. We included it in our baseline taxonomy to reflect the growing interest among organizations in assessing cost impacts and return on security investments (ROI).
This dimension captured the intersection between technical decision-making and financial evaluation, enabling analyses such as threat forecasting, cost–benefit trade-off, and financial risk estimation. Although under-represented in our results—likely also due to a search emphasis on architectural and technical features—its inclusion offers value for business-oriented stakeholders. We structured Econometrics into five key characteristics, as shown in Table 6.
The Econometrics dimension thus extended the taxonomy to accommodate organizations that prioritize quantifying ROI or evaluating the financial implications of their cyber training strategies.

Teaming

The Teaming dimension refers to the organizational and role-based structure of participants in cyber range exercises. It was covered in several taxonomies [11,12,13,30] and commonly revolved around color-coded teams. By treating Teaming as a stand-alone dimension, we foregrounded its importance in structuring interactions, responsibilities, and objectives within a range scenario. Teaming captured both human and automated participants and their respective function, as shown in Table 7.

Learning

The Learning dimension appeared in several taxonomies, either as its own stand-alone category [11] or as part of a sub-branch, such as “education” in [12]. We defined it as encompassing both after-action analysis (e.g., gap analysis, scoring) and broader instructional support. Some authors, like Russo et al. [13], did not label Learning explicitly, instead referring to similar features under terms like training setup. In our taxonomy, we retained Learning as a distinct and clearly defined dimension to emphasize its pedagogical importance in cyber ranges. We identified four main characteristics, as shown in Table 8.

Monitoring

Monitoring spans the entire lifecycle of a cyber range, tracking system performance, memory usage, logs, and participant activity. While some authors referred to related terms such as Data Collection [13] or Competency Management, these were typically sub-functions of monitoring in practice. Data collection refers to extracting artifacts such as logs or memory dumps, while competency management tracks participant performance and scoring; while partly a learning function, the latter also relies heavily on monitoring data.
In cases where papers divided monitoring tasks more granularly, we consolidated them into a single Monitoring dimension to avoid redundancy. This dimension was extensively covered in the literature, with over 80 references linking it to cyber range implementations. We organized Monitoring into six main characteristics, described in Table 9.
While Teams was included here as a sub-characteristic of Progress, to reflect tracked group performance, the broader Teaming dimension referred to role structure and exercise objectives, not monitoring outputs.

Management

Management is a core component in most cyber range taxonomies, overseeing the system’s operation across its entire lifecycle. This includes responsibilities such as resource handling, user roles, and scenario orchestration. In our framework, we merged two frequently mentioned but overlapping dimensions—User Interface/API Gateway and Orchestration—under the broader umbrella of Management. Yamin et al. [11] described user-facing interfaces and dashboards as part of the Management layer, enabling remote access and the control of cyber range functionalities. Similarly, actions such as creating, modifying, and updating scenarios—often referred to as orchestration tasks—were also consistently framed as management activities in the literature [11,75]. We therefore did not treat these as stand-alone dimensions but as functional components of Management. We identified six functional areas that fell under this dimension, as shown in Table 10.

Technology

Finally, Technology underpins all other dimensions by providing the infrastructure, simulation platforms, automation tools, and emerging solutions like AI or machine learning. We treated Technology as its own dimension so it could capture vital aspects such as simulation platforms for realistic environments, automation tools for scenario execution, scoring, system management, as well as AI, machine learning, or augmented reality solutions that enhance realism or interactivity. As a result of Phase I, we compiled the baseline taxonomy, shown in Figure 4.

3.2. Phase II—Refined Taxonomy

Our baseline taxonomy captured the principal dimensions found in earlier frameworks but left several emergent developments—particularly technological—only partly represented. To ensure that the taxonomy was grounded in a comprehensive and methodologically sound evidence base, we conducted a PRISMA-style search and screening procedure in Phase II. Below, we describe the changes implemented in Phase II of the analysis; dimensions unchanged after this round were not repeated.

Scenario

Two main refinements were necessary in the Scenario dimension. First, Static and Dynamic were distinguished further under the Scenario Type characteristic, as they determined the structural logic of an exercise. Dynamic scenarios were henceforth divided into pre-defined branches—multiple predetermined paths activated by user choice [60]—and AI-driven paths in which a planning agent continually recalculated states and goals [63].
The literature also showed a marked rise in Cyber Range Exercise (CRX) design research. We therefore added a CRX Design characteristic under Scenario and situated the full scenario lifecycle (Creation, Editing, Deployment, Execution) within it. Difficulty Level and external Marketplace support—for example, the scenario repository built into the ECHO platform [89]—were also captured here.
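The distinction between pre-defined branches and AI-driven paths can be illustrated with a minimal sketch. All class and field names below are hypothetical illustrations, not drawn from any cited platform:

```python
# Minimal sketch of the two dynamic-scenario styles described above.
# All names are illustrative; no cited cyber range exposes this exact API.

class PredefinedScenario:
    """Multiple predetermined paths; the participant's choice selects a branch."""
    def __init__(self, branches):
        self.branches = branches  # {choice: ordered list of exercise steps}

    def next_steps(self, choice):
        return self.branches[choice]


class AIDrivenScenario:
    """A planning agent recalculates the next step from the current state."""
    def __init__(self, planner):
        self.planner = planner  # callable: state -> next step

    def next_steps(self, state):
        return [self.planner(state)]


# A branching exercise with two fixed paths vs. an adaptive one.
crx = PredefinedScenario({
    "phish": ["deliver payload", "escalate"],
    "bruteforce": ["lock account", "alert blue team"],
})
adaptive = AIDrivenScenario(
    lambda s: "inject hint" if s["stuck"] else "raise difficulty"
)
```

The pre-defined variant is fully auditable before the exercise starts, whereas the AI-driven variant trades that predictability for adaptivity at run time.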

Environment

The Environment Type now included Cyber–Physical systems, acknowledging ranges that combined operational technology (OT) with traditional IT. Because Physical and Hardware were found to be semantically indistinguishable in the literature, Physical was dropped and Hardware retained. A new Vulnerability Injection characteristic was added under Generation Tools to describe environments pre-seeded with known flaws [90,91].

Monitoring

Empirical mapping in Phase II showed that most monitoring activity clustered at the Network and Application layers; these two layers were therefore added explicitly beneath the Layer characteristic. The Monitoring dimension now comprised five characteristics—Method, Dashboard, Layer, Performance, and Progress. The Tools characteristic was removed, as its contents largely overlapped with the newly populated Technology dimension.

Management

A separate Event Orchestration characteristic was introduced. Whereas Generation Tools in the Environment dimension described the artifacts that could be produced (traffic, users, attacks), Event Orchestration governed when and how such artifacts were instantiated during an exercise. The other Management characteristics (Lifecycle, Resource, Remote Access, Interface, Role, Command and Control) remained unchanged but were harmonized linguistically.

Technology

The Phase II systematic literature review yielded a fully developed Technology dimension for cyber ranges. In Phase I, this dimension was deliberately left empty to avoid imposing assumptions about its eventual contents. Instead, our goal was to let empirical evidence guide its construction. As the team read and analyzed research on cyber ranges, we documented every relevant technological aspect mentioned across the literature. Following the first review cycle, these observations were consolidated into a set of emerging technological characteristics.
A recurring pattern across the literature was the distinction between a cyber range’s capabilities—what could be done on the range, such as simulation or scoring—and its technologies—the underlying means by which these capabilities were implemented. Earlier taxonomies tended to emphasize the former, not distinguishing the implementation layer. Our intention in this phase was to address that imbalance by capturing these technological enablers. The added Technology dimension with its characteristics is shown in Table 11.
  • Connection Technologies
    Technologies such as VPN, TCP/IP, and peer-to-peer (P2P) links enable secure communication between components and serve as foundational infrastructure for both stand-alone and federated cyber ranges.
  • Virtualization Layer
    This characteristic encompasses the different methods used to simulate or emulate computing environments [4,84]. We distinguished between Traditional Virtual Machines (VMs), Containerization, and Cloud Virtualization (including public, private, and hybrid models):
    • Cloud Virtualization: Uses public or private cloud platforms (e.g., AWS, Azure) to provide virtual machines, storage, and network resources on demand [93,100].
    • Hybrid Cloud: Combines the scalability of public infrastructure with the control and security constraints of a private cloud [35].
    • Containerization: Employs tools such as Docker and Kubernetes to create lightweight, rapidly deployable environments suitable for high-tempo training scenarios [62,94,97].
    • On-Premise Virtualization: Some cyber ranges are hosted entirely on internal hardware, typically in military or critical-infrastructure contexts, where maximum control is required [92,99].
    • Traditional Virtualization: Involves hypervisor-based virtual machines that allow multiple isolated operating systems to run concurrently on a single physical machine [79].
    • SaaS Integration: Software-as-a-Service solutions enable access to pre-configured training environments without requiring extensive local setup [79,101].
  • System Architecture
    This characteristic reflects how the components of a cyber range are structurally organized. Across the literature, three primary architectural models were discussed: Monolithic, Microservice-based, and Federation-based:
    • Federation: Federation serves both as an architectural style and a technical configuration. In the context of cyber ranges, it typically involves linking multiple geographically or organizationally distinct systems. Virág et al. [92] outlined four connectivity models: (1) Layer 1 physical interconnection, (2) Layer 2 datalink interconnection, (3) Software-Defined WAN (SD-WAN), and (4) Layer 3 logical interconnection, with the ECHO H2020 project identifying the latter—specifically a client-to-server VPN model—as the most scalable and practical solution for federated cyber ranges.
    • Monolithic: This design consolidates all services into a single deployable unit. While this simplifies maintenance for small-scale deployments, it becomes less efficient at scale [95].
    • Microservice: A more modular approach, where individual functions are developed and deployed independently. Microservice architectures support elasticity and scalability but require more complex orchestration and monitoring [101,102].
  • Artificial Intelligence
    AI technologies are gradually being integrated into cyber ranges, particularly in the domains of automation, threat detection, and adaptive learning.
    • Conventional Machine Learning (ML) and Deep Learning: These techniques are used for tasks such as digital twin generation and vulnerability detection [62,96].
    • Large Language Models (LLMs): As part of this rapidly evolving field, LLMs are beginning to be used for log summarization and scenario narration.
  • Security Tools
    These include both offensive and defensive tools, such as intrusion detection systems (e.g., Snort), deep packet inspection modules, and honeypots. These tools are typically made available to participants as part of exercise scenarios [97].
  • Monitoring Tools
    This characteristic was moved to Technology from Monitoring as the technologies track both the technical health of the cyber range and user activity during exercises. Tools such as Zeek and ELK provide visibility into network traffic, system logs, and user behavior, supporting real-time scoring and after-action analysis [94].
  • Database
    This characteristic encompasses the storage solutions used for scenario data, telemetry, vulnerability records, and user performance logs. Depending on the use case, relational, document-based, or vector databases may be employed [99].
  • Orchestration Tools
    Orchestration tools automate the deployment, configuration, and tear-down of cyber-range scenarios. TOSCA and CRACK support declarative infrastructure-as-code approaches, while Kubernetes orchestrates container-based environments. These tools are crucial for enabling repeatability and scalability in complex training scenarios [96,98].
In sum, the Technology dimension comprised a diverse set of implementation-oriented characteristics, ranging from low-level connectivity and virtualization to advanced orchestration and artificial intelligence. By distinguishing between capabilities (what a cyber range can do) and the technologies (how those capabilities are implemented), our taxonomy offers a more complete picture of the architectural and technical foundations of cyber range systems.

3.3. Phase III—Proposed Taxonomy

The third and final phase (Phase III) served as both verification and finalization: building on the baseline taxonomy established in Phase I and refined in Phase II, we applied the refined taxonomy to all previously included papers to confirm coverage and the mutual exclusivity of each dimension. The second refinement cycle had already resolved most structural inconsistencies identified during the initial mapping of sources; Table 11 summarizes the Technology dimension as retained at the end of Phase II. Phase III then consolidated and removed superfluous characteristics, as detailed in Table 12. The proposed taxonomy presented in this section therefore reflects a final round of simplification and abstraction, guided by three design criteria:
  • Conceptual necessity: A characteristic was retained only if it captured variability demonstrably discussed in the literature.
  • Non-redundancy: Overlap between dimensions was eliminated wherever possible.
  • Longevity: Items that risked rapid obsolescence (e.g., specific operating systems) were omitted in favor of higher-level abstractions.
Table 12. Proposed cyber range taxonomy (Phase III): dimensions, characteristics, and sub-characteristics.
  • Scenario
    • Type: Static; Dynamic (AI-Based, Pre-defined)
    • Storyline
    • Purpose: Collaboration; Awareness; Skills; Experimentation; Security Testing
    • CRX: Marketplace; Level; Lifecycle (Creation; Editing; Deployment; Generation; Recovery; Execution)
    • Domain
    • Gamification
    • Sector
  • Environment
    • Type: Simulation; Emulation; Hybrid; Cyber–Physical; Hardware; Federation
    • Generation: User; Traffic; Attack; Vulnerability Injection
  • Teaming
    • Type: Blue; Red; Other
    • Agent: AI-Based; Autonomous
  • Learning
    • Tutoring
    • After-Action Analysis
    • Scoring
    • Type
    • Method
  • Monitoring
    • Method
    • Performance
    • Dashboard
    • Progress: Actions; Inputs; Path; Teams; Users
    • Layer: Network; Application
  • Management
    • Resource
    • Role
    • Interface
    • Remote Access
    • Command and Control
    • Data Storage
    • Meta Data
  • Technology
    • Tools: Security; Monitoring; Orchestration
    • Virtualization: Traditional (Virtual Machine, Hypervisor); Containerization; Cloud (SaaS, Public Cloud, Private Cloud, Hybrid)
    • Federation
    • On-Premise
    • AI: Conventional ML; Deep Learning; LLM

Key Modifications

As the goal for Phase III was mainly to verify the structure of the Technology dimension, the majority of modifications in Phase III concerned this dimension. Two items—Operating System and Connection—were deemed conceptually superfluous. Every cyber range host necessarily runs an operating system, and every distributed system relies on basic network protocols such as TCP/IP or VPN tunneling. Their explicit inclusion therefore violated the non-redundancy criterion.
All software artifacts deployable within a cyber range—whether for orchestration, monitoring, or protection—were now housed under a single characteristic, Tools. This merger removed the prior duplication between the Monitoring, Management, and Technology dimensions and foregrounded the commonality of such artifacts as installable, configurable components.
The earlier distinction between “conventional” and “traditional” virtualization was also removed. Instead, the final taxonomy now differentiated between three virtualization approaches: Traditional Virtual Machines, which rely on hypervisor-based architectures; Containerization, exemplified by tools such as Docker, LXC, and Kata; and Cloud Virtualization, which refers to deployment on public, private, or hybrid cloud infrastructures.
Our Phase III literature review showed that the monolithic vs. microservice distinction seldom influenced cyber range design decisions in isolation; they were, rather, manifestations of broader engineering practices. Consequently, the Architecture characteristic was removed. Federation, however, was retained—now directly under Technology—because it governed cross-site resource pooling and latency management and was explicitly discussed in the literature.
To conclude the results of our three-phase systematic review of 206 studies published between 2014 and 2024, we synthesized a seven-dimension taxonomy: Scenario, Environment, Teaming, Learning, Monitoring, Management, and Technology, as shown in Figure 5.

4. Taxonomy Mapping Toolkit

During the development of the taxonomy, it became evident that no existing tool fully supported the requirements for collaborative, iterative, and transparent taxonomy refinement. Initial attempts to manage taxonomy development through spreadsheets proved inadequate, particularly as the taxonomy grew in complexity. Developing and validating three successive cyber range taxonomies demanded a workflow that combined (i) the collaborative editing of a large hierarchical structure with (ii) the remapping of more than 200 full-text records whenever the structure changed.
Consequently, two custom tools were developed: the Taxonomy Mapping Tool (TMT) and the Automated Literature Analysis Tool (ALAT). TMT addressed the need for collaborative taxonomy construction and dynamic literature mapping, while ALAT assisted with the efficient identification of relevant literature passages based on pre-defined search terms.

4.1. Taxonomy Mapping Tool (TMT)

The Taxonomy Mapping Tool was developed with five primary objectives:
  • Mapping literature to taxonomy elements: To allow for the seamless linking of records to specific elements in a taxonomy, ensuring that the taxonomy could be verified against existing research.
  • Gathering and organizing data: To provide a structured way to capture data for each taxonomy element and allow for easy navigation and visualization.
  • Collaboration and consistency: To create a uniform platform where team members could collaboratively map records to the taxonomy in a standardized way, avoiding discrepancies and ensuring consistency.
  • Exposing the taxonomy to the research community: To make the taxonomy and the mapped records accessible to the cyber security community, enabling other researchers to inspect the records, validate the study group findings and contributions, and create customized versions of a taxonomy for their own research purposes.
  • Exporting the taxonomy and literature to spreadsheet file format: To allow tool users to export the taxonomy and its associated literature into a structured spreadsheet file format, ensuring that the taxonomy could be shared, analyzed offline, and integrated with other workflows while maintaining a clear and structured format for the data.
The taxonomy is visualized using a radial layout as an interactive tree structure, as shown in Figure 6.
Key features of the TMT tree visualization include the following:
  • Expand/collapse nodes: Parent nodes can be expanded to reveal their child elements or collapsed to simplify the view. For example, the root node Cyber Range has multiple child nodes such as Scenario, Environment, and Teaming.
  • Interactive controls: Each node includes three interactive icons:
    • (i) The information icon is a clickable button used to view detailed descriptions and map records.
    • (+) The plus icon is a clickable button used to add child elements.
    • (M) The modal icon is a clickable button used to manage literature references.
  • Color coding: Each node is assigned a distinct color to differentiate categories visually. For instance, in Figure 6, the dimension Scenario is colored orange, Environment is colored red, and Teaming is colored green. Using colors enhances readability and helps users quickly identify and group related elements.
  • Dynamic sizing: Nodes dynamically adjust their size based on the length of their labels, ensuring that the text remains legible.
Users can then create new taxonomy elements and edit existing ones through a graphical interface. The TMT also has an expandable sidebar where users can add literature and select literature to map to the taxonomy. The sidebar consists of two parts: The first part contains a form for adding papers, while the second part contains the added literature, where the literature can be searched, edited, or deleted.
When adding new records, the user must enter the title of the paper, the author(s), the date of publication, a URL, and a note summarizing the literature. All fields except the note are required when adding new records. To save the changes, the user must click the blue Add Literature button; the records (i.e., literature) are then added to the list in the sidebar. To return to the normal state, in which new records can be added, the user must click the Deselect Literature button.
The TMT also supports real-time collaboration using a set of back-end cloud computing services from Google Firebase, enabling multiple users to work on the same taxonomy simultaneously. Changes made by one user are immediately visible to all other users, ensuring consistency and improving productivity.
The last key feature of TMT is the ability to export a taxonomy and its associated records into a Microsoft Excel (spreadsheet) file. This functionality allows users to analyze, share, and customize the taxonomy offline. The exported spreadsheet file consists of two sheets, as shown in Table 13 and Table 14. The Categories sheet contains the following columns: Level, indicating the depth of a taxonomy element in the hierarchy; Name, specifying the name of the element; Description, providing a brief summary of the element; Literature ID, listing the literature IDs mapped to the element; and Path, which records the full hierarchical path of the element.
The Literature sheet contains the following columns: ID, which provides a unique identifier for each reference; Title, indicating the title of the reference; Author, listing the author(s); Date, specifying the publication date; and URL, which includes a link to the source.
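The Categories-sheet schema described above can be sketched in a few lines. The sketch below only builds the rows (Level, Name, Description, Literature ID, Path) from a nested taxonomy; the `flatten_taxonomy` helper and the sample taxonomy structure are hypothetical, and a real export would additionally write these rows into the two sheets of an .xlsx workbook via a spreadsheet library:

```python
# Sketch of the TMT Categories-sheet schema (hypothetical helper names).
# Walks a nested taxonomy dict and emits one row per taxonomy element,
# computing the Level (depth) and Path (full hierarchical path) columns.

def flatten_taxonomy(node, level=0, parent_path=""):
    """Emit Categories-sheet rows for a node and all of its descendants."""
    path = f"{parent_path}/{node['name']}" if parent_path else node["name"]
    rows = [{
        "Level": level,
        "Name": node["name"],
        "Description": node.get("description", ""),
        "Literature ID": "; ".join(str(i) for i in node.get("literature", [])),
        "Path": path,
    }]
    for child in node.get("children", []):
        rows.extend(flatten_taxonomy(child, level + 1, path))
    return rows


# Illustrative input; element names follow the taxonomy in this paper,
# but the dict layout itself is an assumption.
taxonomy = {
    "name": "Cyber Range",
    "children": [
        {"name": "Scenario", "description": "Exercise storyline and design",
         "literature": [11, 12]},
        {"name": "Technology", "description": "Enabling infrastructure",
         "children": [{"name": "Virtualization", "literature": [4]}]},
    ],
}

categories_rows = flatten_taxonomy(taxonomy)
```

The Literature sheet would be produced analogously, as one row per record with the ID, Title, Author, Date, and URL columns.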
The TMT was licensed as open-source software, with its source code publicly available in an online repository on GitHub at https://tinyurl.com/5xwv47ft (accessed on 8 June 2025). In summary, the TMT provides an online platform for managing and interacting with taxonomies. Its interactive tree visualization, modal-based information management, and real-time collaboration make it a practical solution for taxonomy creation and literature mapping. The Microsoft Excel export feature further enhances its usability by enabling offline analysis and data sharing between team members. By hosting the tool online and making the repository publicly available, the study group ensures that the tool is accessible and adaptable, encouraging its use and further development by the broader cybersecurity community.

4.2. Automated Literature Analysis Tool (ALAT)

In addition to the TMT, during the iterative taxonomy development, the study group encountered instances where changes required re-mapping the literature to ensure consistency. This process proved to be both time-consuming and prone to oversight. The need for a tool in the toolkit to streamline this process led to the creation of the Automated Literature Analysis Tool (ALAT).
The ALAT automates searching for relevant terms in a folder of PDF documents and outputs the results in a structured Microsoft Excel file. While it does not replace manual validation, ALAT acts as a guide, highlighting relevant terms and linking them to specific documents. It was developed with three primary objectives:
  • Automating term identification: To streamline the process of identifying relevant terms in an extensive collection of PDF documents based on a set of pre-defined search terms.
  • Summarizing occurrences: To provide a structured and concise summary of the frequency and distribution of these terms across all analyzed documents, enabling better insights into their relevance.
  • Guiding manual review: To serve as a guideline for researchers by highlighting key terms within each record, allowing for faster and more focused manual validation.
ALAT takes as input a Microsoft Excel file in which each row specifies a primary term and its associated sub-terms. ALAT then generates a new Microsoft Excel file with multiple sheets:
  • Literature list: Lists analyzed records with their reference numbers.
  • Summary: Summarizes main term and sub-term occurrences across all records.
  • Details: Provides detailed term counts for each record.
  • Word references: Maps terms to the records they appeared in.
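The counting step behind the Summary, Details, and Word-references sheets can be sketched as follows. This is a simplified illustration under stated assumptions: the real ALAT extracts text from PDFs and writes multi-sheet Excel output, whereas this sketch operates on plain strings, and the `analyze` function name and data layout are hypothetical:

```python
import re
from collections import defaultdict

# Sketch of ALAT's term-counting step (hypothetical names and layout).
# documents: {record_id: extracted text}; search_terms: {primary: [sub_terms]}.

def analyze(documents, search_terms):
    """Return per-record counts (Details), totals (Summary),
    and a term -> record-ID map (Word references)."""
    details = {rid: defaultdict(int) for rid in documents}
    word_refs = defaultdict(set)
    for rid, text in documents.items():
        for primary, subs in search_terms.items():
            for term in [primary, *subs]:
                hits = len(re.findall(re.escape(term), text, flags=re.IGNORECASE))
                if hits:
                    details[rid][term] += hits
                    word_refs[term].add(rid)
    # Summary sheet: total occurrences of each term across all records.
    summary = defaultdict(int)
    for counts in details.values():
        for term, n in counts.items():
            summary[term] += n
    return details, dict(summary), {t: sorted(r) for t, r in word_refs.items()}


docs = {1: "Federated cyber ranges use VPN tunnels.", 2: "AI-driven orchestration."}
details, summary, refs = analyze(docs, {"federation": ["federated"], "AI": []})
```

In the real tool, each returned structure would populate one of the output sheets, with the Literature list sheet mapping record IDs back to full references.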
While ALAT reduces the time required for record mapping, the output should not be taken as an absolute truth; instead, it serves as a guideline for manual review. Highlighting relevant terms and linking them to records enables faster, more focused analysis while still requiring human validation for accuracy.
Combined, these tools support the methodology outlined in this study by improving collaboration, enhancing transparency, and reducing manual overhead in taxonomy development. By releasing the tools as open-source software and hosting publicly available instances, the research team ensures that the broader cybersecurity community can inspect, reuse, and build upon our work.

5. Discussion

This study culminated in a seven-dimension cyber range taxonomy—Scenario, Environment, Teaming, Learning, Monitoring, Management, and Technology—that synthesized convergences and resolved divergences uncovered in 18 prior taxonomies and 206 additional studies reviewed during Phases I–III. The literature reviewed could be grouped into studies primarily focused on technical infrastructures and architectures, studies emphasizing pedagogical and training methodologies, and those centered around operational scenarios and exercise designs. The systematic literature reviews (SLRs) helped establish common foundational dimensions and highlighted evolving or under-explored areas, such as federated architectures and AI integration. As discussed in Section 1.1, the proposed taxonomy synthesized these findings and contributes directly to the current state of the art. The refinement process undertaken in this study integrated both functional and technical elements of cyber ranges.

5.1. Key Insights

Our work shows that cyber ranges have expanded well beyond basic testing platforms into scenario-rich environments where real-time events, role-based interactions, and advanced simulations drive training and experimentation. Indeed, Scenario remains a central focus across the literature, reflecting the importance of crafting realistic, dynamic exercises that mirror actual cyber threats. One of the most significant updates in our taxonomy is the creation of the Technology dimension, which addresses these foundational infrastructures—such as orchestration platforms, AI modules, virtualization tools, and networking architectures—underpinning modern cyber ranges. Our taxonomy therefore attempts to bridge capabilities (Scenario, Environment, etc.) with technology, which, in earlier taxonomies (e.g., Yamin et al. [11], Ukwandu et al. [12]), was dispersed across multiple dimensions.

5.2. Design Decisions

We argue that introducing generalized technology concepts (like virtualization, automation, scalability) still allows for adaptability without binding the taxonomy to specific tools (e.g., Docker, Chef). By formally introducing Technology as a stand-alone dimension, our taxonomy now explicitly covers the complex range of tools and systems that enable robust simulation, monitoring, and scaling capabilities.
In parallel, we opted to exclude testbeds from our taxonomy, acknowledging that their usage has evolved too broadly to pinpoint modern cyber range concepts effectively. Although this decision likely omitted certain historical works where “testbed” was synonymous with “cyber range,” it allowed us to keep our focus on core concepts surrounding Technology, Infrastructure, and Architecture.
Likewise, we chose a taxonomy rather than an ontology to maintain a relatively accessible hierarchical structure for practitioners. While an ontology could have captured deeper interrelationships (e.g., synergy between AI-based agents and scenario design), it would also have introduced another level of complexity. Throughout this process, our findings consistently underscored the dynamic nature of cyber ranges, driven both by technological advances and evolving training demands.
The question of whether to include Econometrics in our proposed taxonomy was one of the dilemmas of this study. On the one hand, financial and economic considerations—such as cost–benefit analyses, return on investment, and budgetary constraints—can significantly influence how organizations adopt cyber range solutions [12]. On the other hand, our scoping review revealed that these factors appear only sporadically in the literature, with insufficient empirical grounding or standardized methodologies to measure them. Moreover, Econometrics represents just one facet of a broader socio-technical landscape, encompassing political, organizational, and human behavioral factors. Including Econometrics as the sole socio-technical dimension might skew classification unduly toward financial considerations while overlooking important influences like policy, governance, and workforce structures. Consequently, to maintain a focused, technologically grounded taxonomy, we opted to exclude Econometrics from the final framework, prioritizing the more widely covered technological and functional dimensions. At the same time, we acknowledge that financial modeling and cost–benefit analyses are promising avenues for future work, potentially informing how resource constraints, risk assessments, and long-term sustainability factor into cyber range design and implementation.
Finally, the iterative approach that led from an initial baseline taxonomy to the present proposed taxonomy illustrates how classification efforts in this domain are both systematic and exploratory. Consolidating Technology under a separate dimension greatly reduced duplication, yet it introduced new challenges, including naming overlaps (e.g., how “tools” appeared under Scenario, Monitoring, and Technology) and a higher level of abstraction in certain areas. Such trade-offs are inherent in taxonomy development and point to the need for ongoing re-validation as the cyber range landscape continues to evolve technologically, organizationally, and in regulatory terms.

5.3. Emerging Trends

Our analysis identified several notable shifts influencing cyber range development and utilization. Firstly, virtualization technologies continue to diversify, moving away from traditional monolithic architectures toward flexible, containerized micro-ranges and hybrid-cloud deployments. This evolution reflects a broader trend toward scalability and adaptability, particularly for large-scale, national-level exercises.
Secondly, the role of AI in cyber range environments is growing from exploratory proof-of-concept stages to integrated functionalities. Although recent studies [35,38] illustrate promising AI-driven adaptations in scenario generation and Red team automation, critical challenges remain unresolved. Limited high-quality training data, opaque decision-making processes hindering post-exercise analyses, and the potential for overly autonomous systems to surpass learners’ comprehension highlight the necessity for further research into explainable and pedagogically aligned AI systems.
Finally, federation is increasingly recognized as a preferred architecture for multi-organizational training. Recent implementations emphasize balancing resource sharing with organizational autonomy, employing peer-to-peer VPN or message-bus infrastructures. Despite clear benefits in scenario expansion and collaborative training, federation introduces substantial complexity in orchestration interoperability, latency management, and maintaining the trustworthiness of shared telemetry data. The proposed taxonomy’s newly articulated Federation sub-characteristic explicitly addresses these complexities, assisting stakeholders in aligning architectural choices with specific training objectives and constraints.

5.4. Organizational and Training Implications

The updated Teaming and Learning dimensions acknowledge that cyber ranges are no longer used solely for Red-versus-Blue drills. Autonomous agents, AI-augmented coaches, and fine-grained progress dashboards enable truly adaptive learning paths.

5.5. Addressing the Research Questions

RQ1: What existing taxonomies and systematic literature reviews on cyber ranges were conducted between 2014 and 2024?
Our review identified 18 primary taxonomies plus 206 supporting studies, confirming wide agreement on seven foundational dimensions while exposing conceptual drift in advanced areas such as AI and federation. The literature review identified several foundational taxonomies, most notably from Yamin et al. [11] and Ukwandu et al. [12], which form the backbone of scholarly discourse in this area. More recent systematic reviews broadened the scope to include scenario design [12,13], federation [14,15,16,17,18,19,20,21,22,23,24,25], and advanced orchestration tools [63,64,103]. These collectively show how the domain evolved from basic, VM-centric testbeds to sophisticated environments incorporating diverse dimensions (e.g., Monitoring, Management, Teaming).
RQ2: Where do existing cyber range taxonomies converge and diverge? Convergence is strongest around Scenario, Environment, and Learning. Divergences emerge around how to treat advanced topics (AI-driven agents, container-based architectures) and whether to include socio-technical or economic factors. For instance, Yamin et al. [11] treated technology and environment under a single umbrella, whereas Ukwandu et al. [12] introduced separate dimensions for Econometrics and Recovery. These divergences highlight the need for a flexible yet comprehensive taxonomy that captures both well-established and emerging elements of cyber range design.
RQ3: What influence have recent papers had on current cyber range taxonomies, and is there a need for an updated taxonomy? The recent literature underscores the fast-paced evolution of cyber range technologies, particularly around federated architectures, containerization, and AI. The acceleration of these technologies, together with the rise of federated range topologies, justifies an updated taxonomy; our findings confirm that a stand-alone Technology dimension is critical to capturing the complexity of modern infrastructures.

5.6. Limitations and Unresolved Issues

Several limitations identified through our analysis offer opportunities for future research. Firstly, we identified a gap regarding econometric considerations within the cyber range literature. Despite their importance in decision-making, cost–benefit analyses remained sporadic and inconsistent in the literature, making the formalization of an Econometrics dimension difficult and limiting comparability across studies. Moreover, the current taxonomy predominantly captures technical aspects, leaving socio-technical components such as governance structures, organizational cultures, and regulatory alignments unaddressed. The explicit integration of these policy-related and human-factor dimensions represents a valuable next step toward a comprehensive socio-technical analysis of cyber ranges.
Additionally, our systematic review may contain inherent biases due to methodological constraints. Although extensive database searches and backward snowballing enhanced coverage, limitations related to language exclusivity and restricted access to pay-walled literature may have excluded potentially relevant grey literature and studies, thereby affecting comprehensiveness.
Finally, the rapidly evolving technological landscape poses a continuous challenge. Static classifications risk obsolescence as new developments emerge. Consequently, regular updates and re-validation of the taxonomy, ideally leveraging tools like our open-source toolkit, are essential to maintaining its relevance and utility.

6. Conclusions and Future Work

The refinements introduced in this proposed taxonomy mark a concrete step forward in how cyber ranges are conceptualized. By building on foundational works such as those by Yamin et al. [11] and Ukwandu et al. [12], the taxonomy offers a clearer and more complete framework, one that covers both the capabilities of cyber ranges and the technologies that enable them. The main theoretical addition is the Technology dimension, which addresses an existing blind spot by outlining infrastructure components such as Virtualization, Connection Technologies, System Architectures, and Artificial Intelligence.
Beyond that, this work updates existing dimensions—Scenario, Environment, Monitoring, Teaming, Learning, and Management—to better reflect how cyber ranges are used today. Characteristics like Cyber–Physical Systems, Network Monitoring, and Orchestration Tools bring the taxonomy in line with current practices. Still, some gaps remain. Econometrics, for instance, was underdeveloped and therefore excluded. This points to a clear area for future research, one that takes economic and organizational factors into account.
Practically, the taxonomy gives developers, researchers, and educators a structured way to think about and build cyber ranges. Organizations can use the Scenario and Teaming dimensions to design relevant, role-based exercises, while Learning and Monitoring support feedback and performance tracking. The Technology dimension can guide architectural decisions, for example, the choice of containerization, distributed systems, or AI-based orchestration.
While the taxonomy remains grounded in technological and functional perspectives, cyber ranges operate within broader ecosystems that include policy, regulatory, and socio-technical dynamics. Compliance with frameworks such as NIS2 or sector-specific mandates influences scenario design, monitoring scope, and training objectives. Furthermore, human factors, organizational culture, and end-user behavior are increasingly recognized as critical vectors for vulnerability and resilience. Future iterations of this taxonomy may therefore benefit from incorporating dedicated dimensions to account for these socio-technical forces, strengthening the bridge between technical frameworks and real-world cybersecurity governance.

Future Work

Future research should look at linking dimensions introduced in our taxonomy, such as Technology and Management, to established theoretical frameworks in the broader cybersecurity, information systems, and organizational management literature. This will strengthen theoretical grounding and enhance the applicability of the taxonomy across different research domains.
Another valuable avenue for future research is to investigate how cyber range taxonomies align with established learning theories and knowledge development frameworks. This could further clarify how cyber ranges facilitate effective learning and skill acquisition in cybersecurity training contexts.
As this was not in the scope of the current study, future work should address empirical validation of the proposed taxonomy through practical case studies, as well as detailed assessments of the usability, impact, and effectiveness of the Taxonomy Mapping Toolkit in real-world research workflows. Collecting structured user feedback and conducting systematic validation will significantly enhance the applicability of our framework.
Additionally, incorporating socio-technical research represents an important direction for future work. Explicitly exploring how governance structures, organizational culture, regulatory frameworks (such as NIS2 and CER directives), and human behavioral factors influence the adoption, design, and effectiveness of cyber ranges will enrich the taxonomy. This integration can strengthen the connection between technical implementations and real-world cybersecurity resilience.
In conclusion, after three iterations and the use of custom tooling, the result is a proposed cyber range taxonomy that covers both capabilities and technology, is mapped onto a literature corpus of more than 200 items, and is backed by open, reusable software. By bridging functional capabilities and technological advancements, the taxonomy proposed in this study provides a structured basis for future developments in cyber range theory and practice. In line with Nickerson et al.’s reminder that “useful taxonomies are moving targets” [9], both the taxonomy and the tool are released with the intention that they will evolve.

Author Contributions

P.L., J.K., A.S. and N.B.J. contributed equally to this work and share first authorship. Conceptualization, J.K., A.S., N.B.J., P.L. and N.D.; methodology, P.L., J.K., A.S., N.B.J. and N.D.; software, J.K., A.S. and N.B.J.; validation, P.L. and N.D.; investigation, A.S., J.K., N.B.J. and P.L.; data curation, J.K., A.S. and N.B.J.; writing—original draft preparation, A.S., J.K., N.B.J. and P.L.; writing—review and editing, P.L.; visualization, A.S., J.K., N.B.J. and P.L.; supervision, N.D. Generative AI tools (ChatGPT, OpenAI GPT-4o) were used to assist with language editing, table formatting, and drafting figure captions. All substantive content was created, reviewed, and approved by the authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding, and the APC was funded by the authors.

Data Availability Statement

The data supporting the findings of this study are available through the open-access Taxonomy Mapping Tool (TMT) https://tinyurl.com/5xwv47ft (accessed on 8 June 2025). This tool allows exploration, validation, and extension of the proposed taxonomy structure.

Acknowledgments

During the preparation of this manuscript, the authors used the generative AI tool ChatGPT (OpenAI GPT-4o) to assist with language editing, table formatting, and the drafting of figure captions. All AI-generated outputs were critically reviewed, edited, and verified by the authors, who take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI Artificial Intelligence
ALAT Automated Literature Analysis Tool
CI/CD Continuous Integration/Continuous Delivery
CPS Cyber–Physical Systems
CR Cyber Range
CRATE Cyber Range and Training Environment (Sweden’s national cyber-training facility)
CRF Cyber Range Federation
CRX Cyber Range Exercise Design
CRUD Create, Read, Update, and Delete
ECHO European Network of Cybersecurity Centres and Competence Hub for Innovation and Operations
IoT Internet of Things
LLM Large Language Model
ML Machine Learning
OS Operating System
PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RQ Research Question
SaaS Software as a Service
SCADA Supervisory Control and Data Acquisition
SLR Systematic Literature Review
SS Search String
TCP/IP Transmission Control Protocol/Internet Protocol
TMT Taxonomy Mapping Tool
URL Uniform Resource Locator
VPN Virtual Private Network

References

  1. National Institute of Standards and Technology. Cyber Ranges; Technical Report; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2018. [Google Scholar]
  2. Chouliaras, N.; Kittes, G.; Kantzavelou, I.; Maglaras, L.; Pantziou, G.; Ferrag, M.A. Cyber Ranges and TestBeds for Education, Training, and Research. Appl. Sci. 2021, 11, 1809. [Google Scholar] [CrossRef]
  3. Hallaq, B.; Nicholson, A.; Smith, R.; Maglaras, L.; Janicke, H.; Jones, K. CYRAN: A Hybrid Cyber Range for Testing Security on ICS/SCADA Systems, 1; IGI Global Scientific Publishing: Hershey, PA, USA, 2018; ISBN 9781522556343. [Google Scholar] [CrossRef]
  4. Noponen, S.; Parssinen, J.; Salonen, J. Cybersecurity of Cyber Ranges: Threats and Mitigations. Int. J. Inf. Secur. Res. (IJISR) 2022, 12, 1032–1040. [Google Scholar] [CrossRef]
  5. Kampourakis, V.; Gkioulos, V.; Katsikas, S. A systematic literature review on wireless security testbeds in the cyber-physical realm. Comput. Secur. 2023, 133, 103383. [Google Scholar] [CrossRef]
  6. Kampourakis, V. Secure Infrastructure for Cyber-Physical Ranges. In Proceedings of the Research Challenges in Information Science: Information Science and the Connected World, Corfu, Greece, 23–26 May 2023; Nurcan, S., Opdahl, A.L., Mouratidis, H., Tsohou, A., Eds.; Springer: Cham, Switzerland, 2023; pp. 622–631. [Google Scholar] [CrossRef]
  7. Kampourakis, V.; Gkioulos, V.; Katsikas, S. A step-by-step definition of a reference architecture for cyber ranges. J. Inf. Secur. Appl. 2025, 88, 103917. [Google Scholar] [CrossRef]
  8. Davis, J.; Magrath, S. A Survey of Cyber Ranges and Testbeds; Technical Report; Cyber Electronic Warfare Division DSTO Defence Science and Technology Organisation: Edinburg, SA, Australia, 2013. [Google Scholar]
  9. Nickerson, R.C.; Varshney, U.; Muntermann, J. A method for taxonomy development and its application in information systems. Eur. J. Inf. Syst. 2013, 22, 336–359. [Google Scholar] [CrossRef]
  10. Bailey, K.D. Typologies and Taxonomies: An Introduction to Classification Techniques, reprint ed.; Number 102 in Sage University Papers: Quantitative Applications in the Social Sciences; Sage Publications: Thousand Oaks, CA, USA, 2003. [Google Scholar]
  11. Yamin, M.M.; Katt, B.; Gkioulos, V. Cyber ranges and security testbeds: Scenarios, functions, tools and architecture. Comput. Secur. 2020, 88, 101636. [Google Scholar] [CrossRef]
  12. Ukwandu, E.; Farah, M.A.B.; Hindy, H.; Brosset, D.; Kavallieros, D.; Atkinson, R.; Tachtatzis, C.; Bures, M.; Andonovic, I.; Bellekens, X. A Review of Cyber-Ranges and Test-Beds: Current and Future Trends. Sensors 2020, 20, 7148. [Google Scholar] [CrossRef]
  13. Russo, E.; Verderame, L.; Merlo, A. Enabling Next-Generation Cyber Ranges with Mobile Security Components. In Proceedings of the Testing Software and Systems, Naples, Italy, 9–11 December 2020; Casola, V., De Benedictis, A., Rak, M., Eds.; Springer: Cham, Switzerland, 2020; pp. 150–165. [Google Scholar] [CrossRef]
  14. Rouquette, R.; Beau, S.; Yamin, M.M.; Mohib, U.; Katt, B. Automatic and Realistic Traffic Generation In A Cyber Range. In Proceedings of the 2023 10th International Conference on Future Internet of Things and Cloud (FiCloud), Marrakesh, Morocco, 14–16 August 2023; pp. 352–358. [Google Scholar] [CrossRef]
  15. Du, L.; He, J.; Li, T.; Wang, Y.; Lan, X.; Huang, Y. DBWE-Corbat: Background network traffic generation using dynamic word embedding and contrastive learning for cyber range. Comput. Secur. 2023, 129, 103202. [Google Scholar] [CrossRef]
  16. Doussau, A.; Souyris, C.C.P.; Yamin, M.M.; Katt, B.; Ullah, M. Intelligent Contextualized Network Traffic Generator in a Cyber Range. In Proceedings of the 2023 17th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Bangkok, Thailand, 8–10 November 2023; pp. 9–13. [Google Scholar] [CrossRef]
  17. Saito, T.; Takahashi, S.; Sato, J.; Matsunoki, M.; Kanmachi, T.; Yamada, S.; Yajima, K. Development of Cyber Ranges for Operational Technology. In Proceedings of the 2023 8th International Conference on Business and Industrial Research (ICBIR), Bangkok, Thailand, 18–19 May 2023; pp. 1031–1034. [Google Scholar] [CrossRef]
  18. Kombate, Y.; Houngue, P.; Ouya, S. Securing MQTT: Unveiling vulnerabilities and innovating cyber range solutions. Procedia Comput. Sci. 2024, 241, 69–76. [Google Scholar] [CrossRef]
  19. Shin, Y.; Kwon, H.; Jeong, J.; Shin, D. A Study on Designing Cyber Training and Cyber Range to Effectively Respond to Cyber Threats. Electronics 2024, 13, 3867. [Google Scholar] [CrossRef]
  20. Kucek, S.; Leitner, M. Training the Human-in-the-Loop in Industrial Cyber Ranges. In Proceedings of the Digital Transformation in Semiconductor Manufacturing; Keil, S., Lasch, R., Lindner, F., Lohmer, J., Eds.; Springer: Cham, Switzerland, 2020; pp. 107–118. [Google Scholar] [CrossRef]
  21. Sharkov, G.; Odorova, C.T.; Koykov, G.; Nikolov, I. Towards a Robust and Scalable Cyber Range Federation for Sectoral Cyber/Hybrid Exercising: The Red Ranger and ECHO Collaborative Experience. Inf. Secur. 2022, 53, 287–302. [Google Scholar] [CrossRef]
  22. Braghin, C.; Cimato, S.; Damiani, E.; Frati, F.; Riccobene, E.; Astaneh, S. Towards the Monitoring and Evaluation of Trainees’ Activities in Cyber Ranges. In Proceedings of the Model-Driven Simulation and Training Environments for Cybersecurity (MSTEC), Guildford, UK, 14–18 September 2020; Hatzivasilis, G., Ioannidis, S., Eds.; Springer: Cham, Switzerland, 2020; pp. 79–91. [Google Scholar] [CrossRef]
  23. Ciuperca, E.; Stanciu, A.; Cîrnu, C. Postmodern Education and Technological Development. Cyber Range as a Tool for Developing Cyber Security Skills. In Proceedings of INTED2021, the 15th International Technology, Education and Development Conference, Online, 8–9 March 2021; pp. 8241–8246, ISBN 9788409276660. [Google Scholar] [CrossRef]
  24. Oh, S.K.; Stickney, N.; Hawthorne, D.; Matthews, S.J. Teaching Web-Attacks on a Raspberry Pi Cyber Range. In Proceedings of the 21st Annual Conference on Information Technology Education (SIGITE’20), Virtual, 7–9 October 2020; ACM: New York, NY, USA, 2020; pp. 324–329. [Google Scholar] [CrossRef]
  25. Shangting, M.; Quan, P. Industrial cyber range based on QEMU-IOL. In Proceedings of the 2021 IEEE International Conference on Power Electronics, Computer Applications (ICPECA), Shenyang, China, 22–24 January 2021; pp. 671–674. [Google Scholar] [CrossRef]
  26. Page, M.J.; Moher, D.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. BMJ 2021, 372, n160. [Google Scholar] [CrossRef]
  27. Okoli, C. A Guide to Conducting a Standalone Systematic Literature Review. Commun. Assoc. Inf. Syst. 2015, 37. [Google Scholar] [CrossRef]
  28. Bowker, G.C.; Star, S.L. Invisible Mediators of Action: Classification and the Ubiquity of Standards. Mind Cult. Act. 2000, 7, 147–163. [Google Scholar] [CrossRef]
  29. Wohlin, C. Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering (EASE’14), London, UK, 13–14 May 2014; pp. 1–10. [Google Scholar] [CrossRef]
  30. Knüpfer, M.; Bierwirth, T.; Stiemert, L.; Schopp, M.; Seeber, S.; Pöhn, D.; Hillmann, P. Cyber Taxi: A Taxonomy of Interactive Cyber Training and Education Systems. In Proceedings of the Model-Driven Simulation and Training Environments for Cybersecurity, Second International Workshop (MSTEC 2020), Guildford, UK, 14–18 September 2020; Hatzivasilis, G., Ioannidis, S., Eds.; Springer: Cham, Switzerland, 2020; pp. 3–21. [Google Scholar] [CrossRef]
  31. Glas, M.; Vielberth, M.; Pernul, G. Train as you Fight: Evaluating Authentic Cybersecurity Training in Cyber Ranges. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI’23), Hamburg, Germany, 23–28 April 2023; ACM: New York, NY, USA, 2023; pp. 1–19. [Google Scholar] [CrossRef]
  32. Bergan, S.; Ruud, E.M. Otnetic: A Cyber Range Training Platform Developed for the Norwegian Energy Sector. Master’s Thesis, University of Agder, Kristiansand, Norway, 2021. [Google Scholar]
  33. Pavlova, E. Implementation of Federated Cyber Ranges in Bulgarian Universities: Challenges, Requirements, and Opportunities. Inf. Secur. 2021, 50, 149–159. [Google Scholar] [CrossRef]
  34. Oruc, A.; Gkioulos, V.; Katsikas, S. Towards a Cyber-Physical Range for the Integrated Navigation System (INS). J. Mar. Sci. Eng. 2022, 10, 107. [Google Scholar] [CrossRef]
  35. Ear, E.; Remy, J.L.C.; Xu, S. Towards Automated Cyber Range Design: Characterizing and Matching Demands to Supplies. In Proceedings of the 2023 IEEE International Conference on Cyber Security and Resilience (CSR), Venice, Italy, 31 July–2 August 2023; pp. 329–334. [Google Scholar] [CrossRef]
  36. Evans, M.; Purdy, G. Architectural development of a cyber-physical manufacturing range. Manuf. Lett. 2023, 35, 1173–1178. [Google Scholar] [CrossRef]
  37. Päijänen, J. Pre-assessing Cyber Range-Based Event Participants’ Needs and Objectives. Master’s Thesis, JAMK University of Applied Sciences, Jyväskylä, Finland, 2023. Available online: http://www.theseus.fi/handle/10024/816339 (accessed on 8 June 2025).
  38. Bistene, J.V.; Chagas, C.E.d.; Santos, A.F.P.d.; Salles, R.M. Modeling Network Traffic Generators for Cyber Ranges: A Systematic Literature Review; ACM: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  39. Shamaya, N.; Tarcheh, G. Strengthening Cyber Defense: A Comparative Study of Smart Home Infrastructure for Penetration Testing and National Cyber Ranges; KTH School of Engineering Sciences in Chemistry, Biotechnology and Health: Stockholm, Sweden, 2024. [Google Scholar]
  40. Boström, A.; Hylander, A. Selecting Better Attacks for Cyber Defense Exercises: Criteria to Enhance Cyber Range Content; Linköping University, Department of Computer and Information Science: Linköping, Sweden, 2024. [Google Scholar]
  41. Stamatopoulos, D.; Katsantonis, M.; Fouliras, P.; Mavridis, I. Exploring the Architectural Composition of Cyber Ranges: A Systematic Review. Future Internet 2024, 16, 231. [Google Scholar] [CrossRef]
  42. Ni, T.; Zhang, X.; Zuo, C.; Li, J.; Yan, Z.; Wang, W.; Xu, W.; Luo, X.; Zhao, Q. Uncovering User Interactions on Smartphones via Contactless Wireless Charging Side Channels. In Proceedings of the 2023 IEEE Symposium on Security and Privacy (S&P), San Francisco, CA, USA, 22–24 May 2023; pp. 3399–3415. [Google Scholar]
  43. Huang, W.; Chen, H.; Cao, H.; Ren, J.; Jiang, H.; Fu, Z.; Zhang, Y. Manipulating Voice Assistants Eavesdropping via Inherent Vulnerability Unveiling in Mobile Systems. IEEE Trans. Mob. Comput. 2024, 23, 11549–11563. [Google Scholar] [CrossRef]
  44. Ni, T.; Lan, G.; Wang, J.; Zhao, Q.; Xu, W. Eavesdropping Mobile App Activity via Radio-Frequency Energy Harvesting. In Proceedings of the 32nd USENIX Security Symposium, Anaheim, CA, USA, 9–11 August 2023; pp. 3511–3528. [Google Scholar]
  45. Ni, T.; Zhang, X.; Zhao, Q. Recovering Fingerprints from In-Display Fingerprint Sensors via Electromagnetic Side Channel. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS), Copenhagen, Denmark, 26–30 November 2023; pp. 1–14. [Google Scholar]
  46. Cao, H.; Liu, D.; Jiang, H.; Luo, J. MagSign: Harnessing Dynamic Magnetism for User Authentication on IoT Devices. IEEE Trans. Mob. Comput. 2024, 23, 597–611. [Google Scholar] [CrossRef]
  47. Cao, H.; Xu, G.; He, Z.; Shi, S.; Xu, S.; Wu, C.; Ning, J. Unveiling the Superiority of Unsupervised Learning on GPU Cryptojacking Detection: Practice on Magnetic Side Channel-Based Mechanism. IEEE Trans. Inf. Forensics Secur. 2025, 20, 4874–4889. [Google Scholar] [CrossRef]
  48. Cao, H.; Liu, D.; Jiang, H.; Cai, C.; Zheng, T.; Lui, J.C.S.; Luo, J. HandKey: Knocking-Triggered Robust Vibration Signature for Keyless Unlocking. IEEE Trans. Mob. Comput. 2024, 23, 520–534. [Google Scholar] [CrossRef]
  49. Cao, H.; Jiang, H.; Liu, D.; Wang, R.; Min, G.; Liu, J.; Dustdar, S.; Lui, J.C.S. LiveProbe: Exploring Continuous Voice Liveness Detection via Phonemic Energy Response Patterns. IEEE Internet Things J. 2023, 10, 7215–7228. [Google Scholar] [CrossRef]
  50. Katsantonis, M.N.; Manikas, A.; Mavridis, I.; Gritzalis, D. Cyber range design framework for cyber security education and training. Int. J. Inf. Secur. 2023, 22, 1005–1027. [Google Scholar] [CrossRef]
  51. Lieskovan, T.; Hajný, J. Building Open Source Cyber Range To Teach Cyber Security. In Proceedings of the 16th International Conference on Availability, Reliability and Security (ARES’21), Vienna, Austria, 17–20 August 2021; ACM: New York, NY, USA, 2021; pp. 1–11. [Google Scholar] [CrossRef]
  52. Yamin, M.M.; Katt, B. Modeling and executing cyber security exercise scenarios in cyber ranges. Comput. Secur. 2022, 116, 102635. [Google Scholar] [CrossRef]
  53. Friedl, S.; Glas, M.; Englbrecht, L.; Böhm, F.; Pernul, G. ForCyRange: An Educational IoT Cyber Range for Live Digital Forensics. In Information Security Education—Adapting to the Fourth Industrial Revolution; Drevin, L., Miloslavskaya, N., Leung, W.S., von Solms, S., Eds.; WISE 2022; Springer: Cham, Switzerland, 2022; pp. 77–91. [Google Scholar] [CrossRef]
  54. Balto, K.E.; Yamin, M.M.; Shalaginov, A.; Katt, B. Hybrid IoT Cyber Range. Sensors 2023, 23, 3071. [Google Scholar] [CrossRef]
  55. Priyadarshini, I. Features and Architecture of The Modern Cyber Range: A Qualitative Analysis and Survey. Ph.D. Thesis, Cornell University, Ithaca, NY, USA, 2018. [Google Scholar]
  56. Yang, H.; Chen, T.; Bai, Y.; Li, F.; Li, M.; Yang, R. Research and Implementation of User Behavior Simulation Technology Based on Power Industry Cyber Range. In Proceedings of the 2021 IEEE 2nd International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA), Chongqing, China, 17–19 December 2021; Volume 2, pp. 284–287. [Google Scholar] [CrossRef]
  57. Smyrlis, M.; Somarakis, I.; Spanoudakis, G.; Hatzivasilis, G.; Ioannidis, S. CYRA: A Model-Driven CYber Range Assurance Platform. Appl. Sci. 2021, 11, 5165. [Google Scholar] [CrossRef]
  58. Tagarev, T.; Stoianov, N.; Sharkov, G.; Yanakiev, Y. AI-driven Cybersecurity Solutions, Cyber Ranges for Education & Training, and ICT Applications for Military Purposes. Inf. Secur. 2021, 50, 5–8. [Google Scholar] [CrossRef]
  59. Russo, E.; Ribaudo, M.; Orlich, A.; Longo, G.; Armando, A. Cyber Range and Cyber Defense Exercises: Gamification Meets University Students. In Proceedings of the 2nd International Workshop on Gamification in Software Development, Verification, and Validation (Gamify 2023), San Francisco, CA, USA, 4 December 2023; ACM: New York, NY, USA, 2023; pp. 29–37. [Google Scholar] [CrossRef]
  60. Diakoumakos, J.; Chaskos, E.; Kolokotronis, N.; Lepouras, G. Cyber-Range Federation and Cyber-Security Games: A Gamification Scoring Model. In Proceedings of the 2021 IEEE International Conference on Cyber Security and Resilience (CSR), Rhodes, Greece, 26–28 July 2021; pp. 186–191. [Google Scholar] [CrossRef]
  61. Chaskos, E.; Diakoumakos, J.; Kolokotronis, N.; Lepouras, G. Gamification Mechanisms in Cyber Range and Cyber Security Training Environments: A Review, 1; IGI Global Scientific Publishing: Hershey, PA, USA, 2022; ISBN 9781668442913. [Google Scholar] [CrossRef]
  62. Tizio, G.D.; Massacci, F.; Allodi, L.; Dashevskyi, S.; Mirkovic, J. An Experimental Approach for Estimating Cyber Risk: A Proposal Building upon Cyber Ranges and Capture the Flags. In Proceedings of the 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Genoa, Italy, 7–11 September 2020; pp. 56–65. [Google Scholar] [CrossRef]
  63. Hannay, J.E.; Stolpe, A.; Yamin, M.M. Toward AI-Based Scenario Management for Cyber Range Training. In Proceedings of the HCI International 2021—Late Breaking Papers: Multimodality, eXtended Reality, and Artificial Intelligence, Gothenburg, Sweden, 22–27 June 2021; Stephanidis, C., Kurosu, M., Chen, J.Y.C., Fragomeni, G., Streitz, N., Konomi, S., Degen, H., Ntoa, S., Eds.; Springer: Cham, Switzerland, 2021; pp. 423–436. [Google Scholar] [CrossRef]
  64. Sánchez, P.M.; Nespoli, P.; Alfaro, J.G.; Mármol, F.G. Methodology for Automating Attacking Agents in Cyber Range Training Platforms. In Proceedings of the Secure and Resilient Digital Transformation of Healthcare, Bergen, Norway, 25 November 2024; Abie, H., Gkioulos, V., Katsikas, S., Pirbhulal, S., Eds.; Springer: Cham, Switzerland, 2024; pp. 90–109. [Google Scholar] [CrossRef]
  65. Lieskovan, T.; Kohout, D.; Frolka, J. Cyber range scenario for smart grid security training. Elektrotech. Inftech. 2023, 140, 452–459. [Google Scholar] [CrossRef]
  66. Srinivas, K.S.; Suhas, M.; Srinath, P.; Sneha, K.C.; Narayan, D.G.; Somashekhar, P. CRaaS: Cyber Range as a Service. In Proceedings of the Innovations in Electrical and Electronic Engineering, New Delhi, India, 8–9 January 2022; Mekhilef, S., Shaw, R.N., Siano, P., Eds.; Springer: Singapore, 2022; pp. 565–576. [Google Scholar] [CrossRef]
  67. Damianou, A.; Mazi, M.S.; Rizos, G.; Voulgaridis, A.; Votis, K. Situational Awareness Scoring System in Cyber Range Platforms. In Proceedings of the 2024 IEEE International Conference on Cyber Security and Resilience (CSR), London, UK, 2–4 September 2024; pp. 520–525. [Google Scholar] [CrossRef]
  68. Lazarov, W.; Janek, S.; Martinasek, Z.; Fujdiak, R. Event-based Data Collection and Analysis in the Cyber Range Environment. In Proceedings of the 19th International Conference on Availability, Reliability and Security (ARES’24), Vienna, Austria, 30 July–2 August 2024; ACM: New York, NY, USA, 2024; pp. 1–8. [Google Scholar] [CrossRef]
  69. Park, M.; Lee, H.; Kim, Y.; Kim, K.; Shin, D. Design and Implementation of Multi-Cyber Range for Cyber Training and Testing. Appl. Sci. 2022, 12, 12546. [Google Scholar] [CrossRef]
  70. Xie, J.; Zhang, C.; Lou, F.; Cui, Y.; An, L.; Wang, L. High-Speed File Transferring Over Linux Bridge for QGA Enhancement in Cyber Range. In Artificial Intelligence and Security; Sun, X., Pan, Z., Bertino, E., Eds.; Springer International Publishing: Cham, Switzerland, 2019; Volume 11635, pp. 452–462. [Google Scholar] [CrossRef]
  71. Vekaria, K.B.; Calyam, P.; Wang, S.; Payyavula, R.; Rockey, M.; Ahmed, N. Cyber Range for Research-Inspired Learning of “Attack Defense by Pretense” Principle and Practice. IEEE Trans. Learn. Technol. 2021, 14, 322–337. [Google Scholar] [CrossRef]
  72. Hatzivasilis, G.; Ioannidis, S.; Smyrlis, M.; Spanoudakis, G.; Frati, F.; Braghin, C.; Damiani, E.; Koshutanski, H.; Tsakirakis, G.; Hildebrandt, T.; et al. The THREAT-ARREST Cyber Range Platform. In Proceedings of the 2021 IEEE International Conference on Cyber Security and Resilience (CSR), Rhodes, Greece, 26–28 July 2021; pp. 422–427. [Google Scholar] [CrossRef]
  73. Albaladejo-González, M.; Strukova, S.; Ruipérez-Valiente, J.A.; Gómez Mármol, F.L. Exploring the Affordances of Multimodal Data to Improve Cybersecurity Training with Cyber Range Environments. In Colección Jornadas y Congresos; Serrano, M.A., Fernández-Medina, E., Alcaraz, C., Castro, N.D., Calvo, G., Eds.; Ediciones de la Universidad de Castilla-La Mancha: Cuenca, Spain, 2021. [Google Scholar] [CrossRef]
  74. Karjalainen, M.; Kokkonen, T. Comprehensive Cyber Arena; The Next Generation Cyber Range. In Proceedings of the 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Genoa, Italy, 16–18 June 2020; pp. 11–16. [Google Scholar] [CrossRef]
  75. Sharkov, G.; Todorova, C.; Koykov, G.; Zahariev, G. A System-of-Systems Approach for the Creation of a Composite Cyber Range for Cyber/Hybrid Exercising. Inf. Secur. 2021, 50, 129–148. [Google Scholar] [CrossRef]
  76. Linardos, V. Development of a Cyber Range Platform. Master’s Thesis, University of Piraeus, Piraeus, Greece, 2021. [Google Scholar] [CrossRef]
  77. Hätty, N. Representing Attacks in a Cyber Range. Master’s Thesis, Linköping University, Linköping, Sweden, 2019. [Google Scholar]
  78. Wang, Y. An Attribution Method for Alerts in an Educational Cyber Range Based on Graph Database. Master’s Thesis, KTH Royal Institute of Technology, Stockholm, Sweden, 2023. [Google Scholar]
  79. Orbinato, V. A next-generation platform for Cyber Range-as-a-Service. In Proceedings of the 2021 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), Wuhan, China, 25–28 October 2021; pp. 314–318. [Google Scholar] [CrossRef]
  80. Ficco, M.; Palmieri, F. Leaf: An open-source cybersecurity training platform for realistic edge-IoT scenarios. J. Syst. Archit. 2019, 97, 107–129. [Google Scholar] [CrossRef]
  81. Lateș, I. Automating Cyber-Range Virtual Networks Deployment Using Open-Source Technologies (Chef Software). Econ. Inform. 2020, 20, 36–43. [Google Scholar]
  82. Kim, I.; Park, M.; Lee, H.J.; Jang, J.; Lee, S.; Shin, D. A Study on the Multi-Cyber Range Application of Mission-Based Cybersecurity Testing and Evaluation in Association with the Risk Management Framework. Information 2024, 15, 18. [Google Scholar] [CrossRef]
  83. Gustafsson, T.; Almroth, J. Cyber Range Automation Overview with a Case Study of CRATE. In Secure IT Systems; Asplund, M., Nadjm-Tehrani, S., Eds.; Springer International Publishing: Cham, Switzerland, 2021; Volume 12556, pp. 192–209. [Google Scholar] [CrossRef]
  84. Luise, A.; Perrone, G.; Perrotta, C.; Romano, S.P. On-demand deployment and orchestration of Cyber Ranges in the Cloud. In Proceedings of the ITASEC 2021: Italian Conference on Cyber Security, Online, 7–9 April 2021. [Google Scholar]
  85. Guerrero, G.; Betarte, G.; Campo, J.D. Tectonic: An Academic Cyber Range. In Proceedings of the 2024 IEEE Biennial Congress of Argentina (ARGENCON), San Nicolás de los Arroyos, Argentina, 18–20 September 2024; pp. 1–8. [Google Scholar] [CrossRef]
  86. Costa, G.; Russo, E.; Armando, A. Automating the Generation of Cyber Range Virtual Scenarios with VSDL. JOWUA 2022, 13, 61–80. [Google Scholar] [CrossRef]
  87. Jaduš, B. Web Interface for Adaptive Training at the KYPO Cyber Range Platform. Master’s Thesis, Masaryk University, Brno, Czech Republic, 2021. [Google Scholar]
  88. Mahmoud, R.V.; Anagnostopoulos, M.; Pedersen, J.M. Detecting Cyber Attacks through Measurements: Learnings from a Cyber Range. IEEE Instrum. Meas. Mag. 2022, 25, 31–36. [Google Scholar] [CrossRef]
  89. Oikonomou, N.; Mengidis, N.; Spanopoulos-Karalexidis, M.; Voulgaridis, A.; Merialdo, M.; Raisr, I.; Hanson, K.; de La Vallee, P.; Tsikrika, T.; Vrochidis, S.; et al. ECHO Federated Cyber Range: Towards Next-Generation Scalable Cyber Ranges. In Proceedings of the 2021 IEEE International Conference on Cyber Security and Resilience (CSR), Rhodes, Greece, 26–28 July 2021; pp. 403–408. [Google Scholar] [CrossRef]
  90. Russo, E.; Costa, G.; Armando, A. Building next generation Cyber Ranges with CRACK. Comput. Secur. 2020, 95, 101837. [Google Scholar] [CrossRef]
  91. Jiang, H.; Choi, T.; Ko, R.K.L. Pandora: A Cyber Range Environment for the Safe Testing and Deployment of Autonomous Cyber Attack Tools. In Proceedings of the Security in Computing and Communications, Online, 14–17 October 2020; Thampi, S.M., Wang, G., Rawat, D.B., Ko, R., Fan, C.I., Eds.; Springer: Singapore, 2021; pp. 1–20. [Google Scholar] [CrossRef]
  92. Virág, C.; Čegan, J.; Lieskovan, T.; Merialdo, M. The Current State of The Art and Future of European Cyber Range Ecosystem. In Proceedings of the 2021 IEEE International Conference on Cyber Security and Resilience (CSR), Rhodes, Greece, 26–28 July 2021; pp. 390–395. [Google Scholar] [CrossRef]
  93. He, Y.; Yan, L.; Liu, J.; Bai, D.; Chen, Z.; Yu, X.; Gao, D.; Zhu, J. Design of Information System Cyber Security Range Test System for Power Industry. In Proceedings of the 2019 IEEE Innovative Smart Grid Technologies—Asia (ISGT Asia), Chengdu, China, 21–24 May 2019; pp. 1024–1028. [Google Scholar] [CrossRef]
  94. Capone, D.; Caturano, F.; Delicato, A.; Perrone, G.; Romano, S.P. Dockerized Android: A container-based platform to build mobile Android scenarios for Cyber Ranges. In Proceedings of the 2022 International Conference on Electrical, Computer and Energy Technologies (ICECET), Prague, Czech Republic, 20–22 July 2022; pp. 1–9. [Google Scholar] [CrossRef]
  95. Grimaldi, A.; Ribiollet, J.; Nespoli, P.; Garcia-Alfaro, J. Toward Next-Generation Cyber Range: A Comparative Study of Training Platforms. In Proceedings of the Computer Security. ESORICS 2023 International Workshops, The Hague, The Netherlands, 25–29 September 2023; Katsikas, S., Abie, H., Ranise, S., Verderame, L., Cambiaso, E., Ugarelli, R., Praça, I., Li, W., Meng, W., Furnell, S., et al., Eds.; Springer: Cham, Switzerland, 2024; pp. 271–290. [Google Scholar] [CrossRef]
  96. Farhat, H. Design and Development of the Back-End Software Architecture for a Hybrid Cyber Range. Master’s Thesis, Politecnico di Torino, Torino, Italy, 2021. [Google Scholar]
  97. Beuran, R.; Zhang, Z.; Tan, Y. AWS EC2 Public Cloud Cyber Range Deployment. In Proceedings of the 2022 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Genoa, Italy, 6–10 June 2022; pp. 433–441. [Google Scholar] [CrossRef]
  98. Fu, Y.; Han, W.; Yuan, D. Orchestrating Heterogeneous Cyber-range Event Chains With Serverless-container Workflow. In Proceedings of the 2022 30th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), Nice, France, 18–20 October 2022; pp. 97–104. [Google Scholar] [CrossRef]
  99. Grigoriadis, A.; Darra, E.; Kavallieros, D.; Chaskos, E.; Kolokotronis, N.; Bellekens, X. Cyber Ranges: The New Training Era in the Cybersecurity and Digital Forensics World. In Technology Development for Security Practitioners; Akhgar, B., Kavallieros, D., Sdongos, E., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 97–117. [Google Scholar] [CrossRef]
  100. Lateș, I.; Boja, C. Cyber Range Technology Stack Review. In Proceedings of the Education, Research and Business Technologies, Bucharest, Romania, 26–27 May 2022; Ciurea, C., Pocatilu, P., Filip, F.G., Eds.; Springer: Singapore, 2023; pp. 25–40. [Google Scholar] [CrossRef]
  101. Chng, B.; Ng, B.; Roomi, M.M.; Mashima, D.; Lou, X. CRaaS: Cloud-based Smart Grid Cyber Range for Scalable Cybersecurity Experiments and Training. In Proceedings of the 2024 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), Oslo, Norway, 17–20 September 2024; pp. 333–339. [Google Scholar] [CrossRef]
  102. Dracotus, X. Automated Cloud Cyber Range Deployments. Master’s Thesis, University of the Aegean, Department of Information and Communication Systems Engineering, Mitilini, Greece, 2021. [Google Scholar]
  103. Seem, J.A. Towards a Scenario Ontology for the Norwegian Cyber Range. Master’s Thesis, NTNU, Trondheim, Norway, 2020. [Google Scholar]
Figure 1. The number of research papers published on cyber ranges between 2014 and 2024, visualizing the increase in publications over time. A search using the boolean expression [“cyber range” OR “cyberrange” OR “cyber-range”] was last conducted on DTU Findit (https://findit.dtu.dk) and Google Scholar (https://scholar.google.com) on 16 November 2024. The x-axis shows the publication years and the y-axis indicates the total number of papers identified each year.
Figure 2. Flow diagram for the PRISMA methodology (Step 1).
Figure 3. Flow diagram for the PRISMA methodology (Step 2).
Figure 4. Baseline taxonomy at the end of Phase I. The Technology dimension was intentionally left without characteristics at this phase; Econometrics was retained for re-evaluation in the next phase.
Figure 5. Proposed taxonomy (Phase III) with the seven retained dimensions: Scenario, Environment, Teaming, Learning, Monitoring, Management, and Technology, together with every characteristic and sub-characteristic consolidated in Phase III.
Figure 6. Interactive tree visualization of the taxonomy. Each node is color-coded based on its category to improve readability and navigation.
Table 1. Inclusion and exclusion criteria applied during Phase I and Phase II + III screening.
Phase | Criterion | Description
Phase I | IC1 | Papers on cyber ranges (CRs) that conducted a systematic literature review (SLR)
Phase I | IC2 | Papers on CRs that proposed or analyzed a taxonomy
Phase I | EC1 | Publications written in languages other than English
Phase I | EC2 | Papers published before 2014
Phase I | EC3 | Papers inaccessible due to a paywall
Phase I | EC4 | Opinion pieces, editorials, and other non-academic articles
Phase II + III | IC1 | Papers on CR architectures, scenarios, functions, and tools
Phase II + III | EC1 | Publications written in languages other than English
Phase II + III | EC2 | Papers published before 2019
Phase II + III | EC3 | Papers inaccessible due to a paywall
Phase II + III | EC4 | Papers that discussed testbeds outside the CR context
Phase II + III | EC5 | Opinion pieces, editorials, and other non-academic articles
Table 2. Comparative summary of dimensions across existing cyber range taxonomies.
Dimension | Yamin et al. [11] | Ukwandu et al. [12] | Knüpfer et al. [30] | Russo et al. [13] | Glas et al. [31]
Scenario | Included | Included | Under Training Environment | Under Training Environment | Included
Monitoring | Included | Included | Not explicitly specified | Under Management | Included
Learning | Included | Included | Under Audience (Proficiency Level) | Not explicitly specified | Included
Management | Included | Included | Not explicitly specified | Included | Included
Teaming | Included | Included | Not explicitly specified | Not explicitly specified | Included implicitly
Environment | Included | Included | Under Training Environment | Under Training Environment (Mobile-specific) | Included implicitly
Technology | Not included explicitly | Not included explicitly | Not explicitly specified | Under Mobile-specific Technology | Explicitly included
Type | Not included | Included | Not explicitly specified | Not explicitly specified | Included
Econometrics | Not included | Included | Not explicitly specified | Not explicitly specified | Not explicitly specified
Recovery | Not included | Included | Not explicitly specified | Not explicitly specified | Included
Audience | Not explicitly specified | Not explicitly specified | Explicitly included | Not explicitly specified | Not explicitly specified
Testbeds | Integrated within Environment | Explicitly included | Not explicitly specified | Explicitly included (Mobile-specific) | Explicitly included
Table 3. Overview of reviewed studies on cyber ranges.
Authors/Title | Focus/Contribution
Chouliaras et al. [2] (2021)
Cyber Ranges and TestBeds for Education, Training, and Research
Explored rising demand for cybersecurity professionals; surveyed ten cyber ranges (2011–2021) to examine design, implementation, and operations.
Bergan and Ruud [32] (2021)
Otnetic: A Cyber-Range Training Platform Developed for the Norwegian Energy Sector
SLR reinforced exercise design and gamification practices for IT/OT staff in the energy sector; collaboration with NC-Spectrum.
Pavlova [33] (2021)
Implementation of Federated Cyber Ranges in Bulgarian Universities: Challenges, Requirements, and Opportunities
Assessed EU regulatory context; proposed how federated ranges integrate into academic curricula.
Oruc et al. [34] (2022)
Towards a Cyber-Physical Range for the Integrated Navigation System (INS)
Marine-system focus: evaluated cyber–physical testbeds for navigation hardware and emerging security solutions.
Ear et al. [35] (2022)
Towards Automated Cyber Range Design: Characterising and Matching Demands to Supplies
Analyzed 45 architectures; introduced a three-dimension requirements framework (Purpose, Scope, Constraints) to match designs to organizational needs.
Evans and Purdy [36] (2022)
Architectural Development of a Cyber-Physical Manufacturing Range
Concluded that manufacturing contexts need dedicated cyber–physical ranges; contrasted these with the broader taxonomy of [11].
Päijänen [37] (2023)
Pre-Assessing Cyber-Range-Based Event Participants’ Needs and Objectives
Showed that pre-assessment of participant goals improves exercise design and learning outcomes.
Bistene et al. [38] (2023)
Modelling Network Traffic Generators for Cyber Ranges: A Systematic Literature Review
Classified traffic generators (model, trace, hybrid) and critiqued validation practices across 30 studies.
Shamaya and Tarcheh [39] (2023)
Strengthening Cyber Defence: A Comparative Study of Smart-Home Infrastructure for Penetration Testing and National Cyber Ranges
Built an IoT penetration-testing environment and conducted an SLR on national cyber range infrastructures; offered building guidance.
Boström and Hylander [40] (2023)
Selecting Better Attacks for Cyber Defence Exercises
Conducted SLR plus expert interviews; proposed a five-step process (selection, implementation, automation, deployment, evaluation) for realistic attack scenarios.
Stamatopoulos et al. [41] (2023)
Exploring the Architectural Composition of Cyber Ranges: A Systematic Review
Focused on cyber–physical range architectures; identified design challenges and research gaps.
Table 4. Characteristics of Scenario dimension.
Characteristic | Sub-Characteristic | Description | Refs
Storyline | - | Sequence of incidents (e.g., data disclosure, DDoS, system manipulation) that structure the exercise narrative. | [12,53]
Purpose | Collaboration, Awareness, Skill Development | High-level intent of the scenario; collaboration is rarely mentioned, while awareness and skill development are common. | Own analysis
Scenario Type | Dynamic, Static, Security Testing, Experimentation, Education | Typology of scenarios based on structure and objectives. | Own analysis
Tools | - | Collection of hardware and software components used to simulate real-world environments. | [11,13,54]
Domain | - | Technological scope of the scenario, e.g., IoT, SCADA, Forensics, Web Security. | [55]
Sector | - | Industry context in which the scenario is applied, e.g., Energy, Maritime, Defence. | [56,57,58]
Gamification | - | Competitive and game-like elements embedded into scenario design. | [59,60,61]
Table 5. Characteristics of Environment dimension.
Characteristic | Sub-Characteristic | Description | Refs
Environment Type | Simulation | A simplified representation of a system designed to mimic general behaviors. | [22]
Environment Type | Emulation | A more detailed replication including real software or hardware to reflect operational systems. | [4,22]
Environment Type | Hybrid | A combined setup using both simulated and emulated components. | [22]
Environment Type | Physical | A tangible and often lab-based infrastructure used to mimic real-world systems. | [20,23,24]
Environment Type | Hardware | Uses physical devices such as routers and switches for realism. | [25]
Environment Type | Federation | Links multiple cyber ranges to create distributed or large-scale testbeds. | [21]
Generation Tools | Traffic Generation | Produces background or simulated user activity (e.g., HTTP requests, clicks, logs). | [14,15,16]
Generation Tools | Attack Generation | Automates or simulates known real-world attacks such as Stuxnet or Havex. | [17,18]
Generation Tools | User Emulation | Simulates typical user activity to produce background noise or interaction patterns. | [19,20,21]
Table 6. Characteristics of Econometrics dimension.
Characteristic | Sub-Characteristic | Description | Refs
Threat Forecasting | - | Predicting the likelihood, timing, and financial consequences of future cyber-attacks. | [12,62]
Choices and Implications | - | Evaluating cost–benefit trade-offs, such as investing in cyber defence tools versus accepting residual risk. | [12,62]
Mitigation Impact | - | Analyzing the cost-efficiency of different defensive strategies under various threat conditions. | [12,62]
Assessment Models | - | Combining data from risk assessments, threat forecasts, and incidents to justify investment or gauge resilience. | [12,62]
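The cost–benefit reasoning behind the Choices and Implications characteristic can be made concrete with a standard annualized-loss-expectancy (ALE) comparison. The sketch below is purely illustrative; all monetary figures and occurrence rates are invented, not drawn from the reviewed studies.

```python
# Illustrative sketch only: compares "invest in a control" against
# "accept residual risk" using annualized loss expectancy (ALE).
# All figures below are invented for demonstration purposes.

def ale(single_loss: float, annual_rate: float) -> float:
    """ALE = single loss expectancy x annualized rate of occurrence."""
    return single_loss * annual_rate

baseline = ale(single_loss=200_000, annual_rate=0.30)      # expected yearly loss, no control
with_control = ale(single_loss=200_000, annual_rate=0.05)  # control reduces occurrence rate
control_cost = 25_000                                      # yearly cost of the control

# A positive net benefit favors investing in the control.
net_benefit = baseline - with_control - control_cost
```

With these assumed numbers, the expected yearly loss drops from 60,000 to 10,000, so the control pays for itself; changing either rate flips the conclusion, which is exactly the trade-off the Econometrics dimension captures.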
Table 7. Characteristics of Teaming dimension.
Characteristic | Sub-Characteristic | Description | Refs
Team Type | Red Team | Offensive actors who simulate attacks; can be human participants or system-generated. | [11,12,13,30]
Team Type | Blue Team | Defensive participants, typically the main trainees, focused on detecting and mitigating attacks. | [11,12,13,30]
Team Type | Yellow Team | Roles that increase scenario realism (e.g., simulated users, IT staff); may be played or emulated. | [11]
Team Type | White Team | Exercise facilitators, observers, and evaluators responsible for control and oversight. | [11]
Team Type | Green Team | Technical support team maintaining infrastructure during the exercise. | [11]
Team Type | Purple Team | Integrates Red and Blue team insights to improve tactical and strategic collaboration. | [11]
Agent Type | AI-Based Agents | Machine learning-driven systems that dynamically plan, adapt, or execute scenarios. | [63]
Agent Type | Autonomous Agents | Rule- or script-based agents that simulate behavior using deterministic logic (e.g., APT generators). | [64]
Table 8. Characteristics of Learning dimension.
Characteristic | Sub-Characteristic | Description | Refs
After-Action Analysis | - | Structured post-exercise review of participant activity, often tied to debriefing and performance evaluation. | [11,51,54,65]
Tutoring | - | Educational support, including documents, interactive hints, and training portals to scaffold participant learning. | [66]
Scoring | Type, Method | Performance evaluation based on impacts to Confidentiality, Integrity, and Availability; may include live or dynamic scoring dashboards. Scoring systems are often implied but rarely defined: existing models include penalty-based and mathematical evaluations, which represents a gap in the current literature. | [57,60,67]
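Since the reviewed literature implies penalty-based scoring models without defining them, a minimal sketch may help illustrate the idea. The CIA weights, severity scale, and starting score below are invented for illustration and do not come from any reviewed platform.

```python
# Hypothetical penalty-based scoring sketch: each impact event subtracts a
# weighted penalty from a starting score. Weights and severities are invented.

CIA_WEIGHTS = {"confidentiality": 0.4, "integrity": 0.35, "availability": 0.25}

def score(base: float, events: list[tuple[str, float]]) -> float:
    """Subtract a weighted penalty per event; severity is in [0, 1]; clamp at zero."""
    total = base
    for dimension, severity in events:
        total -= base * CIA_WEIGHTS[dimension] * severity
    return max(total, 0.0)

# A trainee starting at 100 points who allows a full confidentiality breach
# (severity 1.0) and a partial availability loss (severity 0.5):
final = score(100.0, [("confidentiality", 1.0), ("availability", 0.5)])
```

More elaborate schemes from the literature add live dashboards or time-decaying penalties on top of this basic subtraction model.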
Table 9. Characteristics of Monitoring dimension.
Characteristic | Sub-Characteristic | Description | Refs
Method | Event-based, CPU, Filesystem | Describes how systems or participants are monitored, including event-driven alerts, system resource tracking, and file system changes. | [68,69]
Dashboard | Kibana, Custom UI | Interfaces used to visualize metrics such as attack indicators, participant scores, and resource usage. | [31,70]
Layer | Network, Application, Hardware, Social | Specifies the layer of the system or user activity being monitored, from low-level infrastructure to social/behavioral layers. | [11]
Performance | - | Includes monitoring of CPU/memory performance and user progress or behaviors within the scenario. | [71,72]
Progress | Actions, Inputs, Path, Teams, Users | Tracks participants’ journey through exercises, including their discrete actions, inputs, decision paths, team dynamics, and scoring. | [59,73]
Tools | - | Frameworks ranging from OpenStack’s dashboard and the ELK stack to custom-built solutions; some implementations rely on bespoke components designed to capture data from complex system interactions. | [74]
Table 10. Characteristics of Management dimension.
Characteristic | Sub-Characteristic | Description | Refs
Resource | Data Storage | Storing scenario models, logs, and backups. | [35,76]
Resource | Metadata | Configuration details about environments and exercises. | [77,78]
Lifecycle Management | Creation | Building or cloning infrastructures, including digital twins. | [79]
Lifecycle Management | Editing | Modifying existing scenarios, though less commonly detailed in the literature. | [80]
Lifecycle Management | Deployment | Automating the rollout of scenarios or network nodes. | [12,81]
Lifecycle Management | Execution | Starting, pausing, and stopping exercises. | [82]
Lifecycle Management | Recovery | Cleaning up or resetting systems post-exercise. | [12,83]
Lifecycle Management | Generation | Orchestrating attacks, traffic, and events; emphasizes when and how these are triggered. | [11]
Remote Access | - | Enables administrative or authorized user control of the range infrastructure remotely. | [11,12,84]
Role | - | Defines user types and permissions (e.g., Instructor, Student, Admin). | [57,81,85,86]
Management Interface | - | Dashboards and portals used to manage and monitor the range. | [87,88]
Command and Control | - | Low-level engines or interfaces that allow direct command execution within the range. | [11,75]
Table 11. Characteristics of Technology dimension.
Characteristic | Sub-Characteristic | Description | Refs
Connection Technologies | - | Secure data links between cyber range components (e.g., VPN tunnels, encrypted TCP/IP stacks, peer-to-peer meshes). | [92]
Virtualization Layer | Virtual Machines | Hypervisor-based VMs that isolate multiple OS instances on shared hardware. | [93]
Virtualization Layer | Containerization | OS-level virtualization such as Docker or LXC enabling rapid, lightweight instantiation. | [94]
Virtualization Layer | Cloud Virtualization | Public, private, or hybrid IaaS provisioning on platforms like AWS or Azure; includes SaaS access to pre-built ranges. | [93,94]
System Architecture | Monolithic | All services deployed as a single unit; suitable for small deployments but less scalable. | [95]
System Architecture | Microservice | Disaggregated services offering elasticity at the cost of orchestration overhead. | [95]
System Architecture | Federation | Linking geo-distributed ranges via VPN or SD-WAN for joint exercises. | [92]
Artificial Intelligence | ML/DL | Machine and deep learning pipelines for threat detection and digital twin generation. | [96]
Artificial Intelligence | Large Language Models | Emerging use of LLMs for log summarization and scenario narration. | [62]
Tools | Security | Offensive/defensive toolsets such as IDS (Snort), DPI modules, and honeypots. | [97]
Tools | Monitoring | Telemetry and scoring frameworks (Zeek, ELK, Grafana) providing runtime visibility. | [94]
Tools | Orchestration | Infrastructure-as-Code and lifecycle-automation frameworks (TOSCA, CRACK, Kubernetes). | [98]
Tools | Database | Persistent or in-memory stores for scenario artifacts and telemetry (relational, document, vector, Redis). | [99]
Table 13. Example structure of the Literature sheet.
ID | Title | Author | Date | URL
1 | Reference Title 1 | Author A | 2023-01-01 | https://example.com/1
2 | Reference Title 2 | Author B | 2023-02-15 | https://example.com/2
3 | Reference Title 3 | Author C | 2023-03-20 | https://example.com/3
Table 14. Example structure of the Categories sheet.
Level | Name | Description | ID | Path
0 | Root Category | Description of root category | 1, 2, 3 | Root Category
1 | Subcategory A | Description of subcategory | 4, 5, 6 | Root Category → Subcategory A
2 | Subcategory B | Detailed description | 7, 8, 9 | Root Category → Subcategory A → Subcategory B
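Assuming the Categories sheet encodes each node’s ancestry in its Path column and its mapped literature entries in the ID column, the two sheets can be joined into a simple tree. The sketch below is illustrative only; the Node class, the column keys, and the arrow separator are assumptions about the toolkit’s format, not its actual implementation.

```python
# Illustrative sketch: build a nested taxonomy tree from flat Categories-sheet
# rows. "Path" encodes ancestry ("Root → Child"), and "ID" lists the
# Literature-sheet entries mapped to that node. Column names are assumptions.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    description: str = ""
    literature_ids: list = field(default_factory=list)
    children: dict = field(default_factory=dict)

def build_tree(rows: list[dict]) -> Node:
    """rows: dicts with keys Level, Name, Description, ID, Path."""
    root = Node("Taxonomy")
    for row in sorted(rows, key=lambda r: r["Level"]):  # parents before children
        node = root
        for part in (p.strip() for p in row["Path"].split("→")):
            node = node.children.setdefault(part, Node(part))
        node.description = row["Description"]
        node.literature_ids = [int(i) for i in row["ID"].split(",")]
    return root

# Example rows matching Tables 13 and 14:
rows = [
    {"Level": 0, "Name": "Root Category", "Description": "Description of root category",
     "ID": "1, 2, 3", "Path": "Root Category"},
    {"Level": 1, "Name": "Subcategory A", "Description": "Description of subcategory",
     "ID": "4, 5, 6", "Path": "Root Category → Subcategory A"},
]
tree = build_tree(rows)
```

Resolving the numeric IDs against the Literature sheet then yields, for every taxonomy node, the set of papers that exhibit that characteristic, which is the mapping the toolkit visualizes.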
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Lillemets, P.; Bashir Jawad, N.; Kashi, J.; Sabah, A.; Dragoni, N. A Systematic Review of Cyber Range Taxonomies: Trends, Gaps, and a Proposed Taxonomy. Future Internet 2025, 17, 259. https://doi.org/10.3390/fi17060259


