
Respite for SMEs: A Systematic Review of Socio-Technical Cybersecurity Metrics

Max van Haastrecht, Bilge Yigit Ozkan, Matthieu Brinkhuis and Marco Spruit
Department of Information and Computing Sciences, Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands
Department of Public Health and Primary Care, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333 ZA Leiden, The Netherlands
Leiden Institute of Advanced Computer Science (LIACS), Leiden University, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(15), 6909;
Submission received: 6 July 2021 / Revised: 22 July 2021 / Accepted: 24 July 2021 / Published: 27 July 2021
(This article belongs to the Collection Human Factors in the Digital Society)



Featured Application

The results of this work will be incorporated into an application for SMEs in Europe, which aims to improve cybersecurity awareness and resilience, as part of the EU Horizon 2020 GEIGER project.


Abstract

Cybersecurity threats are on the rise, and small- and medium-sized enterprises (SMEs) struggle to cope with these developments. To combat threats, SMEs must first be willing and able to assess their cybersecurity posture. Cybersecurity risk assessment, generally performed with the help of metrics, provides the basis for an adequate defense. Significant challenges remain, however, especially in the complex socio-technical setting of SMEs. Seemingly basic questions, such as how to aggregate metrics and ensure solution adaptability, are still open to debate. Aggregation and adaptability are vital topics to SMEs, as they require the assimilation of metrics into actionable advice adapted to their situation and needs. To address these issues, we systematically review socio-technical cybersecurity metric research in this paper. We analyse aggregation and adaptability considerations and investigate how current findings apply to the SME situation. To ensure that we provide valuable insights to researchers and practitioners, we integrate our results in a novel socio-technical cybersecurity framework geared towards the needs of SMEs. Our framework allowed us to determine a glaring need for intuitive, threat-based cybersecurity risk assessment approaches for the least digitally mature SMEs. In the future, we hope our framework will help to offer SMEs some deserved respite by guiding the design of suitable cybersecurity assessment solutions.

1. Introduction

In recent times, we have seen a surge in cyber threats that businesses are struggling to cope with [1]. Additionally, the frequency with which cybersecurity incidents occur, and the costs associated with them, are on the rise [2]. Among businesses, small- and medium-sized enterprises (SMEs) are most vulnerable, due to a shortage of cybersecurity knowledge and resources [3]. The vulnerable position of SMEs is being exploited, as witnessed by the large proportion of SMEs that experience cyber incidents [4].
In SME cybersecurity, the interplay between the social and the technical is essential [5], which is why SMEs are often studied from a socio-technical systems (STS) perspective [6]. The view of STS is that joint consideration of social and technical elements is necessary [7]. This view has interesting implications in cybersecurity, where humans are generally found to be the weakest link [8,9].
Due to their lack of resources [3] and the complex socio-technical setting they operate in, SMEs struggle to address their cybersecurity issues autonomously [10]. Before SMEs can begin to improve their cybersecurity posture, it is vital that they first assess their current situation [11]. Assessment of cybersecurity posture is achieved by measuring SME cybersecurity properties; the values resulting from these measurements are cybersecurity metrics. Regardless of whether measurement results are deemed relevant by the SME, the knowledge gained by those involved in the measurement process is of value [12]. This observation touches once more on the socio-technical nature of the problem, where furthering human knowledge and improving the technical cybersecurity posture of an SME go hand-in-hand.
Cybersecurity assessment generally requires the aid of cybersecurity experts—personnel that SMEs typically do not have [9,10]. A solution to this issue is to automate the cybersecurity assessment process where possible [9]. Although automation is a promising approach, the diverse nature of the SME landscape is often ignored [13,14], whereas we know from earlier research that it is vital for SMEs to have solutions adapted to their context and needs [15,16].
Another issue is that cybersecurity assessment approaches aimed at SMEs are still scarce [6], explaining why it is not uncommon to see results from other cybersecurity focus areas being applied to the SME setting [10]. Systematic literature reviews are a logical approach to gather knowledge from one focus area, summarise it, and make it available for use in other focus areas.
Systematic reviews that address both the social and technical sides of cybersecurity already exist [17,18]. These reviews identified a need for adaptable solutions [18], which we have seen are also craved by SMEs. Additionally, these papers stress the need for more clarity on how to aggregate security metrics [17,18]. Given the lack of resources available at SMEs, aggregating information into understandable insights is a requirement for a usable solution [9].
The issue with these systematic reviews is that they offer adaptability and aggregation as areas for future research, rather than addressing the topics head-on. Additionally, they do not provide actionable insights for SMEs since this is not their target audience.
In short, we can conclude that SMEs need (semi-)automated cybersecurity assessment approaches that address their needs for adaptability and aggregation of information. A systematic review offers the potential to gather and summarise such information, providing guidelines for designing usable solutions for SMEs. This motivates the need for a systematic review of cybersecurity metric research, where both the social and technical sides of the puzzle are acknowledged. This is exactly our aim in this paper, as we try to answer the following research questions:
  • RQ1: How are cybersecurity metrics aggregated in socio-technical cybersecurity measurement solutions?
  • RQ2: How do aggregation strategies differ between cybersecurity measurement solutions relevant to SMEs and all other solutions?
    RQ2.1: What are the reasons for these differences?
    RQ2.2: Which aggregation strategies can be used in SME cybersecurity measurement solutions, but currently are not?
  • RQ3: How do cybersecurity measurement solutions deal with the need for adaptability?
In Section 2, we cover related work from several different perspectives to provide a basis for our systematic review. Our systematic review methodology is detailed in Section 3, after which we present our results in Section 4.
To ensure that the insights we gain on aggregation and adaptability are captured in an actionable form, we incorporate them in a novel socio-technical cybersecurity framework geared towards SME needs. Our framework, introduced in Section 5, integrates our systematic review results with existing knowledge to arrive at concise guidelines for what can be expected of various SME categories.
Section 6 focuses on outlining the answers to our research questions, as well as covering limitations and threats to validity. Finally, we conclude in Section 7, additionally outlining potentially fruitful areas for future research.

2. Related Work

Before covering work relating to our socio-technical cybersecurity metric setting, we should be clear on our definition of what constitutes a cybersecurity metric. We make use of the definition of a cyber-system as specified in Refsdal et al. [19]: “A cyber-system is a system that makes use of a cyberspace”. Refsdal et al. [19] define cyberspace as “a collection of interconnected computerized networks, including services, computer systems, embedded processors, and controllers, as well as information in storage or transit”. There is no standard definition of what constitutes a (cyber)security metric [17]. Borrowing ingredients from earlier definitions, we define a cybersecurity metric to be: any value resulting from the measurement of security-related properties of a cyber-system [17,19,20].

2.1. Socio-Technical Cybersecurity

Humans are often considered the weakest link in cybersecurity [21]. It is vital to recognise the interaction of the social and technical sides of cyber-systems when modelling and measuring cybersecurity, which is why the field of STS has played such an important role in cybersecurity metric research [22]. STS research has uncovered the dangers of considering social and technical elements separately [23] and has offered insight into how to avoid these dangers [7].
Recognition of the human factor in cybersecurity goes beyond simply including static human actors. This is where behavioural theories such as protection motivation theory (PMT) and self-determination theory (SDT) come in [24,25]. PMT reserves a prominent role for extrinsic motivators and threat appraisal [26]. SDT includes extrinsic motivation as a central concept but often focuses on moving from extrinsic to increasingly internalised motivation [24]. In the context of SMEs, intrinsic motivation to improve cybersecurity is often hard to find. However, there are solutions to this problem. Committing to improving cybersecurity in an organisation can motivate employees [24]. From the STS perspective, it is common to distinguish between metrics that include the real-life threat environment and those that do not [22]. Threat perception lies at the core of PMT and is important in security applications using SDT [25]. Another solution to promote motivation among SME employees would therefore be to incorporate the real-life threat environment in our cybersecurity metrics. Later in this paper, in Section 4, we describe whether this is indeed something we observe in current research.
We will address the social dimension using the ADKAR model of Hiatt [27]. This model, originating from change management, considers five phases in managing the personal side of change: awareness, desire, knowledge, ability, and reinforcement. ADKAR has previously been applied in assessing information security culture within organisations [28]. We apply ADKAR as a means to classify the socio-technical cybersecurity metrics we encounter. We define a socio-technical cybersecurity metric to be a cybersecurity metric that requires measuring the outcome(s) of the actions of at least one (simulated) human actor. We do not address the technical dimension explicitly in this definition, as the technical dimension is implicit in the term “cybersecurity”. We hypothesise that all socio-technical cybersecurity metrics can be linked to one or more of the ADKAR categories.

2.2. Cybersecurity Metric Reviews

Systematic reviews are common in cybersecurity metric research. However, as Table 1 shows, they are often narrow in scope. Either the focus area is narrow, or the research does not consider social factors. The papers that do cover both social and technical factors often do so passingly, and without covering the intricacies and implications of socio-technical interactions.
Some exceptions are comprehensive and cover both social and technical factors [17,18]. Interestingly, exactly these papers outline that future research should focus on “how to aggregate and to what extent to aggregate” [17]. Additionally, they stress the importance of adaptability, meaning by this “the state of being able to change to work or fit better” [18]. This need for adaptability has been confirmed by experience from practice [46].
We address the acknowledged challenges of aggregation and adaptability head-on in our systematic review, ensuring that our approach is both distinct from earlier work and provides a meaningful contribution to the field. Furthermore, we employ a novel systematic review approach (as outlined in Section 3) and target our analysis to aid SMEs, a group with specific needs often not considered in earlier work.

2.3. Aggregation

In cybersecurity metric research, aggregation strategies vary, although the importance of proper aggregation is widely recognised [17,18]. To discuss different aggregation strategies, we define a mathematical context with an aggregation strategy $S : \mathbb{R}_{\geq 0}^n \to \mathbb{R}_{\geq 0}$, where $\mathbb{R}_{\geq 0}$ is the set of non-negative real numbers. We define metric value variables $x_i$, corresponding to metrics $i = 1, \dots, n$. The metric values are assumed to be non-negative: $x_i \in \mathbb{R}_{\geq 0}\ \forall i$. We assume, without loss of generality, that for each metric a higher metric value corresponds to lower security. A negative relationship between a metric and security is common in the security literature, as it is often the lack of security, or risk, which is being measured.
A desirable property of a strategy $S$ is that it is responsive to changes in metric values. This is captured by the property of injectivity, where we consider a strategy $S$ to be injective when for $a, b \in \mathbb{R}_{\geq 0}$ with $a \neq b$, $S(a, x_2, \dots, x_n) \neq S(b, x_2, \dots, x_n)$. Injectivity implies that a change in a metric value will always result in a change of the aggregate, provided all else remains constant. A stronger requirement would be strict monotonicity of the strategy $S$. Although this property could be desirable in the cybersecurity context, we only consider the less strict injectivity in this paper.
A common property of averages, which constitute a specific branch of aggregation, is idempotence. A strategy $S$ is idempotent when for $a \in \mathbb{R}_{\geq 0}$, $S(a, a, \dots, a) = a$. When an aggregation strategy $S$ is both injective and idempotent, the result of the aggregation always lies between the minimum and the maximum values of all metrics. Both injectivity and idempotence capture what we would intuitively expect of an aggregation strategy, as these are properties satisfied by the Pythagorean means. In this sense, these are desirable properties in the context of SMEs, where cybersecurity knowledge is often lacking. To still allow employees to feel competence and relatedness [25] in the complex cybersecurity setting, we should at least use an aggregation strategy they understand.
Three additional properties are important in the security context. The possibility to prioritise certain metrics over others is desirable [47]. Formally, we consider a strategy to allow for prioritisation when for any $a, b > 0$ with $a \neq b$, there exists a pair $i, j$ with $i \neq j$, such that $S(x_1, \dots, x_i = a, \dots, x_j = b, \dots, x_n) \neq S(x_1, \dots, x_i = b, \dots, x_j = a, \dots, x_n)$.
Strategies should also be able to accommodate dependencies between security metrics. However, it is complicated to include metric dependencies, with some seeing it as “the most challenging task” in aggregation [18]. For strategies in the set D of strategies that satisfy the necessary differentiability properties, we define a strategy S to allow for dependencies, when there exist distinct metrics i, j, and k such that:
$$\frac{\partial^2 S}{\partial x_i \partial x_j} \neq \frac{\partial^2 S}{\partial x_i \partial x_k}. \quad (1)$$
Equation (1) captures the idea that a strategy $S$ allows for dependencies among metrics when it allows for relationships among metrics that are not proportional to other relationships. For aggregation strategies $S \notin D$, we employ the same verbal definition. Care should be taken to adjust the criterion of Equation (1) appropriately where it cannot be applied directly for the strategy $S$.
A last core principle in security is that systems are only as secure as their weakest link [48]. Assuming that we have at least two distinct values among our metrics, there exists a minimum value $x_{\min}$ and a maximum value $x_{\max}$. Since we assume metrics relate negatively to security, $x_{\max}$ corresponds to the weakest link. A strategy $S$ satisfies the weakest link principle if for any $a > 0$, $S(x_{\min} + a, \dots, x_{\max}) \leq S(x_{\min}, \dots, x_{\max} + a)$, and there exists an $\alpha > 0$ such that $S(x_{\min} + \alpha, \dots, x_{\max}) < S(x_{\min}, \dots, x_{\max} + \alpha)$. Thus, weakening the weakest link has more impact than weakening the strongest link by an equal amount.
The most common aggregation strategy employed in the literature is the weighted linear combination (WLC), which can be defined as:
$$S_{WLC}(\mathbf{x}) = \frac{a + \sum_{i=1}^{n} w_i \cdot x_i}{b}, \qquad a \geq 0,\ b > 0,\ w_i > 0\ \forall i. \quad (2)$$
WLC contains the special cases of the weighted sum ($a = 0$, $b = 1$), the weighted average ($a = 0$, $b = \sum_i w_i$), and the arithmetic mean ($a = 0$, $b = n$, $w_i = 1\ \forall i$). WLC strategies are injective and idempotent, and allow for prioritisation through weighting. However, these strategies do not allow for dependencies and do not satisfy the weakest link principle.
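These properties can be checked numerically. The sketch below (function name and default parameters are ours, chosen so the defaults give the arithmetic mean) implements a WLC strategy and verifies idempotence, prioritisation, and the failure of the weakest link principle:

```python
import numpy as np

def s_wlc(x, w=None, a=0.0, b=None):
    """Weighted linear combination: S(x) = (a + sum_i w_i * x_i) / b."""
    x = np.asarray(x, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    b = w.sum() if b is None else b  # b = sum(w), a = 0 yields the weighted average
    return (a + np.dot(w, x)) / b

# Idempotence: aggregating identical values returns that value.
assert np.isclose(s_wlc([0.4, 0.4, 0.4]), 0.4)

# Prioritisation: swapping values across unequal weights changes the result.
w = [3.0, 1.0]
assert not np.isclose(s_wlc([0.9, 0.1], w), s_wlc([0.1, 0.9], w))

# Weakest link fails: adding 0.3 to the minimum or the maximum is indistinguishable.
assert np.isclose(s_wlc([0.2 + 0.3, 0.8]), s_wlc([0.2, 0.8 + 0.3]))
```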
A related set of strategies are the weighted product (WP) strategies:
$$S_{WP}(\mathbf{x}) = a + b \cdot \prod_{i=1}^{n} x_i^{w_i}, \qquad a \geq 0,\ b > 0,\ w_i \in (0,1]\ \forall i. \quad (3)$$
Among the WP strategies are the simple product ($a = 0$, $b = 1$, $w_i = 1\ \forall i$) and the geometric mean ($a = 0$, $b = 1$, $w_i = \frac{1}{n}\ \forall i$). WP strategies satisfy the same properties as WLC strategies, except for idempotence, which they do not satisfy.
Using the weighted maximum (WM), $S_{WM}(\mathbf{x}) = \max\{w_1 \cdot x_1, \dots, w_n \cdot x_n\}$, $w_i > 0\ \forall i$, as the aggregated value is uncommon in most disciplines, since this strategy is not injective. However, it is used in the security field [49], and is in fact an extreme case of satisfying the weakest link principle. WM allows for prioritisation through its weights, although the basic maximum function does not.
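A minimal sketch of the weighted maximum (function name is ours), illustrating both its extreme weakest link behaviour and its lack of injectivity:

```python
def s_wm(x, w=None):
    """Weighted maximum: S(x) = max_i w_i * x_i."""
    w = [1.0] * len(x) if w is None else w
    return max(wi * xi for wi, xi in zip(w, x))

# Weakest link: only raising the maximal (weakest) metric moves the aggregate.
assert s_wm([0.2, 0.8 + 0.3]) > s_wm([0.2 + 0.3, 0.8])

# Not injective: changing a non-maximal metric leaves the aggregate unchanged.
assert s_wm([0.1, 0.8]) == s_wm([0.3, 0.8]) == 0.8
```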
The complementary product is another aggregation strategy that is uncommon outside of the security field [49]. Let $\hat{x}_i$, for $i = 1, 2, \dots, n$, denote the metric value normalised to $[0, 1)$, and let $w_i$ be the weight of metric $i$. We define the weighted complementary product (WCP) class as:
$$S_{WCP}(\mathbf{x}) = a \cdot \left( 1 - \prod_{i=1}^{n} (1 - \hat{x}_i)^{w_i} \right), \qquad a > 0,\ w_i \in (0,1]\ \forall i. \quad (4)$$
The regular complementary product is achieved with $a = 1$ and $w_i = 1\ \forall i$. WCP strategies are injective and can satisfy the prioritisation and weakest link principles, depending on the values of $w_i$.
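The regular complementary product has a natural probabilistic reading: it gives the probability that at least one of $n$ independent events occurs, given their individual probabilities. A short sketch (function name is ours):

```python
import math

def s_wcp(x_hat, w=None, a=1.0):
    """Weighted complementary product over values normalised to [0, 1)."""
    w = [1.0] * len(x_hat) if w is None else w
    prod = math.prod((1.0 - xi) ** wi for xi, wi in zip(x_hat, w))
    return a * (1.0 - prod)

# With a = 1 and unit weights: probability that at least one of three
# independent incidents (with probabilities 0.1, 0.2, 0.3) occurs.
p = s_wcp([0.1, 0.2, 0.3])
assert abs(p - (1 - 0.9 * 0.8 * 0.7)) < 1e-9  # = 0.496
```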
None of the strategies considered so far accommodate dependency. Bayesian networks (BNs) are probabilistic graphical models, often of a causal nature, that are commonly applied in the security field [33]. In BN aggregation strategies, the metric values $x_i$ are assumed to originate from discrete, bounded random variables $X_i$, corresponding to the metrics $i = 1, \dots, n$. The conditional dependencies between the random variables, and with a potential unobserved variable $Y$, are made explicit. This allows us to infer the probabilities of different values of $Y$, based on the metric values $x_i$. BN strategies are injective, but not idempotent. Although prioritisation is generally not a goal within these strategies, the prioritisation property will usually be satisfied. BN strategies accommodate dependencies by their nature, but will mostly not satisfy the weakest link principle.
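A toy example of BN-style aggregation: two binary metrics depend on an unobserved security state $Y$, and the posterior over $Y$ serves as the aggregate. All probabilities below are invented for illustration, and the exact inference by enumeration only scales to small networks:

```python
# Hypothetical network: Y (0 = secure, 1 = compromised) is unobserved;
# two binary metrics X1, X2 are conditionally independent given Y.
p_y = {0: 0.7, 1: 0.3}                               # prior on Y
p_x1 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}    # P(X1 = x | Y), keyed by Y
p_x2 = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}    # P(X2 = x | Y), keyed by Y

def posterior(x1, x2):
    """P(Y = 1 | X1 = x1, X2 = x2), computed by enumeration over Y."""
    joint = {y: p_y[y] * p_x1[y][x1] * p_x2[y][x2] for y in (0, 1)}
    return joint[1] / (joint[0] + joint[1])

# Observing both "bad" metric values sharply raises the inferred risk.
assert posterior(1, 1) > p_y[1]
```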
The strategy classes presented in Table 2 are not exhaustive but do cover the large majority of all aggregation strategies employed, as we show in Section 4. Two examples of other possibilities are the use of analytic network process (ANP) techniques [50,51], which relate to the deterministic equivalent of Bayesian networks, and the analysis of game-theoretic equilibria [52]. What is common to all strategies is that none satisfy all criteria of Table 2.

2.4. Adaptability

Adaptability is crucial to any cybersecurity solution [53]. Especially when measuring cybersecurity, a rigid solution that does not adapt to a changing environment or a new use case is far from optimal [54]. It is not surprising to see, then, that adaptability is a key focus of many studies [13,55], although operationalisation of adaptability is still a challenge [53].
We consider adaptability to be “the state of being able to change to work or fit better” [18]. This definition outlines two important dimensions of adaptability. First, a solution is considered adaptable if it can change to work better. There are several reasons why a cybersecurity metric solution may not be functioning as it should. This can relate to problems with the metrics themselves, such as missing or dirty data [56]. It can also relate to a changing security landscape that invalidates an existing model. This phenomenon is known as concept drift [57]. Second, a solution is considered adaptable if it can change to fit better. Generally, cybersecurity solutions in research are made to fit their use case. We can determine their adaptability in the “fitting” dimension by determining how easily the solution can be deployed at other (similar) use cases.
Adaptability is significant in the SME context. The SME landscape is diverse [14], and SMEs often lack the knowledge and expertise to perform extensive adaptations independently [9]. In Section 6, we assimilate observations from earlier research and our results of Section 4 to provide suggestions for improving solution adaptability.

3. Systematic Review Methodology

We performed a systematic literature review to address our research questions. To ensure broad coverage of the cybersecurity metrics field, we employed a novel systematic review methodology blending active learning and snowballing (SYMBALS, [58]), which combines existing methods into a swift and accessible methodology, while following authoritative systematic review guidelines [59,60,61].
Active learning is one of the cornerstones of the SYMBALS approach. Active learning is commonly applied in the title and abstract screening phase of systematic reviews, where researchers start with a large set of papers and prefer not to screen them all manually [62]. Active learning is uniquely suited to this task, as this machine learning method selects the ideal data points for an algorithm to learn from.
SYMBALS complements active learning with backward snowballing. From a set of included papers, a researcher can find additional relevant papers by consulting references (backward snowballing) and citations (forward snowballing) [63]. Snowballing has proven to be a valuable addition to systematic reviews, even when reviews already include an extensive database search [64]. Backward snowballing is especially useful in uncovering older relevant research. Forward snowballing is not employed within SYMBALS, based on the observation that databases generally have excellent coverage of recent peer-reviewed research.
After the development and evaluation of a systematic review protocol for this research, we commenced with the database search step of SYMBALS. We retrieved research from abstract databases (Scopus, Web of Science) and full-text databases (ACM Digital Library, IEEE Xplore, PubMed Central).
The Scopus API was used to retrieve an initial set of relevant research. Results from other sources were then successively added to this set. The order in which sources were consulted can be surmised from Table 3. The Python Scopus API wrapper “pybliometrics” [65] was used to retrieve all research available through the Scopus API that satisfied the query:
    AUTHKEY((security* OR cyber*)
    AND (assess* OR evaluat* OR measur* OR metric* OR model* OR risk* OR scor*))
    AND LANGUAGE(english) AND DOCTYPE(ar OR bk OR ch OR cp OR cr OR re)
The “AUTHKEY” field corresponds to the keywords that authors provided for a paper. Our search query is intentionally broad, as the SYMBALS methodology allows us to deal with larger quantities of research, and we aim to exclude as little relevant research as possible at this stage. We did choose to only include English language research and document types where extensive and verifiable motivations for findings can be reported.
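For reference, the query string above can be reconstructed as a single line and passed to pybliometrics; the retrieval call is shown as a comment, since running it requires a configured Scopus API key:

```python
# Reconstruction of the Scopus search string described in the text.
query = (
    "AUTHKEY((security* OR cyber*) "
    "AND (assess* OR evaluat* OR measur* OR metric* OR model* OR risk* OR scor*)) "
    "AND LANGUAGE(english) AND DOCTYPE(ar OR bk OR ch OR cp OR cr OR re)"
)

# from pybliometrics.scopus import ScopusSearch
# results = ScopusSearch(query).results  # requires a configured API key

# Nine wildcards: two in the security/cyber clause, seven in the verb clause.
assert query.count("*") == 9
```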
Table 3 summarises the query results. ACM Digital Library and IEEE Xplore limit the number of accessible papers to 2000. This means only the 2000 most relevant papers from these sources could be considered. Moreover, IEEE Xplore only allows the use of six wildcards in the search query. We removed the “security” and “cyber” wildcards for the IEEE Xplore search to comply with this limitation. Any research without an abstract was excluded, as this is vital to the active learning phase of SYMBALS. This led to a small set of exclusions from the PubMed Central database. Duplicate removal was performed based on the research title, although we found that this process was not perfect, due to different character sets being accepted in different databases.
Altogether, our dataset resulting from database search comprised 25,773 papers. This exemplifies the broad scope of our research, as the largest initial set of papers from the reviews in Table 1 comprised 4818 papers [43].
The set of 25,773 papers is too large to perform data extraction directly. This is where the active learning phase of SYMBALS comes in. We chose to use ASReview in this phase, a tool that offers active learning capabilities for systematic reviews, specifically for the title and abstract screening step [62]. Many other active learning tools exist that are worth considering [66]. However, we found ASReview effective and easy to use, and additionally value the commitment its developers have made to open science. This shows, among other things, in the codebase that they made available open-source.
In the ASReview process, as well as in the later review phases, we made use of the following inclusion and exclusion criteria:
  • Inclusion criteria:
    I1: The research concerns cybersecurity metrics and discusses how these metrics can be used to assess the security of a (hypothetical) cyber-system.
    I2: The research is a review of relevant papers.
  • Exclusion criteria:
    E1: The research does not concern cyber-systems.
    E2: The research does not describe a concrete path towards calculating cybersecurity metrics (only applied if I2 is not applicable).
    E3: The research has been retracted.
    E4: There is a more relevant version of the research that is included.
    E5: The research was automatically excluded due to its assessed irrelevance by the ASReview tool.
    E6: The research does not satisfy the database query criteria on language and document type.
    E7: No full-text version of the research can be obtained.
    E8: The research is of insufficient quality.
    E9: The research does not contain at least one socio-technical cybersecurity metric.
Exclusion criterion E8 relates to the quality assessment phase of SYMBALS, which is explained below. Criterion E9 requires the consideration of the full text to be determined, as abstracts do not contain enough information to make a decision regarding this intricate topic [67]. Thus, neither of these criteria were applied during title and abstract screening.
ASReview requires users to specify prior relevant and irrelevant papers to train its algorithm. The following papers were used as initial indications of relevance to ASReview:
  • Stolfo et al. [68],
  • Noel and Jajodia [69],
  • Spruit and Roeling [70],
  • Allodi and Massacci [71],
  • Cho et al. [18].
These papers were chosen since they cover diverse topics, were written by different authors at different times and were published in different journals and conferences. ASReview additionally provides the option to label a certain number of random papers before proceeding, assuming that a significant proportion of these papers will be irrelevant. This provides the algorithm with a balance of relevant and irrelevant papers for training. We labelled five random papers, giving us a total training set of 10 papers.
The ASReview tool then presents the paper whose classification it deems most informative to learn from. The tool quickly learns to distinguish between relevant and irrelevant papers. By presenting the researcher mostly relevant papers, the process of discovering relevant papers is accelerated.
Although ASReview offers several classifier options, we employed the default Naïve Bayes classifier using term frequency-inverse document frequency (TF-IDF) feature extraction and certainty-based sampling. The default settings have been shown to produce consistently good results and are additionally commonly available in other active learning tools [62]. Thus, our decision to use the default settings can be motivated both from a performance and a reproducibility standpoint.
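This classification setup can be sketched with scikit-learn. The corpus and labels below are invented toy data, and ASReview's actual implementation differs in detail (sampling strategy, retraining loop):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy abstracts standing in for the screening corpus (1 = relevant).
abstracts = [
    "security metrics for risk assessment of enterprise networks",
    "aggregating cybersecurity metrics into a single risk score",
    "deep learning for image classification benchmarks",
    "protein folding prediction with neural networks",
]
labels = [1, 1, 0, 0]

# TF-IDF feature extraction feeding a Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(abstracts, labels)

# Certainty-based sampling would present the unlabelled paper the model is
# most certain about; here we simply score one unseen abstract.
probs = model.predict_proba(["measuring security risk metrics"])[:, 1]
```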
At some point in the active learning process, mostly irrelevant research remains. To reduce the time spent on assessing irrelevant research, a stopping criterion is used [62]. We stopped evaluating research when the last 20 reviewed papers were considered irrelevant, although more sophisticated stopping criteria exist that are worth considering [72]. All research that was not evaluated at this stage was excluded based on exclusion criterion E5. As Figure 1 shows, 1644 papers remained after the active learning phase.
We then proceeded with the backward snowballing phase of SYMBALS. We followed the ASReview evaluation order in our backward snowballing procedure. We concluded backward snowballing once 10 consecutive papers contained no new references satisfying the inclusion criteria. As can be seen in Figure 1, 1796 papers were contained in our inclusion set after the completion of this phase.
SYMBALS specifies quality assessment as an optional step, but given the large number of papers remaining, assessing quality was deemed necessary. Table 4 outlines the quality criteria that were applied. Commonly used research quality criteria were adapted for use with a Likert scale [73]. Statements could be responded to with strongly disagree, disagree, neutral, agree, or strongly agree. Instead of applying these criteria to all 1796 inclusions, the two researchers involved in quality assessment evaluated 40 papers, with 20 papers being evaluated by both researchers.
A simple yet effective way to extrapolate these results is to train a binary decision tree on basic research characteristics, creating a model that can distinguish research of sufficient quality from research of insufficient quality. The five-point Likert scale responses were assigned scores of 0 (strongly disagree), 0.25 (disagree), 0.5 (neutral), 0.75 (agree), and 1 (strongly agree). Summing the quality criteria scores, each paper received a score between 0 and 9. To make the problem a binary decision problem, we labelled papers with a score of at least 6 as having sufficient quality. The value of this threshold determines how strict the eventual model will be.
Next, we split our set of 60 evaluated papers into a training set of 48 papers (80%) and a test set of 12 papers (20%). To be able to train a model on this set, we need explanatory variables that explain the quality scores obtained by the papers. We opted to use three features: years since publication, citation count, and the number of pages. The maximum depth of the binary decision tree was set to 3, meaning at most three binary splits are performed before classifying a paper as having sufficient or insufficient quality. The model was trained on the 48 training papers and evaluated on the 12 test papers. Despite—or perhaps because of—the model’s simplicity, 11 of the 12 test papers were labelled correctly. The only incorrect labelling occurred in an edge case with a quality score of 6. Similar results were obtained in replications with different random seeds. Figure 1 shows that 516 papers remained after applying the binary decision tree to our complete inclusion set.
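The extrapolation step can be sketched as follows. Since the reviewer scores are not public, the data and labelling rule below are synthetic stand-ins; only the features, split ratio, and tree depth follow the setup described above:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 60
# Synthetic stand-ins for the three features used in the paper.
X = np.column_stack([
    rng.integers(0, 20, n),    # years since publication
    rng.integers(0, 500, n),   # citation count
    rng.integers(2, 40, n),    # number of pages
])
# Toy labelling rule standing in for "quality score of at least 6 out of 9".
y = (X[:, 1] > 50).astype(int)

# 80/20 train/test split, depth-3 binary decision tree.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```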
Finally, we applied exclusion criterion E9 using a manual screening process, to filter out the papers that do not consider the social side of cybersecurity, as defined in Section 2.1. Figure 1 shows that in total, 60 papers were included after our filtering step.

4. Results

In this section, we focus on descriptive analysis of aggregate results. In Section 5 and Section 6, we will dive deeper, to interpret and contextualise the results. Table A1 in Appendix A lists all data items that were extracted from the included papers to help us address our research questions. Appendix B provides detailed results per inclusion.
Figure 2 depicts the relative prevalence of each of the five ADKAR factors over the years. Since 2010, awareness and reinforcement together constituted over half of the ADKAR considerations. Desire is the element that receives the least attention in research. Table 5 lists the related concepts that we encountered and mapped to each of the ADKAR terms.
Part of the reason for the prevalence of reinforcement research is that cybersecurity training and education belong to this ADKAR element. Researchers feel that organisational reinforcement is an important aspect of the social side of cybersecurity. At the same time, reinforcement can be easier to measure than other factors, which may offer a partial explanation for its prevalence. For example, many researchers choose to include a metric of cybersecurity awareness training (reinforcement), rather than of cybersecurity awareness itself (awareness).
Various security concepts were assessed in our inclusions, as shown in Table 6. Some researchers choose to measure security itself [74,75], but most consider this approach too general. Risk was assessed in two-thirds of all papers. This is interesting, as risk can be seen as having a negative connotation, whereas awareness, maturity, and resilience have positive connotations. This finding conflicts with the general tendency in the security community to favour SDT approaches over the fear- and threat-based approaches more associated with PMT [25], especially in the context of organisations [76].
When analysing the ADKAR factors by assessment concept, the papers assessing security maturity stood out. These papers place a large focus on the organisational reinforcement of security and ignore all other ADKAR factors. This is not a surprising finding. Maturity is generally a concept that requires an assessment of the organisation, rather than the individuals who make up this organisation.
Table 6 shows that most papers stuck to WLC, WP, and WM as aggregation strategies. It is worth pointing out that not aggregating is a reasonable choice: since no aggregation method satisfies all ideal security properties (our conclusion from Table 2), aggregation should be avoided when it is not necessary for a particular context.
Table 7 focuses on the actors that were considered from the social viewpoint. Almost all papers focused solely on the defender. It is interesting to see that the desire and ability factors of ADKAR are much more prominent in research including the attacker. We would expect to see more focus from research on desire, and the related concept of motivation, based on the important role that motivation and internalisation play in SDT and PMT [24]. Desire and motivation are not easily measurable concepts, but metrics such as “attendance at security sessions” can serve as useful proxies here [77].
Nearly all research that considers the attacker perspective considers the real-life threat environment, as specified in Gollmann et al. [22]. In papers covering the defender, it is quite common to ignore threats entirely [78] or to use a proxy such as the prevalence of vulnerabilities to represent threats [79]. This is remarkable given the vital role that threat perception plays in both SDT and PMT [25].
Table 8 groups research based on the employed aggregation strategy. Inclusions were classified into one of three classes: theoretical, implementation, or review. The research was classified as an implementation if either clear and described actions were taken based on the implemented method, or the model was assessed at more than one point in time. This strict requirement explains why most papers were classed as theoretical.
One immediately notices from Table 8 that two of the four implementation papers did not employ an aggregation strategy. As discussed in Section 2.3 and shown in Table 2, aggregation should only be carried out if deemed necessary. In half of the implementation papers among our inclusions, researchers felt that the benefits of aggregation did not outweigh the drawbacks.
We additionally see that most research sticks to WLC and WP strategies, which do not satisfy the weakest link principle and cannot take into account dependencies. Researchers prefer simple and explainable strategies, which are injective or idempotent, over strategies that satisfy more security properties. Out of our 60 inclusions, 10 used fuzzy logic approaches. Although translating qualitative statements to fuzzy numbers differentiates these methods from approaches using crisp numbers, most still used some combination of WLC, WP, and WM to aggregate (for example, [80,81,82]).
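To make these strategy classes concrete, the sketch below gives one common formulation of each; exact definitions vary across the reviewed papers, so treat these as illustrative. WLC is a weighted average, WP a weighted geometric mean, WM a weighted maximum, and WCP a complementary product in which a single extreme metric dominates the result. Under the weight normalisations noted in the comments, equal inputs yield the same output.

```python
from math import prod

def wlc(scores, weights):
    """Weighted linear combination (weights assumed to sum to 1)."""
    return sum(w * s for w, s in zip(weights, scores))

def wp(scores, weights):
    """Weighted product: a weighted geometric mean (weights sum to 1)."""
    return prod(s ** w for w, s in zip(weights, scores))

def wm(scores, weights):
    """Weighted maximum (largest weight assumed to be 1)."""
    return max(w * s for w, s in zip(weights, scores))

def wcp(scores, weights):
    """Weighted complementary product: one extreme metric score
    dominates the aggregate, reflecting weakest-link behaviour."""
    return 1 - prod((1 - s) ** w for w, s in zip(weights, scores))

# Three illustrative metric scores in [0, 1].
scores = [0.9, 0.2, 0.4]
print(wlc(scores, [0.5, 0.3, 0.2]))  # 0.59
print(wm(scores, [1.0, 0.8, 0.6]))   # 0.9
```

The idempotence discussed above is easy to check numerically: feeding identical scores into `wlc`, `wp`, or `wm` with the stated normalisations returns that same score.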
Exceptions are Lo and Chen [50] and Brožová et al. [51], who used an ANP approach to capture dependencies. Lo and Chen [50], Brožová et al. [51], and the four papers using a Bayesian network approach [83,84,85,86] are the only papers that considered dependencies between metrics. Interestingly, all of these papers were published in 2016 or earlier. It is not immediately clear what underlies this recent drought in research considering dependencies, but it is certainly a research area that deserves more attention.
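To show why dependency-aware aggregation matters, the minimal sketch below enumerates a hand-rolled two-node Bayesian network. The metrics and all probabilities are invented for illustration: a "weak passwords" metric influences an "account compromise" metric, so the two cannot be combined as if independent.

```python
# Hypothetical two-metric dependency: weak passwords make account
# compromise more likely. All probabilities are illustrative.
p_weak = 0.4                               # P(weak passwords)
p_comp_given = {True: 0.30, False: 0.05}   # P(compromise | weak?)

# Marginal P(compromise), obtained by enumerating the parent node.
p_comp = sum(
    p_comp_given[w] * (p_weak if w else 1 - p_weak)
    for w in (True, False)
)
print(round(p_comp, 3))  # 0.15

# Diagnostic reasoning via Bayes' rule:
# P(weak passwords | compromise observed).
p_weak_given_comp = p_comp_given[True] * p_weak / p_comp
print(round(p_weak_given_comp, 2))  # 0.8
```

A weighted linear combination of the two metrics could not express this conditional structure; larger networks are typically built with dedicated libraries rather than by hand.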
Table 9 provides detailed results regarding the research application area. Although our data extraction allowed for a wider range of enterprise sizes, we only encountered research applicable to medium- and large-sized enterprises, and research applicable to any enterprise size. As with research focused on maturity modelling, we see a strong focus on the reinforcement factor of ADKAR in enterprise research, especially for larger enterprises.
In research intended to apply to any enterprise, Table 9 shows that WLC was by far the most popular aggregation strategy class. The only other strategy class that was used was WM. We believe it is no coincidence that these are the only aggregation strategy classes that are both injective and idempotent. Strategies with these properties are likely to be more intuitive and easier to understand, as explained in Section 2.3. It is therefore not surprising that these strategies are proposed in research addressing all enterprise sizes, since smaller businesses especially need to be motivated through approachable solutions.
Regarding adaptability, of the 56 inclusions that were not review papers, 44 made no consideration for missing or dirty data. Among the papers that did consider one or both of these issues, the most common strategy was to ignore the associated problems. Of these 56 papers, 46 were not able to adapt to a security event occurring, mostly because they did not operate in a live setting but were formulated as periodic assessments. Even so, most authors did not cover this topic, and it is certainly not always clear how the security assessment would be adapted after an incident.
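As a sketch of what considering missing data could look like, the hypothetical function below aggregates metric scores under three of the response categories from our data extraction (substitute, ignore, communicate). It is an illustration of the categories, not a method proposed in any inclusion.

```python
def aggregate_with_missing(scores, weights, policy="communicate"):
    """Weighted average over metrics where some scores may be None.

    Illustrative policies mirroring our data extraction categories:
    - 'substitute': impute a neutral 0.5 for missing scores;
    - 'ignore': renormalise weights over the observed metrics only;
    - 'communicate': like 'ignore', but also report what fraction
      of the total weight was actually observed.
    """
    if policy == "substitute":
        scores = [0.5 if s is None else s for s in scores]
    pairs = [(w, s) for w, s in zip(weights, scores) if s is not None]
    if not pairs:
        raise ValueError("no observed metrics to aggregate")
    total_w = sum(w for w, _ in pairs)
    result = sum(w * s for w, s in pairs) / total_w
    if policy == "communicate":
        coverage = total_w / sum(weights)
        return result, coverage
    return result

# One metric score is missing; report the score with its coverage.
score, coverage = aggregate_with_missing([0.8, None, 0.4], [0.5, 0.3, 0.2])
print(round(score, 2), round(coverage, 2))  # 0.69 0.7
```

The "communicate" policy illustrates the point made above: rather than silently ignoring a gap, a solution can report how much of the assessment the result actually covers.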
Concept drift and adaptation to other use cases were also often not considered. Just four of our inclusions explicitly considered concept drift and no paper mentioned a concrete timeline for when a solution should be updated. Adaptation to other use cases was discussed in 24 of our inclusions. However, the majority of these papers only gave a rough outline of how the solution could be adapted. A better practice would be to give concrete guidelines on how to adapt the solution or to immediately analyse several use cases. The former approach was not seen in research, whereas the latter was (for example, [87,88,89,90]).

5. Socio-Technical Cybersecurity Framework for SMEs

To offer more insight into how we can create effective cybersecurity assessment solutions for SMEs, we position our results and findings in the STS analysis framework of Davis et al. [7]. Figure 3 shows the view of STS as consisting of six internal social and technical aspects, within an external environment. We renamed the "Buildings/Infrastructure" aspect of Davis et al. [7] to "Assets". This ensures that our view is better aligned with standard terminology in cybersecurity literature. Based on the importance of policies in socio-technical cybersecurity frameworks [5], we explicitly included policies in the "Processes/Procedures" aspect of Davis et al. [7] and renamed this aspect to "Processes".
The socio-technical system we study is the SME, in the context of cybersecurity. However, the complete set of SMEs is too diverse to consider this group as a single collective. This is why the European DIGITAL SME Alliance proposes to use four SME categories, based on the different roles SMEs can play in the digital ecosystem: start-ups, digitally-dependent SMEs, digitally-based SMEs, and digital enablers [14]. The European DIGITAL SME Alliance specifies these categories in the context of cybersecurity standardisation, which is intricately related to our cybersecurity assessment setting, making it a suitable classification.
The European DIGITAL SME Alliance defines start-ups as SMEs where “security has a low priority”. They “typically neglect (or are not aware of) requirements” for running a secure business. Digitally-dependent SMEs are companies that depend on digital solutions (as end users) to run their business. Digitally-based SMEs “highly depend on digital solutions for their business model”, and, finally, digital enablers are SMEs that develop and provide digital solutions [14].
Table 10 introduces our framework, which synthesises the SME categories of the European DIGITAL SME Alliance [14] with the STS aspects of Davis et al. [7]. Each SME category has different cybersecurity goals based on their different roles in the digital ecosystem. In Table 10, the SME categories are ordered from least to most mature regarding cybersecurity. We expect the more mature SME categories to have achieved the goals of less mature SME categories.
Our framework was constructed based on earlier cybersecurity frameworks focusing on SMEs [10,15,92] or STS [5,91,93,94,95]. Interestingly, none of these frameworks focused on both SMEs and STS. To address the singular characteristics of our setting, we additionally incorporated the findings from our systematic review, as well as principles for designing cybersecurity maturity models for SMEs [96], in our framework. Our findings appear most prominently in the “Technology” aspect, explaining why this column of Table 10 contains relatively few references to earlier work.
Our results relating to the various ADKAR dimensions serve as input for the “People” and “Culture” aspects. Start-ups and digitally-dependent SMEs should focus on making their employees aware and providing initial cybersecurity knowledge to inspire desire and motivation. This can be achieved through a culture of organisational commitment to cybersecurity [93,94]. Digitally-based SMEs and digital enablers should progress through the ADKAR phases, with the aid of cybersecurity training, policy, and assessment. Eventually, employees should mutually reinforce each other’s cybersecurity abilities [94]. The ideal cybersecurity culture will lead to trust from both the people inside the SME and the environment outside of the SME [92,94].
Start-ups and digitally-dependent SMEs are often not aware of the existence of cybersecurity standards [14]. These SMEs should first become aware and then begin to formulate basic cybersecurity policies, processes, and procedures [5,94]. Digitally-based SMEs should have formal processes in place to reinforce the desired cybersecurity behaviour of employees [5]. Digital enabler SMEs should strive towards continuous process improvement [5,15], which enables business continuity [92].
We mapped the “Technology” aspect of STS to the advised cybersecurity assessment approach and tooling for the SME. This is in line with the approach of Malatji et al. [5], who incorporated “cybersecurity tools and resources” in the “Technology” aspect of their socio-technical cybersecurity framework.
Start-ups should understand relevant cybersecurity asset types and digitally-dependent SMEs should begin identifying and documenting assets [92]. Without an asset inventory or internal cybersecurity expertise, most risk assessment and maturity model approaches are not suited to these SMEs. Additionally, they are just beginning to cultivate a desire among employees to improve cybersecurity. Incorporating the real-life threat environment [22] is an attractive option to promote motivation. Focusing on the real-life threat environment can increase the feelings of task relevance and significance employees feel, which are key motivators [97]. This is why we advise a threat-based cybersecurity risk assessment approach for start-ups and digitally-dependent SMEs.
In the same vein, we advise against aggregating scores in cybersecurity assessment solutions for start-ups and digitally-dependent SMEs. If aggregation is deemed necessary, injective and idempotent aggregation strategies should be used, such as WLC and WM. Strategies that satisfy injectivity and idempotence can be seen as intuitive. Using these strategies allows for feelings of competence and relatedness among employees, which stimulate motivation [25]. This puts employees in a position to be a part of the solution to SME cybersecurity challenges, rather than being the source of the challenges [98].
The combination of simple aggregation and a threat-based approach offers another benefit: the corresponding assessments do not necessarily require extensive internal expertise and data. Many of the more complex aggregation strategies and comprehensive assessment approaches require cybersecurity experts at the SME to determine parameters and weights. Such resources are limited at SMEs [3], and especially at start-ups and digitally-dependent SMEs. This is why assessment approaches for these SMEs should preferably be largely based on data that can be automatically collected. Threat-based approaches are ideally suited to this requirement, as general incident data are widely available [99], and can be mapped to threats to offer SMEs insight into what is important for them [100].
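A minimal sketch of this idea, with invented incident counts, shows how publicly available incident data could be turned into an intuitive threat ranking without any SME-side parameters or weights to configure:

```python
# Hypothetical example: publicly reported incident counts mapped to
# threat categories, turned into relative threat scores an SME can
# act on without internal security expertise. Numbers are invented.
incident_counts = {
    "phishing": 450,
    "ransomware": 300,
    "insider misuse": 50,
    "denial of service": 200,
}

total = sum(incident_counts.values())
threat_scores = {t: c / total for t, c in incident_counts.items()}

# Present threats from most to least prevalent, keeping the advice
# intuitive: phishing 45%, ransomware 30%, denial of service 20%,
# insider misuse 5%.
for threat, score in sorted(threat_scores.items(),
                            key=lambda kv: kv[1], reverse=True):
    print(f"{threat}: {score:.0%}")
```

In a real solution, the incident counts would come from external incident repositories and be filtered by sector and company profile before being presented to the SME.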
Digitally-based SMEs and digital enablers can be expected to have a complete inventory of assets [92]. Digital enablers should additionally be aware of internal and external dependencies [92], allowing them to specify their attack surface [101]. For these SME categories, complete risk and maturity assessments are desirable. Digital enablers will often require comprehensive assessments that can prove compliance with cybersecurity standards and regulations.
Digitally-based SMEs should consider using aggregation strategies that reflect desirable security properties, such as the weakest link principle. Using a WCP strategy can guide these SMEs towards more accurate assessments, although intuitiveness is sacrificed. Digital enablers with cybersecurity expertise, a specified attack surface, and large volumes of internal data should consider more advanced aggregation strategies.
Figure 4 provides a visual summary of the STS interactions inherent to our framework. We use coloured arrows to indicate interactions that are explicitly mentioned in Table 10. It is implicit in the STS model of Davis et al. [7] that all aspects are interrelated.
The direction of the arrows indicates which aspect serves as an input for another aspect. For start-ups, the external environment aspects motivate the SME to realise the necessity of investing in cybersecurity, leading to the initial goals. For digitally-dependent SMEs, the goals formulated by management serve as catalysts for culture and processes. We observe that from an initial external motivation for start-ups, SMEs gradually build up internal interactions. For digital enablers, we see many interactions, both internally and with the external environment.

6. Discussion

We extensively analysed and interpreted our results in Section 4 and Section 5. This section will focus on a discussion of our research questions and the potential limitations of our research.
Our first research question asked: How are cybersecurity metrics aggregated in socio-technical cybersecurity measurement solutions? One interesting finding from Table 8 is that half of the research involving implementations did not aggregate at all. Table 2 gives a partial explanation for this phenomenon: no aggregation strategy satisfies all desirable security properties. Thus, aggregation should preferably be avoided. Nevertheless, aggregation using basic approaches such as WLC is prevalent, with 42 of our 60 inclusions using this aggregation technique. We observed a clear lack of dependency consideration among metrics, which could be solved using Bayesian network [83,84,85,86] or ANP techniques [50,51]. Our cybersecurity framework presented in Table 10 provides clear guidance on which aggregation strategies suit which SME categories.
Our second research question was formulated as: How do aggregation strategies differ in cybersecurity measurement solutions relevant to SMEs and all other solutions? Our analysis of Table 9 demonstrated that in enterprise research little to no attention is paid to aggregation strategies that satisfy the weakest link and dependency properties. One of the main obstacles in making aggregation strategies suitable for SMEs is the time and expertise required to carry them out. Generally, more complex aggregation strategies require the determination of more parameters and relationships, which in turn often requires consultation of security experts at the cyber-system being assessed (for example, [89,102,103,104]). This expertise is rarely available at smaller SMEs, although when it is, ANP approaches [50,51] could offer a path towards more accurate aggregation.
Our final research question covered the consideration of adaptability: “the state of being able to change to work or fit better” [18]. We found that very few papers consider the effects of missing data, dirty data, security events, or concept drift; all are vital elements in determining the ability of a solution to adapt to unexpected circumstances to work better. Research does often recognise the need for being able to change to fit better, as shown by the relatively large proportion that considers adaptation to other use cases. Nevertheless, there is still much to be gained in this area. It is vital that authors of research on socio-technical cybersecurity measurement solutions explicitly address the adaptability dimension in the future. Our framework of Table 10 helps in this regard, with its focus on proactive processes and active monitoring and detection capabilities.
We additionally analysed the ADKAR factors that were addressed in our inclusions. We found that desire was rarely considered in research. This was especially true for research focusing on the defender perspective. Additionally, we found that the real-life threat environment, as defined in Gollmann et al. [22], is considered in less than half of our inclusions. Both of these findings offer an interesting contrast to the increasingly important role SDT and PMT play in security research [25]. These theories focus heavily on (intrinsic) motivation and threat perception [24]. Given the low intrinsic motivation among SMEs and their employees to improve security [3], and the relatively large impact individual employees can have in the SME context, future research focusing on motivation and the real-life threat environment could provide an interesting avenue for making cybersecurity solutions more suitable to SMEs.

Limitations and Threats to Validity

We should mention at this stage that our research is not without its limitations. One potential issue is that our systematic review was not restricted to recent years, which meant that contemporary research was not as prominent in this review as it is in most other reviews. This could mean that we are overlooking certain recent developments, although 18 of our 60 inclusions were published in the past three years.
Additionally, although we believe our 60 inclusions are sufficient to help us answer our research questions, certain groupings of the inclusions resulted in relatively small sub-samples from which to draw conclusions. This could limit the generalisability of our analysis and conclusions, meaning that one could have different findings when considering different cybersecurity focus areas.
We believe in the construct validity of our systematic review methodology SYMBALS [58], as it is based on widely-accepted methods [62,63] and guidelines [59,60,61]. However, it is still a novel methodology that remains to be extensively tested. We feel this does not threaten the validity of our research, since SYMBALS is geared towards reproducibility and satisfies standard reporting item guidelines for systematic reviews [61].
A final mention should be made of our choice to approach the social dimension through the ADKAR change management model [27]. Although the model has been applied in the cybersecurity domain [28], it is certainly not a standard approach to use ADKAR in this setting. Nevertheless, Table 5 summarised the natural mapping of social cybersecurity metric concepts to the ADKAR framework and our framework presented in Table 10 showed how the ADKAR terms can be instinctively imported from previous research. Hence, we feel justified in using this approach.

7. Conclusions and Future Research

Businesses, and especially small- and medium-sized enterprises (SMEs), struggle to cope with the existing cyber threat landscape. Researchers have turned to cybersecurity measurement to deal with these issues, although many challenges remain, such as how to aggregate sub-metrics into higher-level metrics [18]. The challenges faced by SMEs are compounded by the dynamic nature of the cyber threat landscape, necessitating adaptable solutions. These current challenges motivated us to investigate the topics of aggregation and adaptability in this review, with a focus on SMEs.
The social side of cybersecurity deserves attention, certainly in the SME context. This is why we chose to direct our review at socio-technical cybersecurity measurement solutions. The ADKAR (awareness, desire, knowledge, ability, reinforcement) change management model of Hiatt [27] guided us in covering the social dimensions considered in research. To aid in the analysis of aggregation approaches, we outlined five main aggregation strategy classes in Section 2.3: weighted linear combinations, weighted products, weighted maxima, weighted complementary products, and Bayesian networks. We looked towards existing research to determine interesting dimensions of adaptability, such as missing or dirty data [56] and concept drift [57].
Based on our analysis in Section 2.3 and Section 4, we found that aggregation should only be carried out if necessary, since no single aggregation strategy exists that satisfies all of the desired security properties. Notably, dependencies among metrics are often not considered. Solutions can be found in this area in Bayesian networks [83,84,85,86] and analytic network process [50,51] techniques.
We used our findings as input to construct a socio-technical cybersecurity framework for SMEs. We presented our framework in Table 10 and visualised it in Figure 4. Offering a single solution for all SMEs is too simplistic. This is why we divided SMEs into four categories, as suggested by the European DIGITAL SME Alliance [14]: start-ups, digitally-dependent SMEs, digitally-based SMEs, and digital enablers. By detailing what can be expected of each SME category, we were able to determine which cybersecurity assessment strategies were suitable in each case. For start-ups and digitally-dependent SMEs, threat-based risk assessment approaches that either do not aggregate or use intuitive aggregation strategies are ideal. By focusing on the real-life threat environment [22], the relevance and significance of the assessment task are given a central role. A simple and intuitive aggregation strategy accommodates feelings of competence and relatedness. Altogether, this ensures optimal organisation and employee motivation [25,97].
Digitally-based SMEs and digital enablers are advised to use more comprehensive risk assessment approaches and maturity models. These assessment techniques should assist in working towards or proving compliance with standards and regulations. Under ideal circumstances, this will build trust in the cybersecurity posture of the SME, both internally and externally. Digital enablers are also prime candidates for using more advanced aggregation strategies, such as Bayesian networks, since they often have the cybersecurity expertise and data required to make these solutions successful.
We hope that our socio-technical cybersecurity framework will provide a basis to design successful cybersecurity assessment solutions for SMEs. SMEs should not be forced to use solutions that are not suited to their situation. Especially start-ups and digitally-dependent SMEs currently lack suitable cybersecurity assessment solutions, even though they are most in need of “easily understandable and practical solutions” [14]. In future work, we aim to help these SMEs to become more secure. An important first step is to formulate a properly motivated, intuitive, and usable threat-based cybersecurity risk assessment approach, to offer this most vulnerable group some deserved cybersecurity respite.

Author Contributions

Conceptualisation: M.v.H., M.B. and M.S.; methodology: M.v.H., B.Y.O., M.B. and M.S.; software: M.v.H.; validation: M.v.H., B.Y.O., M.B. and M.S.; formal analysis: M.v.H. and B.Y.O.; investigation: M.v.H.; data curation: M.v.H.; writing—original draft preparation: M.v.H.; writing—review and editing: M.v.H., B.Y.O., M.B. and M.S.; visualisation: M.v.H.; supervision: M.B. and M.S.; project administration: M.v.H. All authors have read and agreed to the published version of the manuscript.


Funding

This work was made possible with funding from the European Union’s Horizon 2020 research and innovation programme, under grant agreement No. 883588 (GEIGER). The opinions expressed and arguments employed herein do not necessarily reflect the official views of the funding body.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in Appendices A and B of this paper.


Acknowledgments

The authors would like to thank Rens van de Schoot and the ASReview team for their cooperation.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.


Abbreviations

The following abbreviations are used in this manuscript:
ADKAR: Awareness, Desire, Knowledge, Ability, Reinforcement
ANP: Analytic Network Process
BN: Bayesian Network
PMT: Protection Motivation Theory
RQ: Research Question
SDT: Self-Determination Theory
SME: Small- and Medium-Sized Enterprises
STS: Socio-Technical Systems
SYMBALS: SYstematic review Methodology Blending Active Learning and Snowballing
WCP: Weighted Complementary Product
WLC: Weighted Linear Combination
WM: Weighted Maximum
WP: Weighted Product

Appendix A. Data Extraction Items

For each of the 60 papers determined to consider social factors, the data items of Table A1 were extracted, along with general data such as title, abstract, keywords, and number of citations.
Table A1. The data items extracted from the 60 inclusions in our research.
Data Item | Description | Possible Values
Security assessment concept | The security concept assessed in the paper. | Awareness, maturity, resilience, risk, security, vulnerability
Human cybersecurity aspects | The human cybersecurity aspect(s) of the ADKAR framework that were considered. | Awareness, desire, knowledge, ability, reinforcement
Metric aggregation strategies | The metric aggregation strategy or strategies employed. | Weighted linear combination, weighted product, weighted maximum, weighted complementary product, Bayesian network
Metric aggregation description | A description of the employed metric aggregation strategy. | Description text
Adaptability action missing data | The suggested response to deal with missing data. | Substitute, communicate, ignore, impossible, not considered
Missing data description | Description of the suggested response to deal with missing data. | Description text
Adaptability action dirty data | The suggested response to deal with dirty data. | Clean, substitute, communicate, ignore, impossible, not considered
Dirty data description | Description of the suggested response to deal with dirty data. | Description text
Adaptability action security event | The suggested response to deal with a security event. | No action, parameter tuning, model reformulation, impossible, not considered
Security event description | Description of the suggested response to deal with a security event. | Description text
Adaptability action concept drift | The suggested response to deal with concept drift. | No action, parameter tuning, model reformulation, impossible, not considered
Concept drift description | Description of the suggested response to deal with concept drift. | Description text
Concept drift consideration | Description of the author consideration of concept drift. | Description text
Research classification | A classification of the research type. | Theoretical, implementation, review
Research application area | Application area of the research. | Enterprise, other
Enterprise size(s) | For enterprise research, an indication of the enterprise size(s) the solution is applicable for. | Small, medium, large, any
Social viewpoint | An indication of which actors were considered from the social viewpoint. | Attacker, defender, both, unclear
Real-life threat | An indication of whether the paper considers the real-life threat environment [22]. | Yes, no, unclear
Physical dimension | An indication of whether the paper considers the physical dimension of security. | Yes, no, unclear
Validation method | The validation method employed in the research [29]. | Hypothetical, empirical, simulation, theoretical
Validation method description | A description of the validation method. | Description text

Appendix B. Detailed Results

Table A2. The inclusions of our research and a selection of the data items that were presented in this paper. Together with the data items presented in Table A3, they constitute all data items analysed in Section 4.
Research | Year | Assessment Concept | Classification | Social Viewpoint
Dantu and Kolan [83] | 2005 | Risk | Theoretical | Attacker
Depoy et al. [105] | 2005 | Risk | Theoretical | Attacker
Hasle et al. [106] | 2005 | Resilience | Theoretical | Defender
Villarrubia et al. [107] | 2006 | Maturity | Theoretical | Defender
Bhilare et al. [74] | 2008 | Security | Theoretical | Defender
Grunske and Joyce [108] | 2008 | Risk | Theoretical | Attacker
Sahinoglu [84] | 2008 | Risk | Theoretical | Defender
Dantu et al. [85] | 2009 | Risk | Theoretical | Attacker
Chen and Wang [87] | 2010 | Risk | Theoretical | Defender
Chan [88] | 2011 | Risk | Theoretical | Defender
Shin et al. [78] | 2011 | Vulnerability | Theoretical | Defender
Bojanc et al. [109] | 2012 | Risk | Theoretical | Defender
Lo and Chen [50] | 2012 | Risk | Theoretical | Defender
Rantos et al. [110] | 2012 | Awareness | Theoretical | Defender
Shameli-Sendi et al. [80] | 2012 | Risk | Theoretical | Defender
Bojanc and Jerman-Blažič [111] | 2013 | Risk | Theoretical | Defender
Marconato et al. [79] | 2013 | Risk | Theoretical | Defender
Taubenberger et al. [112] | 2013 | Risk | Implementation | Defender
Alencar Rigon et al. [102] | 2014 | Maturity | Theoretical | Defender
Boggs et al. [113] | 2014 | Resilience | Theoretical | Defender
Chen et al. [114] | 2014 | Risk | Theoretical | Defender
Cheng et al. [115] | 2014 | Awareness | Theoretical | Defender
Feng et al. [86] | 2014 | Risk | Theoretical | Defender
Manifavas et al. [77] | 2014 | Awareness | Implementation | Defender
Silva et al. [81] | 2014 | Risk | Theoretical | Defender
Suhartana et al. [116] | 2014 | Risk | Theoretical | Defender
Yadav and Dong [117] | 2014 | Risk | Theoretical | Defender
Dehghanimohammadabadi and Bamakan [118] | 2015 | Risk | Theoretical | Defender
Juliadotter and Choo [119] | 2015 | Risk | Theoretical | Both
Otero [120] | 2015 | Risk | Theoretical | Defender
Solic et al. [121] | 2015 | Risk | Theoretical | Defender
Sugiura et al. [122] | 2015 | Risk | Theoretical | Defender
Wei et al. [123] | 2015 | Risk | Theoretical | Defender
You et al. [75] | 2015 | Security | Theoretical | Defender
Brožová et al. [51] | 2016 | Risk | Theoretical | Defender
Brynielsson et al. [124] | 2016 | Awareness | Theoretical | Defender
Granåsen and Andersson [125] | 2016 | Resilience | Theoretical | Defender
Orojloo and Azgomi [126] | 2016 | Risk | Theoretical | Attacker
Aiba and Hiromatsu [127] | 2017 | Risk | Theoretical | Defender
Damenu and Beaumont [103] | 2017 | Risk | Implementation | Defender
Ramos et al. [41] | 2017 | Review | - | Defender
Rass et al. [52] | 2017 | Risk | Theoretical | Defender
Alohali et al. [128] | 2018 | Risk | Theoretical | Defender
Li et al. [82] | 2018 | Risk | Theoretical | Defender
Morrison et al. [43] | 2018 | Review | - | Both
Pramod and Bharathi [129] | 2018 | Risk | Theoretical | Defender
Proença and Borbinha [89] | 2018 | Maturity | Implementation | Defender
Rueda and Avila [130] | 2018 | Risk | Theoretical | Defender
Shokouhyar et al. [104] | 2018 | Risk | Theoretical | Defender
Stergiopoulos et al. [131] | 2018 | Risk | Theoretical | Defender
You et al. [132] | 2018 | Maturity | Theoretical | Defender
Akinsanya et al. [133] | 2019 | Maturity | Theoretical | Defender
Bharathi [134] | 2019 | Risk | Theoretical | Defender
Fertig et al. [135] | 2019 | Awareness | Theoretical | Defender
Salih et al. [136] | 2019 | Risk | Theoretical | Defender
Cadena et al. [34] | 2020 | Review | - | Defender
Wirtz and Heisel [137] | 2020 | Risk | Theoretical | Defender
Ganin et al. [138] | 2020 | Risk | Theoretical | Defender
Luh et al. [90] | 2020 | Risk | Theoretical | Both
Table A3. The inclusions of our research and a selection of the data items that were presented in this paper.
Research | Application Area | ADKAR Elements | Aggregation Strategies | Real-Life Threat
Dantu and Kolan [83] | Other | DE, AB | BN | Yes
Depoy et al. [105] | Other | DE, AB | WP, WCP | Yes
Hasle et al. [106] | Enterprise (A) | AW | WLC | No
Villarrubia et al. [107] | Enterprise (A) | RE | WLC | No
Bhilare et al. [74] | Enterprise (M/L) | RE | None | Yes
Grunske and Joyce [108] | Other | DE, AB | WP, WM | Yes
Sahinoglu [84] | Other | RE | WLC, BN | Yes
Dantu et al. [85] | Other | DE, AB | BN | Yes
Chen and Wang [87] | Other | AW | WP | No
Chan [88] | Enterprise (M/L) | RE | WLC, WP | No
Shin et al. [78] | Other | AW, KN | WLC | No
Bojanc et al. [109] | Enterprise (A) | RE | WLC, WM | Yes
Lo and Chen [50] | Enterprise (M/L) | AW, RE | WLC | No
Rantos et al. [110] | Enterprise (A) | AW, DE, KN, AB, RE | WLC | Yes
Shameli-Sendi et al. [80] | Enterprise (A) | KN, AB | WC | No
Bojanc and Jerman-Blažič [111] | Enterprise (A) | AW, RE | WLC | Yes
Marconato et al. [79] | Other | AW, DE | None | No
Taubenberger et al. [112] | Enterprise (M/L) | AW, RE | None | No
Alencar Rigon et al. [102] | Enterprise (M/L) | RE | WLC | No
Boggs et al. [113] | Other | AW | WLC | Yes
Chen et al. [114] | Enterprise (M/L) | RE | WLC | No
Cheng et al. [115] | Other | AW | WLC, WP, WM | Yes
Feng et al. [86] | Enterprise (M/L) | AW, RE | BN | Yes
Manifavas et al. [77] | Enterprise (A) | AW, DE, KN, AB, RE | WLC | Yes
Silva et al. [81] | Other | RE | WLC, WP | No
Suhartana et al. [116] | Enterprise (A) | AB, RE | WLC | Yes
Yadav and Dong [117] | Other | AW, KN, AB, RE | None | Yes
Dehghanimohammadabadi and Bamakan [118] | Enterprise (M/L) | RE | WLC, WP | Yes
Juliadotter and Choo [119] | Other | AW, DE, KN, AB, RE | WLC | Yes
Otero [120] | Enterprise (M/L) | AW, AB, RE | None | Yes
Solic et al. [121] | Enterprise (A) | AW, KN, AB | WLC | No
Sugiura et al. [122] | Enterprise (A) | AW, RE | None | No
Wei et al. [123] | Other | AW, AB | WLC | No
You et al. [75] | Other | AW, RE | WLC | No
Brožová et al. [51] | Enterprise (A) | AW, KN, RE | WLC | No
Brynielsson et al. [124] | Other | AW, KN, AB | None | No
Granåsen and Andersson [125] | Other | AW, DE, AB, RE | WLC | No
Orojloo and Azgomi [126] | Other | KN, AB | WLC, WM, WP | Yes
Aiba and Hiromatsu [127] | Enterprise (A) | RE | WLC | Yes
Damenu and Beaumont [103] | Enterprise (M/L) | AW, DE, KN, AB, RE | None | No
Ramos et al. [41] | Other | RE | WLC | No
Rass et al. [52] | Other | RE | WLC | Yes
Alohali et al. [128] | Other | AW, KN, AB | WLC, WM | No
Li et al. [82] | Other | AW, RE | WP | No
Morrison et al. [43] | Other | DE, AB, RE | WLC | No
Pramod and Bharathi [129] | Enterprise (A) | AW, KN, RE | WLC, WM | No
Proença and Borbinha [89] | Enterprise (M/L) | RE | WM | No
Rueda and Avila [130] | Enterprise (M/L) | AW, KN, AB, RE | WLC, WP | No
Shokouhyar et al. [104] | Enterprise (M/L) | RE | WLC, WM | No
Stergiopoulos et al. [131] | Other | AW, KN, AB, RE | WLC, WP | Yes
You et al. [132] | Other | RE | WLC | No
Akinsanya et al. [133] | Other | RE | WLC | No
Bharathi [134] | Other | AW | WLC | Yes
Fertig et al. [135] | Other | AW, DE, KN | None | No
Husák et al. [31] | Other | AB | WLC | Yes
Salih et al. [136] | Other | AW, KN, RE | WLC | No
Cadena et al. [34] | Other | AW | None | No
Wirtz and Heisel [137] | Other | AW, KN, AB, RE | WLC, WM | No
Ganin et al. [138] | Enterprise (A) | AW, DE, KN, AB, RE | WLC | No
Luh et al. [90] | Other | AW, DE, AB, RE | WLC | Yes
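The two aggregation strategies that dominate Table A3, WLC and WP, plausibly denote weighted linear combination and weighted product, following common usage in the risk assessment literature. As a minimal illustrative sketch (not taken from any reviewed paper; the metric scores and weights below are hypothetical), the difference between the two can be shown as follows:

```python
# Sketch of two common metric aggregation strategies from Table A3:
# WLC (weighted linear combination) and WP (weighted product).
# Scores are assumed to be normalised to [0, 1]; weights sum to 1.

def wlc(scores, weights):
    """Weighted linear combination: the weighted sum of the scores."""
    return sum(w * s for w, s in zip(weights, scores))

def wp(scores, weights):
    """Weighted product: each score raised to its weight, multiplied."""
    result = 1.0
    for w, s in zip(weights, scores):
        result *= s ** w
    return result

# Hypothetical socio-technical metric scores for an SME, e.g.
# staff awareness, patching cadence, and physical access control.
scores = [0.8, 0.5, 0.9]
weights = [0.5, 0.3, 0.2]

print(round(wlc(scores, weights), 2))  # 0.73
print(round(wp(scores, weights), 2))
```

A practical consequence of the choice: WLC lets a strong metric compensate for a weak one, whereas WP (like the worst-case strategies) pulls the aggregate towards zero whenever any single score approaches zero, which is one reason the choice of aggregation strategy matters for actionable SME advice.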


  1. Bassett, G.; Hylender, C.D.; Langlois, P.; Pinto, A.; Widup, S. Data Breach Investigations Report; Technical Report; Verizon: New York, NY, USA, 2021. [Google Scholar]
  2. Bissell, K.; Lasalle, R.M. Cost of Cybercrime Study; Technical Report; Accenture and Ponemon Institute: Dublin, Ireland, 2019. [Google Scholar]
  3. Heidt, M.; Gerlach, J.P.; Buxmann, P. Investigating the Security Divide between SME and Large Companies: How SME Characteristics Influence Organizational IT Security Investments. Inf. Syst. Front. 2019, 21, 1285–1305. [Google Scholar] [CrossRef]
  4. Ponemon Institute. Global State of Cybersecurity in Small and Medium-Sized Businesses; Technical Report, Keeper Security; Ponemon Institute: Traverse City, MI, USA, 2019. [Google Scholar]
  5. Malatji, M.; Von Solms, S.; Marnewick, A. Socio-Technical Systems Cybersecurity Framework. Inf. Comput. Secur. 2019, 27, 233–272. [Google Scholar] [CrossRef]
  6. Carías, J.F.; Arrizabalaga, S.; Labaka, L.; Hernantes, J. Cyber Resilience Progression Model. Appl. Sci. 2020, 10, 7393. [Google Scholar] [CrossRef]
  7. Davis, M.C.; Challenger, R.; Jayewardene, D.N.W.; Clegg, C.W. Advancing Socio-Technical Systems Thinking: A Call for Bravery. Appl. Ergon. 2014, 45, 171–180. [Google Scholar] [CrossRef]
  8. Gratian, M.; Bandi, S.; Cukier, M.; Dykstra, J.; Ginther, A. Correlating Human Traits and Cyber Security Behavior Intentions. Comput. Secur. 2018, 73, 345–358. [Google Scholar] [CrossRef]
  9. Shojaifar, A.; Fricker, S.A.; Gwerder, M. Automating the Communication of Cybersecurity Knowledge: Multi-Case Study. In Proceedings of the IFIP Advances in Information and Communication Technology, Maribor, Slovenia, 21–23 September 2020; Drevin, L., Von Solms, S., Theocharidou, M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 110–124. [Google Scholar]
  10. Benz, M.; Chatterjee, D. Calculated Risk? A Cybersecurity Evaluation Tool for SMEs. Bus. Horizons 2020, 63, 531–540. [Google Scholar] [CrossRef]
  11. Jaquith, A. Security Metrics: Replacing Fear, Uncertainty, and Doubt; Pearson Education: London, UK, 2007. [Google Scholar]
  12. Slayton, R. Measuring Risk: Computer Security Metrics, Automation, and Learning. IEEE Ann. Hist. Comput. 2015, 37, 32–45. [Google Scholar] [CrossRef]
  13. Yigit Ozkan, B.; Spruit, M.; Wondolleck, R.; Burriel Coll, V. Modelling Adaptive Information Security for SMEs in a Cluster. J. Intellect. Cap. 2019, 21, 235–256. [Google Scholar] [CrossRef] [Green Version]
  14. European DIGITAL SME Alliance. The EU Cybersecurity Act and the Role of Standards for SMEs—Position Paper; Technical Report; European DIGITAL SME Alliance: Brussels, Belgium, 2020. [Google Scholar]
  15. Cholez, H.; Girard, F. Maturity Assessment and Process Improvement for Information Security Management in Small and Medium Enterprises. J. Softw. Evol. Process 2014, 26, 496–503. [Google Scholar] [CrossRef]
  16. Mijnhardt, F.; Baars, T.; Spruit, M. Organizational Characteristics Influencing SME Information Security Maturity. J. Comput. Inf. Syst. 2016, 56, 106–115. [Google Scholar] [CrossRef] [Green Version]
  17. Pendleton, M.; Garcia-Lebron, R.; Cho, J.H.; Xu, S. A Survey on Systems Security Metrics. ACM Comput. Surv. 2016, 49, 62:1–62:35. [Google Scholar] [CrossRef] [Green Version]
  18. Cho, J.H.; Xu, S.; Hurley, P.M.; Mackay, M.; Benjamin, T.; Beaumont, M. STRAM: Measuring the Trustworthiness of Computer-Based Systems. ACM Comput. Surv. 2019, 51, 1–47. [Google Scholar] [CrossRef]
  19. Refsdal, A.; Solhaug, B.; Stolen, K. Cyber-Risk Management; SpringerBriefs in Computer Science; Springer International Publishing: Cham, Switzerland, 2015. [Google Scholar]
  20. Böhme, R.; Freiling, F.C. On Metrics and Measurements. In Dependability Metrics: Advanced Lectures; Eusgeld, I., Freiling, F.C., Reussner, R., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; pp. 7–13. [Google Scholar]
  21. Martens, M.; De Wolf, R.; De Marez, L. Investigating and Comparing the Predictors of the Intention towards Taking Security Measures against Malware, Scams and Cybercrime in General. Comput. Hum. Behav. 2019, 92, 139–150. [Google Scholar] [CrossRef]
  22. Gollmann, D.; Herley, C.; Koenig, V.; Pieters, W.; Sasse, M.A. Socio-Technical Security Metrics. Dagstuhl Rep. 2015, 2015. [Google Scholar] [CrossRef]
  23. Selbst, A.D.; Boyd, D.; Friedler, S.A.; Venkatasubramanian, S.; Vertesi, J. Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29–31 January 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 59–68. [Google Scholar] [CrossRef]
  24. Padayachee, K. Taxonomy of Compliant Information Security Behavior. Comput. Secur. 2012, 31, 673–680. [Google Scholar] [CrossRef]
  25. Menard, P.; Bott, G.J.; Crossler, R.E. User Motivations in Protecting Information Security: Protection Motivation Theory Versus Self-Determination Theory. J. Manag. Inf. Syst. 2017, 34, 1203–1230. [Google Scholar] [CrossRef]
  26. Herath, T.; Rao, H.R. Encouraging Information Security Behaviors in Organizations: Role of Penalties, Pressures and Perceived Effectiveness. Decis. Support Syst. 2009, 47, 154–165. [Google Scholar] [CrossRef]
  27. Hiatt, J. ADKAR: A Model for Change in Business, Government, and Our Community; Prosci: Fort Collins, CO, USA, 2006. [Google Scholar]
  28. Da Veiga, A. An Approach to Information Security Culture Change Combining ADKAR and the ISCA Questionnaire to Aid Transition to the Desired Culture. Inf. Comput. Secur. 2018, 26, 584–612. [Google Scholar] [CrossRef]
  29. Verendel, V. Quantified Security Is a Weak Hypothesis: A Critical Survey of Results and Assumptions. In Proceedings of the 2009 Workshop on New Security Paradigms Workshop, Oxford, UK, 8–11 September 2009; Association for Computing Machinery: Oxford, UK, 2009; pp. 37–50. [Google Scholar] [CrossRef]
  30. Rudolph, M.; Schwarz, R. A Critical Survey of Security Indicator Approaches. In Proceedings of the 2012 Seventh International Conference on Availability, Reliability and Security, Prague, Czech Republic, 20–24 August 2012; pp. 291–300. [Google Scholar] [CrossRef]
  31. Husák, M.; Komárková, J.; Bou-Harb, E.; Čeleda, P. Survey of Attack Projection, Prediction, and Forecasting in Cyber Security. IEEE Commun. Surv. Tutor. 2019, 21, 640–660. [Google Scholar] [CrossRef] [Green Version]
  32. Iannacone, M.D.; Bridges, R.A. Quantifiable & Comparable Evaluations of Cyber Defensive Capabilities: A Survey & Novel, Unified Approach. Comput. Secur. 2020, 96, 101907. [Google Scholar] [CrossRef]
  33. Kordy, B.; Piètre-Cambacédès, L.; Schweitzer, P. DAG-Based Attack and Defense Modeling: Don’t Miss the Forest for the Attack Trees. Comput. Sci. Rev. 2014, 13–14, 1–38. [Google Scholar] [CrossRef] [Green Version]
  34. Cadena, A.; Gualoto, F.; Fuertes, W.; Tello-Oquendo, L.; Andrade, R.; Tapia, F.; Torres, J. Metrics and Indicators of Information Security Incident Management: A Systematic Mapping Study. In Developments and Advances in Defense and Security; Rocha, Á., Pereira, R.P., Eds.; Springer: Singapore, 2020; pp. 507–519. [Google Scholar]
  35. Knowles, W.; Prince, D.; Hutchison, D.; Disso, J.F.P.; Jones, K. A Survey of Cyber Security Management in Industrial Control Systems. Int. J. Crit. Infrastruct. Prot. 2015, 9, 52–80. [Google Scholar] [CrossRef]
  36. Asghar, M.R.; Hu, Q.; Zeadally, S. Cybersecurity in Industrial Control Systems: Issues, Technologies, and Challenges. Comput. Netw. 2019, 165, 106946. [Google Scholar] [CrossRef]
  37. Eckhart, M.; Brenner, B.; Ekelhart, A.; Weippl, E. Quantitative Security Risk Assessment for Industrial Control Systems: Research Opportunities and Challenges. J. Internet Serv. Inf. Secur. 2019. [Google Scholar] [CrossRef]
  38. Jing, X.; Yan, Z.; Pedrycz, W. Security Data Collection and Data Analytics in the Internet: A Survey. IEEE Commun. Surv. Tutor. 2019, 21, 586–618. [Google Scholar] [CrossRef] [Green Version]
  39. Sengupta, S.; Chowdhary, A.; Sabur, A.; Alshamrani, A.; Huang, D.; Kambhampati, S. A Survey of Moving Target Defenses for Network Security. IEEE Commun. Surv. Tutor. 2020. [Google Scholar] [CrossRef] [Green Version]
  40. Liang, X.; Xiao, Y. Game Theory for Network Security. IEEE Commun. Surv. Tutor. 2013, 15, 472–486. [Google Scholar] [CrossRef] [Green Version]
  41. Ramos, A.; Lazar, M.; Filho, R.H.; Rodrigues, J.J.P.C. Model-Based Quantitative Network Security Metrics: A Survey. IEEE Commun. Surv. Tutor. 2017, 19, 2704–2734. [Google Scholar] [CrossRef]
  42. Cherdantseva, Y.; Burnap, P.; Blyth, A.; Eden, P.; Jones, K.; Soulsby, H.; Stoddart, K. A Review of Cyber Security Risk Assessment Methods for SCADA Systems. Comput. Secur. 2016, 56, 1–27. [Google Scholar] [CrossRef] [Green Version]
  43. Morrison, P.; Moye, D.; Pandita, R.; Williams, L. Mapping the Field of Software Life Cycle Security Metrics. Inf. Softw. Technol. 2018, 102, 146–159. [Google Scholar] [CrossRef]
  44. He, W.; Li, H.; Li, J. Unknown Vulnerability Risk Assessment Based on Directed Graph Models: A Survey. IEEE Access 2019, 7, 168201–168225. [Google Scholar] [CrossRef]
  45. Xie, H.; Yan, Z.; Yao, Z.; Atiquzzaman, M. Data Collection for Security Measurement in Wireless Sensor Networks: A Survey. IEEE Internet Things J. 2019, 6, 2205–2224. [Google Scholar] [CrossRef] [Green Version]
  46. Ray, J.; Marshall, H.; De Sousa, V.; Jean, J.; Warren, S.; Bachand, S. Cyber Threatscape Report. 2020. Available online: (accessed on 16 November 2020).
  47. Riordan, J.; Lippmann, R.P. Threat-Based Risk Assessment for Enterprise Networks. Linc. Lab. J. 2016, 22, 33–45. [Google Scholar]
  48. Ferguson, N.; Schneier, B. Practical Cryptography; Wiley: Hoboken, NJ, USA, 2003. [Google Scholar]
  49. Lippmann, R.P.; Riordan, J.F.; Yu, T.; Watson, K.K. Continuous Security Metrics for Prevalent Network Threats: Introduction and First Four Metrics; Technical Report; MIT Lincoln Laboratory: Lexington, MA, USA, 2012. [Google Scholar]
  50. Lo, C.C.; Chen, W.J. A Hybrid Information Security Risk Assessment Procedure Considering Interdependences between Controls. Expert Syst. Appl. 2012, 39, 247–257. [Google Scholar] [CrossRef]
  51. Brožová, H.; Šup, L.; Rydval, J.; Sadok, M.; Bednar, P. Information Security Management: ANP Based Approach for Risk Analysis and Decision Making. AGRIS Line Pap. Econ. Inform. 2016, 8, 1–11. [Google Scholar] [CrossRef] [Green Version]
  52. Rass, S.; Alshawish, A.; Abid, M.A.; Schauer, S.; Zhu, Q.; Meer, H.D. Physical Intrusion Games—Optimizing Surveillance by Simulation and Game Theory. IEEE Access 2017, 5, 8394–8407. [Google Scholar] [CrossRef]
  53. Evesti, A.; Ovaska, E. Comparison of Adaptive Information Security Approaches. ISRN Artif. Intell. 2013, 2013. [Google Scholar] [CrossRef] [Green Version]
  54. Baars, T.; Mijnhardt, F.; Vlaanderen, K.; Spruit, M. An Analytics Approach to Adaptive Maturity Models Using Organizational Characteristics. Decis. Anal. 2016, 3, 5. [Google Scholar] [CrossRef] [Green Version]
  55. de las Cuevas, P.; Mora, A.M.; Merelo, J.J.; Castillo, P.A.; García-Sánchez, P.; Fernández-Ares, A. Corporate Security Solutions for BYOD: A Novel User-Centric and Self-Adaptive System. Comput. Commun. 2015, 68, 83–95. [Google Scholar] [CrossRef]
  56. Kim, W.; Choi, B.J.; Hong, E.K.; Kim, S.K.; Lee, D. A Taxonomy of Dirty Data. Data Min. Knowl. Discov. 2003, 7, 81–99. [Google Scholar] [CrossRef]
  57. Widmer, G.; Kubat, M. Learning in the Presence of Concept Drift and Hidden Contexts. Mach. Learn. 1996, 23, 69–101. [Google Scholar] [CrossRef] [Green Version]
  58. van Haastrecht, M.; Sarhan, I.; Yigit Ozkan, B.; Brinkhuis, M.; Spruit, M. SYMBALS: A Systematic Review Methodology Blending Active Learning and Snowballing. Front. Res. Metrics Anal. 2021, 6. [Google Scholar] [CrossRef]
  59. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report; Keele University and Durham University Joint Report: Keele, UK, 2007. [Google Scholar]
  60. Liberati, A.; Altman, D.G.; Tetzlaff, J.; Mulrow, C.; Gøtzsche, P.C.; Ioannidis, J.P.A.; Clarke, M.; Devereaux, P.J.; Kleijnen, J.; Moher, D. The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration. J. Clin. Epidemiol. 2009, 62, e1–e34. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  61. Moher, D.; Shamseer, L.; Clarke, M.; Ghersi, D.; Liberati, A.; Petticrew, M.; Shekelle, P.; Stewart, L.A.; PRISMA-P Group. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 Statement. Syst. Rev. 2015, 4, 1. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  62. van de Schoot, R.; de Bruin, J.; Schram, R.; Zahedi, P.; de Boer, J.; Weijdema, F.; Kramer, B.; Huijts, M.; Hoogerwerf, M.; Ferdinands, G.; et al. An Open Source Machine Learning Framework for Efficient and Transparent Systematic Reviews. Nat. Mach. Intell. 2021, 3, 125–133. [Google Scholar] [CrossRef]
  63. Wohlin, C. Guidelines for Snowballing in Systematic Literature Studies and a Replication in Software Engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering, London, UK, 13–14 May 2014; Association for Computing Machinery: London, UK, 2014; pp. 1–10. [Google Scholar] [CrossRef]
  64. Mourão, E.; Pimentel, J.F.; Murta, L.; Kalinowski, M.; Mendes, E.; Wohlin, C. On the Performance of Hybrid Search Strategies for Systematic Literature Reviews in Software Engineering. Inf. Softw. Technol. 2020, 123, 106294. [Google Scholar] [CrossRef]
  65. Rose, M.E.; Kitchin, J.R. Pybliometrics: Scriptable Bibliometrics Using a Python Interface to Scopus. SoftwareX 2019, 10, 100263. [Google Scholar] [CrossRef]
  66. Harrison, H.; Griffin, S.J.; Kuhn, I.; Usher-Smith, J.A. Software Tools to Support Title and Abstract Screening for Systematic Reviews in Healthcare: An Evaluation. BMC Med Res. Methodol. 2020, 20, 7. [Google Scholar] [CrossRef]
  67. Brereton, P.; Kitchenham, B.A.; Budgen, D.; Turner, M.; Khalil, M. Lessons from Applying the Systematic Literature Review Process within the Software Engineering Domain. J. Syst. Softw. 2007, 80, 571–583. [Google Scholar] [CrossRef] [Green Version]
  68. Stolfo, S.; Bellovin, S.M.; Evans, D. Measuring Security. IEEE Secur. Priv. 2011, 9, 60–65. [Google Scholar] [CrossRef]
  69. Noel, S.; Jajodia, S. Metrics Suite for Network Attack Graph Analytics. In Proceedings of the 9th Annual Cyber and Information Security Research Conference, Oak Ridge, TN, USA, 8–10 April 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 5–8. [Google Scholar] [CrossRef]
  70. Spruit, M.; Roeling, M. ISFAM: The Information Security Focus Area Maturity Model. In Proceedings of the ECIS 2014, Atlanta, GA, USA, 9–11 June 2014. [Google Scholar]
  71. Allodi, L.; Massacci, F. Security Events and Vulnerability Data for Cybersecurity Risk Estimation. Risk Anal. 2017, 37, 1606–1627. [Google Scholar] [CrossRef]
  72. Cormack, G.V.; Grossman, M.R. Engineering Quality and Reliability in Technology-Assisted Review. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, Pisa, Italy, 17–21 July 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 75–84. [Google Scholar] [CrossRef] [Green Version]
  73. Zhou, Y.; Zhang, H.; Huang, X.; Yang, S.; Babar, M.A.; Tang, H. Quality Assessment of Systematic Reviews in Software Engineering: A Tertiary Study. In Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering, Nanjing, China, 27–29 April 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 1–14. [Google Scholar] [CrossRef]
  74. Bhilare, D.S.; Ramani, A.; Tanwani, S. Information Security Assessment and Reporting: Distributed Defense. J. Comput. Sci. 2008, 4, 864–872. [Google Scholar]
  75. You, Y.; Oh, S.; Lee, K. Advanced Security Assessment for Control Effectiveness. In Information Security Applications; Lecture Notes in Computer Science; Rhee, K.H., Yi, J.H., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 383–393. [Google Scholar]
  76. Yang, N.; Singh, T.; Johnston, A. A Replication Study of User Motivation in Protecting Information Security Using Protection Motivation Theory and Self Determination Theory. AIS Trans. Replication Res. 2020, 6. [Google Scholar] [CrossRef]
  77. Manifavas, C.; Fysarakis, K.; Rantos, K.; Hatzivasilis, G. DSAPE—Dynamic Security Awareness Program Evaluation. In Human Aspects of Information Security, Privacy, and Trust; Lecture Notes in Computer Science; Tryfonas, T., Askoxylakis, I., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 258–269. [Google Scholar]
  78. Shin, Y.; Meneely, A.; Williams, L.; Osborne, J.A. Evaluating Complexity, Code Churn, and Developer Activity Metrics as Indicators of Software Vulnerabilities. IEEE Trans. Softw. Eng. 2011, 37, 772–787. [Google Scholar] [CrossRef] [Green Version]
  79. Marconato, G.V.; Kaâniche, M.; Nicomette, V. A Vulnerability Life Cycle-Based Security Modeling and Evaluation Approach. Comput. J. 2013, 56, 422–439. [Google Scholar] [CrossRef] [Green Version]
  80. Shameli-Sendi, A.; Shajari, M.; Hassanabadi, M.; Jabbarifar, M.; Dagenais, M. Fuzzy Multi-Criteria Decision-Making for Information Security Risk Assessment. Open Cybern. Syst. J. 2012, 6, 26–37. [Google Scholar] [CrossRef]
  81. Silva, M.M.; de Gusmão, A.P.H.; Poleto, T.; e Silva, L.C.; Costa, A.P.C.S. A Multidimensional Approach to Information Security Risk Management Using FMEA and Fuzzy Theory. Int. J. Inf. Manag. 2014, 34, 733–740. [Google Scholar] [CrossRef]
  82. Li, X.; Li, H.; Sun, B.; Wang, F. Assessing Information Security Risk for an Evolving Smart City Based on Fuzzy and Grey FMEA. J. Intell. Fuzzy Syst. 2018, 34, 2491–2501. [Google Scholar] [CrossRef]
  83. Dantu, R.; Kolan, P. Risk Management Using Behavior Based Bayesian Networks. In Intelligence and Security Informatics; Lecture Notes in Computer Science; Kantor, P., Muresan, G., Roberts, F., Zeng, D.D., Wang, F.Y., Chen, H., Merkle, R.C., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 115–126. [Google Scholar]
  84. Sahinoglu, M. An Input–Output Measurable Design for the Security Meter Model to Quantify and Manage Software Security Risk. IEEE Trans. Instrum. Meas. 2008, 57, 1251–1260. [Google Scholar] [CrossRef]
  85. Dantu, R.; Kolan, P.; Cangussu, J. Network Risk Management Using Attacker Profiling. Secur. Commun. Netw. 2009, 2, 83–96. [Google Scholar] [CrossRef]
  86. Feng, N.; Wang, H.J.; Li, M. A Security Risk Analysis Model for Information Systems: Causal Relationships of Risk Factors and Vulnerability Propagation Analysis. Inf. Sci. 2014, 256, 57–73. [Google Scholar] [CrossRef]
  87. Chen, M.K.; Wang, S.C. A Hybrid Delphi-Bayesian Method to Establish Business Data Integrity Policy: A Benchmark Data Center Case Study. Kybernetes 2010, 39, 800–824. [Google Scholar] [CrossRef]
  88. Chan, C.L. Information Security Risk Modeling Using Bayesian Index. Comput. J. 2011, 54, 628–638. [Google Scholar] [CrossRef]
  89. Proença, D.; Borbinha, J. Information Security Management Systems—A Maturity Model Based on ISO/IEC 27001. In Business Information Systems; Lecture Notes in Business Information Processing; Abramowicz, W., Paschke, A., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 102–114. [Google Scholar]
  90. Luh, R.; Temper, M.; Tjoa, S.; Schrittwieser, S.; Janicke, H. PenQuest: A Gamified Attacker/Defender Meta Model for Cyber Security Assessment and Education. J. Comput. Virol. Hacking Tech. 2020, 16, 19–61. [Google Scholar] [CrossRef] [Green Version]
  91. Malatji, M.; Marnewick, A.; von Solms, S. Validation of a Socio-Technical Management Process for Optimising Cybersecurity Practices. Comput. Secur. 2020, 95, 101846. [Google Scholar] [CrossRef]
  92. Carías, J.F.; Borges, M.R.S.; Labaka, L.; Arrizabalaga, S.; Hernantes, J. Systematic Approach to Cyber Resilience Operationalization in SMEs. IEEE Access 2020, 8, 174200–174221. [Google Scholar] [CrossRef]
  93. AlHogail, A. Design and Validation of Information Security Culture Framework. Comput. Hum. Behav. 2015, 49, 567–575. [Google Scholar] [CrossRef]
  94. da Veiga, A.; Astakhova, L.V.; Botha, A.; Herselman, M. Defining Organisational Information Security Culture—Perspectives from Academia and Industry. Comput. Secur. 2020, 92, 101713. [Google Scholar] [CrossRef]
  95. Sittig, D.F.; Singh, H. A Socio-Technical Approach to Preventing, Mitigating, and Recovering from Ransomware Attacks. Appl. Clin. Inform. 2016, 7, 624–632. [Google Scholar] [CrossRef] [Green Version]
  96. Yigit Ozkan, B.; Spruit, M. Addressing SME Characteristics for Designing Information Security Maturity Models. In Proceedings of the IFIP Advances in Information and Communication Technology, Mytilene, Greece, 8–10 July 2020; Clarke, N., Furnell, S., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 161–174. [Google Scholar]
  97. Kam, H.J.; Menard, P.; Ormond, D.; Crossler, R.E. Cultivating Cybersecurity Learning: An Integration of Self-Determination and Flow. Comput. Secur. 2020, 96, 101875. [Google Scholar] [CrossRef]
  98. Zimmermann, V.; Renaud, K. Moving from a “human-as-Problem” to a “human-as-Solution” Cybersecurity Mindset. Int. J. Hum. Comput. Stud. 2019, 131, 169–187. [Google Scholar] [CrossRef]
  99. Liu, Y.; Sarabi, A.; Zhang, J.; Naghizadeh, P.; Karir, M.; Bailey, M.; Liu, M. Cloudy with a Chance of Breach: Forecasting Cyber Security Incidents. In Proceedings of the 24th {USENIX} Security Symposium ({USENIX} Security 15), Washington, DC, USA, 12–14 August 2015; pp. 1009–1024. [Google Scholar]
  100. Casola, V.; De Benedictis, A.; Rak, M.; Villano, U. Toward the Automation of Threat Modeling and Risk Assessment in IoT Systems. Internet Things 2019, 7, 100056. [Google Scholar] [CrossRef]
  101. Manadhata, P.K.; Wing, J.M. An Attack Surface Metric. IEEE Trans. Softw. Eng. 2011, 37, 371–386. [Google Scholar] [CrossRef]
  102. Alencar Rigon, E.; Merkle Westphall, C.; Ricardo dos Santos, D.; Becker Westphall, C. A Cyclical Evaluation Model of Information Security Maturity. Inf. Manag. Comput. Secur. 2014, 22, 265–278. [Google Scholar] [CrossRef]
  103. Damenu, T.K.; Beaumont, C. Analysing Information Security in a Bank Using Soft Systems Methodology. Inf. Comput. Secur. 2017, 25, 240–258. [Google Scholar] [CrossRef]
  104. Shokouhyar, S.; Panahifar, F.; Karimisefat, A.; Nezafatbakhsh, M. An Information System Risk Assessment Model: A Case Study in Online Banking System. Int. J. Electron. Secur. Digit. Forensics 2018, 10, 39–60. [Google Scholar] [CrossRef]
  105. Depoy, J.; Phelan, J.; Sholander, P.; Smith, B.; Varnado, G.B.; Wyss, G. Risk Assessment for Physical and Cyber Attacks on Critical Infrastructures. In Proceedings of the MILCOM 2005—2005 IEEE Military Communications Conference, Atlantic City, NJ, USA, 17–20 October 2005; Volume 3, pp. 1961–1969. [Google Scholar] [CrossRef]
  106. Hasle, H.; Kristiansen, Y.; Kintel, K.; Snekkenes, E. Measuring Resistance to Social Engineering. In Information Security Practice and Experience; Lecture Notes in Computer Science; Deng, R.H., Bao, F., Pang, H., Zhou, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 132–143. [Google Scholar]
  107. Villarrubia, C.; Fernández-Medina, E.; Piattini, M. Metrics of Password Management Policy. In Proceedings of the Computational Science and Its Applications—ICCSA 2006, Glasgow, UK, 8–11 May 2006; Lecture Notes in Computer Science. Gavrilova, M., Gervasi, O., Kumar, V., Tan, C.J.K., Taniar, D., Laganá, A., Mun, Y., Choo, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1013–1023. [Google Scholar] [CrossRef]
  108. Grunske, L.; Joyce, D. Quantitative Risk-Based Security Prediction for Component-Based Systems with Explicitly Modeled Attack Profiles. J. Syst. Softw. 2008, 81, 1327–1345. [Google Scholar] [CrossRef]
  109. Bojanc, R.; Jerman-Blažič, B.; Tekavčič, M. Managing the Investment in Information Security Technology by Use of a Quantitative Modeling. Inf. Process. Manag. 2012, 48, 1031–1052. [Google Scholar] [CrossRef]
  110. Rantos, K.; Fysarakis, K.; Manifavas, C. How Effective Is Your Security Awareness Program? An Evaluation Methodology. Inf. Secur. J. Glob. Perspect. 2012, 21, 328–345. [Google Scholar] [CrossRef]
  111. Bojanc, R.; Jerman-Blažič, B. A Quantitative Model for Information-Security Risk Management. Eng. Manag. J. 2013, 25, 25–37. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Visualisation of the SYMBALS steps as applied in our cybersecurity metric systematic review.
Figure 2. The consideration of the five ADKAR factors over the years, based on the 60 inclusions of our systematic review.
Figure 3. A socio-technical system embedded within an external environment, based on Davis et al. [7].
Figure 4. A visualisation of the framework presented in Table 10 using the representation of Figure 3.
Table 1. Existing cybersecurity metric (systematic) reviews. The research focus area is shown, with “generic” indicating research without a specific focus area. We consider social factors to be evaluated when the review covers socio-technical cybersecurity metrics.
Research | Year | Focus Area | Social Factors Evaluated
Current paper | 2021 | Generic | ✓
Verendel [29] | 2009 | Generic | ×
Rudolph and Schwarz [30] | 2012 | Generic | ×
Pendleton et al. [17] | 2016 | Generic | ✓
Cho et al. [18] | 2019 | Generic | ✓
Husák et al. [31] | 2019 | Attack Prediction | ✓
Iannacone and Bridges [32] | 2020 | Cyber Defense | ×
Kordy et al. [33] | 2014 | Directed Acyclic Graphs | ×
Cadena et al. [34] | 2020 | Incident Management | ✓
Knowles et al. [35] | 2015 | Industrial Control Systems | ✓
Asghar et al. [36] | 2019 | Industrial Control Systems | ✓
Eckhart et al. [37] | 2019 | Industrial Control Systems | ×
Jing et al. [38] | 2019 | Internet Security | ×
Sengupta et al. [39] | 2020 | Moving Target Defense | ×
Liang and Xiao [40] | 2013 | Network Security | ×
Ramos et al. [41] | 2017 | Network Security | ✓
Cherdantseva et al. [42] | 2016 | SCADA Systems | ✓
Morrison et al. [43] | 2018 | Software Security | ×
He et al. [44] | 2019 | Unknown Vulnerabilities | ×
Xie et al. [45] | 2019 | Wireless Networks | ×
Table 2. Various classes of metric aggregation strategies, and important security-related properties that strategies within each class can possess.
Aggregation | Injective | Idempotent | Prioritisation | Dependence | Weakest Link
Weighted linear combination××
Weighted product×××
Weighted maximum×✓ *×✓ *
Weighted complementary product×✓ *×✓ *
Bayesian network××
* Strategies within the classes of weighted maximum and weighted complementary product cannot satisfy the prioritisation and weakest link properties at the same time.
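The behavioural differences between these aggregation classes, for instance that a weighted product exhibits weakest-link behaviour while a weighted linear combination can dilute a single critically poor score, can be made concrete in code. The functions and values below are an illustrative sketch under the assumption of metric scores normalised to [0, 1]; they are not the review's formal definitions.

```python
# Illustrative sketch of three aggregation strategy classes from Table 2,
# applied to hypothetical metric scores normalised to [0, 1].

def weighted_linear_combination(scores, weights):
    """Weighted average: supports prioritisation via weights, but a single
    very poor score can be masked (no weakest-link property)."""
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

def weighted_product(scores, weights):
    """Weighted product: a zero score drives the result to zero,
    capturing the weakest-link property."""
    result = 1.0
    for w, s in zip(weights, scores):
        result *= s ** w
    return result

def weighted_maximum(scores, weights):
    """Takes the largest weighted score; with unit weights this is
    idempotent, since aggregating identical scores returns that score."""
    return max(w * s for w, s in zip(weights, scores))

scores = [0.9, 0.9, 0.0]      # third metric signals a critical weakness
weights = [1.0, 1.0, 1.0]

print(round(weighted_linear_combination(scores, weights), 2))  # 0.6: weakness diluted
print(weighted_product(scores, weights))                       # 0.0: weakest link dominates
```

The contrast in outputs is the point: an SME tool relying on a linear combination would report a reassuring 0.6 despite a completely failed control, whereas a weakest-link strategy surfaces the failure directly.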
Table 3. Statistics regarding the different databases used in the search procedure.
Web of Science | 7889 | 1782
ACM Digital Library | 2000 | 660
IEEE Xplore | 2000 | 1256
PubMed Central | 660 | 111
Table 4. The quality criteria applied to 60 papers during the quality assessment phase. Possible responses were strongly disagree (SD), disagree (D), neutral (N), agree (A), or strongly agree (SA).
Category | Criterion | SD | D | N | A | SA
Reporting | There is a clear statement of the research aims. | 0 | 4 | 7 | 28 | 21
Reporting | There is an adequate description of the research context. | 0 | 6 | 11 | 17 | 26
Reporting | The paper is based on research. | 0 | 3 | 3 | 16 | 38
Rigour | Metrics used in the study are clearly defined. | 0 | 10 | 19 | 16 | 15
Rigour | Metrics are adequately measured and validated. | 1 | 24 | 22 | 8 | 5
Rigour | The data analysis is sufficiently rigorous. | 0 | 21 | 17 | 14 | 8
Credibility | Findings are clearly stated and related to research aims. | 0 | 8 | 19 | 25 | 8
Credibility | Limitations and threats to validity are adequately discussed. | 30 | 18 | 8 | 2 | 2
Relevance | The study is of value to research and/or practice. | 0 | 9 | 12 | 28 | 11
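Each criterion's five response counts sum to the 60 included papers, which allows simple per-criterion summaries. The short script below is an illustrative, hypothetical helper (criterion names abbreviated; not part of the review) that reports the share of papers rated Agree or Strongly Agree for each criterion.

```python
# Hypothetical summary of the Table 4 quality-assessment counts.
# Each tuple holds (SD, D, N, A, SA) counts over the 60 included papers.
criteria = {
    "Clear statement of research aims":    (0, 4, 7, 28, 21),
    "Adequate description of context":     (0, 6, 11, 17, 26),
    "Based on research":                   (0, 3, 3, 16, 38),
    "Metrics clearly defined":             (0, 10, 19, 16, 15),
    "Metrics measured and validated":      (1, 24, 22, 8, 5),
    "Data analysis sufficiently rigorous": (0, 21, 17, 14, 8),
    "Findings clearly stated":             (0, 8, 19, 25, 8),
    "Limitations adequately discussed":    (30, 18, 8, 2, 2),
    "Value to research and/or practice":   (0, 9, 12, 28, 11),
}

def agreement_share(counts):
    """Fraction of responses that are Agree or Strongly Agree."""
    sd, d, n, a, sa = counts
    return (a + sa) / (sd + d + n + a + sa)

for name, counts in criteria.items():
    print(f"{name}: {agreement_share(counts):.0%}")
```

This kind of summary makes the table's main finding easy to verify: limitations and threats to validity are by far the weakest area, with only a small fraction of papers rated positively on that criterion.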
Table 5. The ADKAR factors and the related concepts we encountered that were associated with each factor.
ADKAR Factor | Abbreviation | Related Concepts
Desire | DE | Motivation, loyalty, attendance
Ability | AB | Behaviour, capability, capacity, experience, skill
Reinforcement | RE | Culture, education, evaluation, policy, training
Table 6. The various security assessment concepts discussed in research, with an indication of the ADKAR elements covered and the aggregation strategies employed. Each paper considers at least one ADKAR element; a paper may employ no aggregation at all, or several aggregation strategies. Reviews were not labelled with a specific assessment concept.
ADKAR Elements | Aggregation Strategy Classes
Table 7. The different social viewpoints considered in our inclusions.
Social Viewpoint | Total | AW | DE | KN | AB | RE | Real-Life Threat
Table 8. Different aggregation strategy classes and the situations in which they were employed.
Aggregation Strategy | Theoretical | Implementation | Review
Table 9. ADKAR and aggregation strategy frequencies of enterprise research and other research.
Application Area
Property | Values | Any Enterprise | M/L Enterprise | Other
Table 10. Socio-technical cybersecurity framework for SMEs.
Socio-Technical Aspects

Start-ups
- Goals: Realise cybersecurity necessity [5] due to external environment factors. Move from a non-existent cybersecurity culture to initial, informal cybersecurity measures [5,10,91].
- People: Define training plans and start creating cybersecurity awareness [92].
- Culture: Initial cybersecurity policies and procedures show management commitment, ensuring employee support [93,94].
- Processes: No standardised processes yet [5]. The SME gains awareness of cybersecurity policies, processes, procedures, standards, and regulation.
- Technology: Employ a threat-based risk assessment tool requiring no knowledge of SME assets, using no/intuitive aggregation. External support needed to understand and implement countermeasures.
- Assets: Understand relevant and critical cybersecurity asset types [92].

Digitally-dependent
- Goals: Start formalising cybersecurity processes. Define, manage, and communicate cybersecurity strategy [5,10,91,92].
- People: Continue building awareness [94]. Stimulate desire through knowledge acquisition [93]. Evaluate gaps in ability [92].
- Culture: Management support and cybersecurity trainings stimulate employees [94] and change their perception [93].
- Processes: Formulate basic (reactive) cybersecurity policies, processes, and procedures [5,94]; likely not yet universally applied across business units [5].
- Technology: Employ a threat-based risk assessment tool using no/intuitive aggregation. External support needed to implement countermeasures.
- Assets: Systematically identify and document relevant assets and their baseline configurations [92].

Digitally-based
- Goals: Establish a formal cybersecurity programme that facilitates continuous improvement and compliance with regulation [5,10,91,92].
- People: Advance cybersecurity knowledge and ability through clearly communicated and documented trainings [5,92,94].
- Culture: Regular communication and education [94], backed by rewards and deterrents [93], ensure secure employee behaviour [93,94].
- Processes: Processes defined and documented proactively, communicated via awareness and training sessions [5,94]. Information sharing agreements defined [92].
- Technology: Use a risk assessment framework or maturity model with adequately motivated aggregation. Implement basic countermeasures [92]; external support needed for complex countermeasures.
- Assets: Manage asset changes and periodically maintain assets [92].

Digital enablers
- Goals: Embed and automate cybersecurity processes [5,10,15,91], which, combined with collaborative stakeholder relationships [92], promote internal and external trust in the SME cybersecurity posture [94].
- People: Employees mutually reinforce their cybersecurity abilities, possibly captured in official cybersecurity roles [94].
- Culture: Regular evaluations [5,15] stimulate naturally secure behaviour [94], where national culture and regulations are recognised [93]. An environment of trust with stakeholders exists [92,94].
- Processes: Successive comparisons of assessment results facilitate continuous process improvement [5,15]. Business continuity plan defined and communicated to external stakeholders [92].
- Technology: Use a risk assessment framework or maturity model with advanced aggregation. Independently implement countermeasures [92] and actively detect anomalies [92], with the help of automated tools [5].
- Assets: Identify and document internal and external dependencies of assets, to help in determining the SME attack surface. Actively monitor assets [92].
van Haastrecht, M.; Yigit Ozkan, B.; Brinkhuis, M.; Spruit, M. Respite for SMEs: A Systematic Review of Socio-Technical Cybersecurity Metrics. Appl. Sci. 2021, 11, 6909.