Systems

17 December 2025

Measuring the Complexity of SysML Models †

Fédération ONERA ISAE-SUPAERO ENAC, Université de Toulouse, 31400 Toulouse, France
* Author to whom correspondence should be addressed.
† This paper is an extended version of our published paper “Quantitative Assessment of Complexity in SysML Models”, presented at the 38th Annual European Simulation and Modelling Conference (ESM2024), held in San Sebastian, Spain, 23–25 October 2024.
This article belongs to the Special Issue Advanced Model-Based Systems Engineering

Abstract

Model-Based Systems Engineering (MBSE) performs systems analysis, design, and development on models of the systems under study, bringing together different viewpoints, with a step-by-step increase in detail. As such, it replaces traditional document-centric approaches with a methodology that uses structured domain models for information exchange and system representation throughout the engineering lifecycle. MBSE relies on different modeling languages, each with distinct features and approaches. SysML is a widely used language in MBSE, and many tools exist for it. This paper is interested in the complexity of SysML models, as it may yield useful quantitative indicators to assess and predict the complexity of systems modeled in SysML, and, by extension, the complexity of their subsequent development. Two avenues are explored: objective structural metrics applied to the SysML model and assessment of the team experience. The proposed approach is implemented as a Java prototype. Although simpler models are easier to comprehend and modify, they may fail to capture the full scope of system functionality. Conversely, more complex models, though richer in detail, require greater development effort and pose challenges for maintenance and stakeholder communication. Technical and environmental factors are integrated into the complexity assessment to reflect real-world project conditions. A drone-based image acquisition system serves as a case study.

1. Introduction

The Systems Modeling Language (SysML) is an international standard of the Object Management Group (OMG) [1] that allows systems engineers to model the requirements, structure, and behavior of a system. SysML is widely used in industry and academia for modeling complex systems [2,3,4]. Although it provides a powerful representation of, among other things, the system architecture, its flexibility can lead to an increase in the complexity of the system models themselves, making them difficult to manage, understand, and maintain over time.
The increasing complexity of system models, characterized by intricate components, behaviors, interdependencies, and subsystem interactions, has made system engineering more critical than ever. Model-Based Systems Engineering (MBSE) has emerged as a key approach to address system complexity, with SysML being one of the major systems modeling languages [5]. SysML models enable system engineers to capture design, behavior, structure, and performance details in a comprehensive manner. However, the sheer complexity of these models can hinder their visualization, understanding, and practical application [6]. Some challenges include:
  • Challenge of Visualizing Structural Complexity: As systems grow in scale and incorporate numerous interdependent components, subsystems, and interfaces, visualizing their structural complexity within SysML models becomes a significant challenge [7].
  • Impact on System Development Procedures: The complexity embedded within SysML models influences system development procedures and outcomes. Excessive model complexity can lead to inefficiencies in the design process, slowing development cycles, and increasing the likelihood of errors. Balancing the level of complexity while maintaining a comprehensive representation of the system becomes a key consideration to optimize procedural efficiency [8].
  • Quality, Accuracy, and Reliability: The inherent complexity of SysML models directly influences the quality, precision, and reliability of the system they represent [9]. Poorly managed complexity can result in inconsistencies, inaccuracies, and reduced model reliability. Developing methodologies to measure and manage complexity is essential to ensure the robustness of system designs and to improve overall system outcomes.
  • Development Cost: The complexity of SysML models has a direct impact on the cost of system development. Highly complex models can require significant resources, including time, expertise, and computational tools, to develop, maintain, and validate the systems. As the complexity of systems and their models increases, the likelihood of rework, delays, and the need for specialized skills also increases, contributing to higher development costs. In addition, such complexity can result in overlooked issues or errors, leading to expensive downstream fixes [10]. Effective strategies for managing and reducing complexity are therefore crucial not only for technical optimization but also for controlling development budgets and ensuring cost-effective engineering practices [11].
A broader theoretical foundation for understanding model complexity is provided by the work of Efatmaneshnik and Ryan [12], who distinguish between objective and subjective complexity in model-based representations. Objective complexity concerns the inherent structure of a model—its components, interactions, and topological arrangement—while subjective complexity reflects the cognitive effort required by engineers and stakeholders to comprehend, modify, or reason about the model. Their work demonstrates that a usable notion of model complexity must address not only measurable structural properties but also the experiential factors that influence model interpretation and engineering decision-making.
This dual perspective aligns strongly with the motivation of the present work. SysML models simultaneously exhibit structural intricacy (e.g., numerous blocks, states, and interconnections) and cognitive or experiential challenges during design, verification, and communication activities. As models evolve, structural growth often increases subjective difficulty, impacting development speed, error likelihood, and stakeholder engagement. Building on the distinction introduced in [12], this paper proposes a unified assessment framework that integrates both objective structural metrics and team-experience-based indicators. The goal is to operationalize the dual-complexity viewpoint within the context of SysML and Model-Based Systems Engineering, providing system designers with quantitative measures that reflect both the topology of the model and the practical effort required to develop and maintain it.
The aim of this paper is to consider system model complexity (in particular, for SysML models), its importance, and to appreciate how understanding such complexity can be used as an advantage. The idea is to explicitly quantify complexity and understand it, so that design engineers can use this understanding to better master complexity. To this end, complexity metrics are proposed for four SysML diagrams (Requirements diagram, Use Case diagram, Block diagram, and State Machine diagram). Basically, two complementary groups of metrics can be identified: objective structural metrics computed on the models, and effort-based measures that account for the experience of the development team. The combination of the two groups allows one to better appreciate the development at hand and to guide developers’ design choices, thereby enhancing both technical outcomes and economic efficiency. This paper proposes a framework to assess the complexity of SysML models, comprising:
  • Defining SysML model complexity in terms of metrics (combining objective metrics with team-experience metrics that take into account the experience of a development team);
  • Analyzing SysML models using these metrics (and a Java-based complexity evaluation tool).
The framework is illustrated with a case study, modeled using TTool (see Note 1), an open-source SysML toolkit for system design that supports SysML modeling and formal verification, allowing early validation of system properties. Java was retained for the development of the evaluation tool itself because of its platform independence, object-oriented nature, and extensive libraries that facilitate rapid development and testing.
The paper is an extended version of [13] and is organized as follows. Section 2 surveys related work on model complexity for several modeling techniques, but not SysML. Section 2.3 then goes more into detail with respect to the complexity of SysML models. Section 3 presents the proposed approach and a SysML Model complexity assessment tool. Section 4 discusses a case study of a drone system model with two variants. Section 5 discusses different complexities of different models and limitations of the complexity metrics. A tool supporting the approach was developed and is presented in Section 6. Section 7 concludes the paper and outlines possible future work.

3. Methodology

This paper builds on the above-mentioned approaches and extends them to Model-Based Systems Engineering. It combines existing complexity metrics and estimation techniques, such as Use Case Points (UCPs), structural complexity measures, and interaction-based complexity measures, to develop a comprehensive framework for assessing system model complexity. This approach is designed to address the challenges of modeling system complexity, ensuring accurate estimation and efficient project management.

3.1. The Proposed Methodology

This methodology incorporates both objective and team-experience elements in assessing the complexity of the SysML model. Structural complexity analysis is objective, as it relies on measurable system topology metrics such as adjacency matrices and cyclomatic complexity. In contrast, the Technical Complexity Factor (TCF) and the Environmental Complexity Factor (ECF) involve team-experience assessments based on expert judgment and contextual considerations. Similarly, the Unadjusted Use Case Weight (UUCW) and the Unadjusted Actor Weight (UAW) depend on a team-experience classification of use cases and actors into predefined complexity categories. Recognizing this mix of objectivity and subjectivity is essential for interpreting complexity estimates and ensuring balanced decision-making in project management. The overall methodology is shown in Figure 1.
Figure 1. Proposed methodology.
  • Step 1: Requirements Capture and Use Case Categorization. The complexity assessment starts by thoroughly capturing the system requirements through Use Case modeling. This involves the following:
    • Requirements Capture: The functional behaviors of the system are elicited and documented in a structured manner.
    • Use Case Classification:
      Identification: Each use case is identified from the requirements.
      Categorization: Use cases are classified into three levels, Simple, Average, and Complex, based on criteria such as the number of transactions or the intrinsic difficulty of the task.
      Weighting: A predetermined weight is assigned to each category. For example, a simple use case may receive a lower weight than a complex one.
    • Actor Classification:
      Identification: Actors (i.e., the external entities interacting with the system) are identified.
      Categorization: Actors are similarly classified as Simple, Average, or Complex, based on the nature and extent of their interactions with the system.
      Weighting: Predefined weights are also assigned to these categories.
    • Computation of Functional Size:
      Unadjusted Use Case Weight (UUCW): Calculated as the sum of (number of use cases × corresponding weight) across all categories.
      Unadjusted Actor Weight (UAW): Similarly computed by summing the products of actor counts and their assigned weights.
Please note that, for the UUCW calculation, the following weight definitions were taken from the original work:
  • Simple use case (Weight = 5): Involves one to three transactions. These are straightforward functions with limited interaction steps. For example, “system starts up” or “system performs self-check.”
  • Average use case (Weight = 10): Involves four to seven transactions. These require moderate modeling effort, often involving branching behavior or multiple interaction steps. For example, “system processes operator command and generates a status report.”
  • Complex use case (Weight = 15): Involves eight or more transactions. These represent elaborate operational scenarios, often with multiple interactions, conditions, and alternative flows. For example, “the system executes a mission sequence involving multiple subsystems and external entities.”
Similarly, for the UAW calculation, the following weights were defined:
  • Simple actor (Weight = 1): An external component or subsystem with a single, well-defined mode of interaction. For example, a sensor providing a fixed data feed.
  • Average actor (Weight = 2): An external system or subsystem that interacts in multiple structured ways, such as exchanging commands and status information. For example, a ground station that exchanges telemetry and command data with a satellite.
  • Complex actor (Weight = 3): Typically a human operator or a system with rich and variable interactions that requires detailed modeling of roles, goals, and interactions. For example, a pilot interacting with an aircraft system or a mission controller coordinating across multiple subsystems.
Together, UUCW and UAW provide an initial unadjusted estimate of the functional size of the system.
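As an illustration of the Step 1 computation, the following minimal Java sketch derives UUCW and UAW from category counts using the weights defined above. The class and method names are illustrative only and are not part of the prototype presented later; the counts in main are taken from the M1 case study of Section 4.1.2.

```java
// Minimal sketch (illustrative class and method names, not part of the prototype):
// unadjusted functional size from use-case and actor category counts,
// using the weights defined above.
public final class FunctionalSize {

    // UUCW weights: Simple = 5, Average = 10, Complex = 15.
    public static int uucw(int simpleUseCases, int averageUseCases, int complexUseCases) {
        return simpleUseCases * 5 + averageUseCases * 10 + complexUseCases * 15;
    }

    // UAW weights: Simple = 1, Average = 2, Complex = 3.
    public static int uaw(int simpleActors, int averageActors, int complexActors) {
        return simpleActors * 1 + averageActors * 2 + complexActors * 3;
    }

    public static void main(String[] args) {
        // Counts from the M1 case study (Section 4.1.2): 3 simple and 1 average
        // use case; 1 simple, 2 average, and 2 complex actors.
        int uucwValue = uucw(3, 1, 0); // 25
        int uawValue = uaw(1, 2, 2);   // 11
        System.out.println("UUCW = " + uucwValue + ", UAW = " + uawValue);
    }
}
```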
  • Step 2: Technical and Environmental Adjustments. The raw functional size must be refined to account for the challenges of real-world development. Two sets of adjustment factors are introduced:
    • Technical Complexity Factor (TCF):
      Assessment: Thirteen technical attributes are evaluated, such as system distribution, performance objectives, ease of use, and maintenance (see Table 1). Each factor is rated on a scale (typically 0 to 5) based on its relevance and impact.
      Aggregation: The scores are multiplied by their corresponding weights and summed to yield a total technical factor (TF).
      Adjustment Formula: The TCF is then computed using the formula:
      $TCF = 0.6 + \frac{TF}{100}$
      This multiplier adjusts the functional size to account for technical challenges that may affect the development effort.
    • Environmental Complexity Factor (ECF):
      Assessment: Eight environmental factors, such as team experience, familiarity with the development process, stability of requirements, and other contextual conditions, are similarly evaluated (see Table 2).
      Aggregation: Each environmental factor is rated and weighted, and the aggregate forms the environmental factor (EF). The ECF is computed as follows:
      $ECF = 1.4 - (0.03 \times EF)$
      This factor reflects the external conditions under which the system is developed and may mitigate or exacerbate the effective complexity.
Table 1. Technical complexity factors [26].

Factor | Description | Weight
T1 | Distributed system | 2.0
T2 | Response time/performance objectives | 1.0
T3 | End-user efficiency | 1.0
T4 | Internal processing complexity | 1.0
T5 | Code reusability | 1.0
T6 | Easy to install | 0.5
T7 | Easy to use | 0.5
T8 | Portability to other platforms | 2.0
T9 | System maintenance | 1.0
T10 | Concurrent/parallel processing | 1.0
T11 | Security features | 1.0
T12 | Access for third parties | 1.0
T13 | End user training | 1.0
  • Step 3: Calculation of Use Case Points (UCPs). The UCP is calculated by combining UUCW, UAW, TCF, and ECF:
    $UCP = (UUCW + UAW) \times TCF \times ECF$
This provides an adjusted estimation of the system’s size and the effort required.
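A minimal Java sketch of Steps 2 and 3 follows, assuming that the weighted technical and environmental totals (TF and EF, from Tables 1 and 2) have already been collected; the class and method names are illustrative and not part of the prototype presented later.

```java
// Minimal sketch (illustrative names): adjustment factors and Use Case Points,
// assuming the weighted totals TF and EF have already been collected (Tables 1 and 2).
public final class UseCasePoints {

    // TCF = 0.6 + TF/100, with TF the weighted sum of the 13 technical factors.
    public static double tcf(double tf) {
        return 0.6 + tf / 100.0;
    }

    // ECF = 1.4 - 0.03 * EF, with EF the weighted sum of the 8 environmental factors.
    public static double ecf(double ef) {
        return 1.4 - 0.03 * ef;
    }

    // UCP = (UUCW + UAW) * TCF * ECF.
    public static double ucp(int uucw, int uaw, double tcf, double ecf) {
        return (uucw + uaw) * tcf * ecf;
    }

    public static void main(String[] args) {
        // Values from the M1 case study: TF = 39, EF = 15.5, UUCW = 25, UAW = 11.
        double tcfValue = tcf(39);   // 0.99
        double ecfValue = ecf(15.5); // 0.935
        double ucpValue = ucp(25, 11, tcfValue, ecfValue); // ~33.3234
        System.out.printf("TCF = %.3f, ECF = %.3f, UCP = %.4f%n", tcfValue, ecfValue, ucpValue);
    }
}
```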
  • Step 4: Effort Estimation. Finally, the estimated development effort is calculated from the UCP and an approximate number of hours per Use Case Point, aiming to provide better insight by balancing structural and environmental complexities. At this point, the time spent on analyzing and modeling the project requirements and the skills of the project team members are assessed, yielding more subjective inputs that impact the development estimates. To address the time spent on the analysis and modeling of the project requirements, information on the number of hours worked is collected, and a complexity index ranging from one to five is assigned based on the reported data (Table 3); see also [22]. A minimal computational sketch of this step is given after Table 4.
Table 2. Environmental complexity factors [26].

Factor | Description | Weight
E1 | Familiarity with development process used | 1.5
E2 | Application experience | 0.5
E3 | Object-oriented experience of team | 1.0
E4 | Lead analyst capability | 0.5
E5 | Motivation of the team | 1.0
E6 | Stability of requirements | 2.0
E7 | Part-time staff | −1.0
E8 | Difficult programming language | −1.0
Table 3. Complexity metric based on hours required [22].

Hours Required | 1–8 | 9–40 | 41–160 | 161–320 | above 320
Complexity Metric | 1 | 2 | 3 | 4 | 5
To address the skills of the project team members, skill level assessments are performed and a value is assigned (Table 4); see also [22] for software engineering projects.
Table 4. Skill of project members [22].

Skill Level | Value
Highly competent | 1
Competent | 2
Short tenure | 3
Experienced but new | 4
No experience | 5
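As announced in Step 4, the requirement-complexity index and the effort estimate can be obtained with another minimal Java sketch, using the lookups of Tables 3 and 4 and an assumed 20 man-hours per Use Case Point (the value used in the case study of Section 4); all identifiers are illustrative.

```java
// Minimal sketch (illustrative names): Step 4 requirement complexity and effort estimate,
// using the index lookups of Tables 3 and 4 and an assumed 20 man-hours per UCP.
public final class EffortEstimation {

    // Complexity index from hours spent on requirements (Table 3).
    public static int hoursIndex(double hours) {
        if (hours <= 8) return 1;
        if (hours <= 40) return 2;
        if (hours <= 160) return 3;
        if (hours <= 320) return 4;
        return 5;
    }

    // Requirement complexity = hours index x skill value (Table 4, values 1..5).
    public static int requirementComplexity(int hoursIdx, int skillValue) {
        return hoursIdx * skillValue;
    }

    // Estimated development effort in person-hours.
    public static double estimatedEffort(double ucp, double hoursPerUcp) {
        return ucp * hoursPerUcp;
    }

    public static void main(String[] args) {
        // M1 case study: at most 8 h on requirements, skill value 3, UCP ~33.3234.
        int requirement = requirementComplexity(hoursIndex(8), 3); // 1 * 3 = 3
        double effort = estimatedEffort(33.3234, 20);              // ~666.47 h
        System.out.println("Requirement complexity = " + requirement
                + ", estimated effort = " + effort + " h");
    }
}
```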
  • Step 5: Structural Complexity Analysis. Beyond effort estimation, the methodology rigorously examines the intrinsic structural complexity of the SysML model:
    • Adjacency Matrix-Based Complexity:
      Representation: The model components are represented as nodes, with their interactions captured in an adjacency matrix.
      Calculation: A complexity score is computed by summing the individual complexities of the components $\alpha_i$ and the complexities of their pairwise interactions $\beta_{ij}$, weighted by the entries in the adjacency matrix. The graph energy $E(A)$ of the system further modulates this sum to reflect the overall inter-connectivity.
      $C(n, m, A) = \sum_{i=1}^{n} \alpha_i + \left( \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_{ij} A_{ij} \right) \frac{E(A)}{n}$
      where $E(A)$ is the graph energy of the system, i.e., the sum of the absolute values of the eigenvalues of the adjacency matrix $A$.
    • Cyclomatic Complexity:
      Metric Origin: Borrowed from software engineering, cyclomatic complexity measures the number of independent paths through the control flow of the system.
      Formula: $v = e - n + 2p$, where $e$ is the number of relations (edges), $n$ is the number of components (nodes), and $p$ is the number of connected internal components (or exit points).
      Implications: A higher cyclomatic complexity indicates more potential paths and a more intricate structure, which can impact maintainability and testability.
    • Composite Complexity Metric:
      Integration: To provide a holistic view, a composite metric combines effort-based (UCP-derived) and structural measures.
      $C = \sum_{i=1}^{n} \alpha_i + \left( \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_{ij} + \sum_{i=1}^{n} \sum_{m=1}^{n} \gamma_{im} \right) \frac{v}{n}$
      Interpretation: This composite metric reflects both the internal structural intricacies and the external interactions, offering a comprehensive measure of model complexity (a minimal computational sketch is given after this list).
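The following minimal Java sketch computes the cyclomatic number v = e - n + 2p and the composite metric above for hypothetical values of the component complexities α, interface complexities β, and interaction complexities γ; it is an illustration only, not the prototype of Section 6.

```java
// Minimal sketch (hypothetical values): cyclomatic number and composite complexity metric.
// alpha: component complexities; beta: pairwise interface complexities;
// gamma: pairwise interaction (behavioral) complexities.
public final class StructuralComplexity {

    // v = e - n + 2p.
    public static int cyclomatic(int relations, int components, int connectedComponents) {
        return relations - components + 2 * connectedComponents;
    }

    // C = sum(alpha_i) + (sum(beta_ij) + sum(gamma_im)) * v / n.
    public static double composite(double[] alpha, double[][] beta, double[][] gamma, int v) {
        int n = alpha.length;
        double sumAlpha = 0.0;
        double sumInteractions = 0.0;
        for (int i = 0; i < n; i++) {
            sumAlpha += alpha[i];
            for (int j = 0; j < n; j++) {
                sumInteractions += beta[i][j] + gamma[i][j];
            }
        }
        return sumAlpha + sumInteractions * v / n;
    }

    public static void main(String[] args) {
        // Hypothetical three-component example.
        double[] alpha = {1.0, 1.5, 2.0};
        double[][] beta = {{0, 1, 0}, {1, 0, 1}, {0, 1, 0}};
        double[][] gamma = {{0, 0.5, 0}, {0.5, 0, 0.5}, {0, 0.5, 0}};
        int v = cyclomatic(4, 3, 1); // 4 - 3 + 2 = 3
        System.out.println("v = " + v + ", C = " + composite(alpha, beta, gamma, v));
    }
}
```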
By systematically capturing functional requirements, adjusting for technical and environmental conditions, and rigorously analyzing structural attributes, this methodology provides a detailed and nuanced evaluation of the complexity of the SysML model. Integration of team-experience judgment with objective measurements ensures that the model’s size, inter-connectivity, and inherent structural challenges are all accounted for, thereby supporting better system design, improved maintainability, and more accurate effort estimation.
As shown in Figure 1, the first step is to capture the requirements and categorize the Use Cases (and Actors) into simple, average, or complex. Then, along with Unadjusted Use Case Weight and Unadjusted Actor Weight, calculate the Technical and Environmental Factors to compute the Use Case Points (UCPs). Once the UCP is found, the estimated effort to build the system model can be determined. Furthermore, the complexities of Block and State Machine Diagrams are to be computed.
This section outlined a structured methodology for assessing the complexity of SysML models by integrating Use Case Points, structural complexity metrics, and interaction-based complexity metrics. Although this approach provides a comprehensive framework for evaluating system complexity and effort estimation, manual execution of these steps can be time-consuming and prone to inconsistencies. To improve efficiency and accuracy, the following section explores the automation of this methodology, leveraging computational techniques to streamline complexity assessment and improve project management outcomes.

3.2. Prototyping a SysML Model Complexity Evaluation Tool

The presented methodology offers a robust framework for complexity assessment. For small systems with few models, this approach can easily be applied and can give valuable insight into the complexity of the models. However, with the growing complexity of the systems themselves, manual execution of such calculations can be time-consuming and is prone to human error. Therefore, in order to ensure efficiency and consistency, an automated complexity computation tool was developed that supports the presented methodology. The tool was designed to parse SysML models from the open-source MBSE tool TTool and assess the complexity based on the metrics defined in the previous section. The developed tool extracts all the data from the source file of the MBSE tool. In the case of TTool, the models are saved in the XML file format. A Java application was created that takes an XML file of a model as input and performs all the necessary evaluations as explained before. As the XML file contains all the information of the models, the user is only asked to give the necessary inputs whenever required, according to the evaluation metrics.
In the first step, robust extraction and parsing of SysML model data from XML files are performed. The XML file contains distinct sections corresponding to the various SysML diagrams (Requirements, Use Case, Block, and State Machine diagrams), and dedicated Java classes were developed to manage the extraction process, each tailored to a specific diagram type. The following outputs were defined for the three mentioned SysML diagrams (a sketch of the corresponding data holders is given after the list):
  • For State Machine Diagrams, the output consists of the diagram name, the number of states and connectors, and any time delay statistics, which are later integrated into the overall complexity metrics.
  • For the Block Diagrams, the collected information includes the block name and the quantitative measures (e.g., the number of signals) that contribute to the calculation of structural complexity.
  • For the Use Case Diagrams, the results comprise the computation of the Unadjusted Use Case Weight (UUCW) and the Unadjusted Actor Weight (UAW), which are critical inputs for estimating effort and overall model complexity.
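These per-diagram outputs lend themselves to small value objects passed from the extractors to the central complexity computation. The following minimal Java sketch illustrates such data holders; the field names are assumptions, while the type names BlockInfo and StateMachineInfo are taken from the prototype description in Section 6.

```java
// Minimal sketch (assumed field names): value objects holding per-diagram extraction results.
import java.util.List;
import java.util.Map;

public final class ExtractionResults {

    // One entry per state machine: states, connectors, and optional time-delay statistics.
    public record StateMachineInfo(String diagramName, int stateCount,
                                   int connectorCount, List<Double> timeDelays) { }

    // One entry per block: counts that feed the structural complexity calculation.
    public record BlockInfo(String blockName, int attributeCount,
                            int methodCount, int signalCount) { }

    // Use case diagram results: the unadjusted weights computed from the classification.
    public record UseCaseInfo(int uucw, int uaw) { }

    // Aggregated view handed to the central Complexity class.
    public record ModelData(Map<String, StateMachineInfo> stateMachines,
                            Map<String, BlockInfo> blocks,
                            UseCaseInfo useCases) { }
}
```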
Then, a central integration point was developed where all parsed data converge. This specific Java class:
  • Aggregates Data—it collates outputs from the State Machine Diagrams, Block Diagrams, and Use Case classes.
  • User Input Integration—in addition to the parsed structural data, the class prompts the user for supplementary inputs—such as time spent on requirements and team skill level—which affect requirement complexity.
  • Computational Processing—using predefined formulas (e.g., those incorporating cyclomatic complexity, adjacency matrix-based metrics, and use case points), the Complexity class computes a comprehensive complexity score for the model.
  • Result Presentation—finally, the computed metrics are displayed to the user via a graphical interface, allowing for further analysis or model comparison.
Some error-handling mechanisms were also put in place to cover the following:
  • Missing Data—handling cases where expected elements are absent, therefore aiming to maintain overall analysis integrity.
  • User Guidance—prompting users and proposing fallback options to ensure that users can resolve issues without disrupting the workflow.
The output of a complexity analysis provides insight into different components. Once all factors presented in the previous sections have been collated, an overall complexity is determined by summing the cyclomatic and temporal complexity terms.

4. Case Study

Modern systems, such as drones, require meticulous design and evaluation to balance functionality, reliability, and usability. SysML provides a structured framework for modeling these complex systems, enabling efficient communication, visualization, and decision-making throughout the development lifecycle. A drone in charge of taking pictures serves as a case study for this research.
The case study models a small autonomous drone whose mission is to fly to GPS coordinates supplied by a ground operator, take aerial photos, store them locally, and (optionally) transmit them back—all without human intervention during flight. The paper builds three incremental SysML models (M1 → M2 → M3), each adding features and therefore increasing modeling and development complexity.
The study applies both team-experience metrics (Use Case Points, TCF, ECF) and structural metrics (adjacency/cyclomatic-like measures applied to Block and State Machine diagrams) and shows how those produce consistent complexity and effort estimates as the system grows. The following sections present the description of various picture-taking drone models:

4.1. First, Basic Model (M1)

The first model (M1) represents a drone whose coordinates are obtained via a GNSS device. The autopilot directs the drone to a location, the camera driver takes the picture, and the picture processing manager sends a signal to the compact flash driver to save it.

4.1.1. Requirement Diagram

The requirement diagram for the M1 model is shown in Figure 2.
Figure 2. Requirement diagram for M1.
The complexity of the requirement diagram in Figure 2 depends on the time spent on the project and the skill level of the team that works on the model.
(i)
Number of Hours Spent on the Model: As stated in the previous sections, the number of hours spent on the model has to be specified by the user. Then according to Table 3, the response is given a complexity index.
The number of hours spent on the requirements of this model is at most 8 h. It corresponds to complexity index 1.
(ii)
Skill of the Project Members: The users are then asked to rate their skill level based on the options given in Table 4, and a specific complexity index is assigned accordingly.
The professionals working on the requirements are employees who have been with the company for a short period of time and have experience in 1–2 similar projects. This corresponds to a complexity index of 3.
The complexity is the product of both indices assigned earlier. It may seem lower compared to the complexity numbers of other sections of the model, but it is imperative to understand that it applies only to requirements and ranges between 1 and 25, with 1 representing less time-consuming tasks handled by highly skilled professionals and 25 representing highly time-consuming tasks managed by less skilled professionals.
In total, the complexity of the requirements of the M1 Model becomes 1 × 3 = 3.

4.1.2. Use Case Diagram

For each of the Use Case diagrams, the concept of Use Case Points [24] has been applied. Further, with the Use Case Points, the estimated effort for a project can be calculated, which helps determine the total number of hours expected on the project. This would help inform decision-making and enable efficient project management. The classification of Unadjusted Use Case Weight (UUCW) and Unadjusted Actor Weight (UAW) introduces subjectivity into the complexity assessment process. The complexity metrics are not inherently subjective; rather, they are a quantitative representation of a subjective classification. The Use Case Point (UCP) method turns a qualitative judgment into a quantifiable data point. Although the weights for each complexity category—Simple (5), Average (10), and Complex (15)—are fixed and objective, the initial decision to place a use case into one of these categories is a human-based evaluation. This judgment is influenced by the evaluator’s interpretation of factors like the number of transactions and the perceived intricacy of the underlying system logic, introducing an element of subjectivity at the input stage. Therefore, the metrics themselves are objective, but they are derived from a process rooted in subjective professional opinion.
The Use Case diagram of the model is shown in Figure 3.
Figure 3. Use case diagram of the M1 Model.
(i)
Unadjusted Use Case Weight (UUCW): As stated above in the Related Work section, the user is required to identify the use cases as simple, average, and/or complex, depending on how many transactions each use case has. A universally assigned value is given to each of these categories, as shown in Table 5:
Table 5. Use Case classification—M1 Model.
Since all use cases have at most two transactions, they can be classified as Simple. However, one may classify use cases on the basis of how complex they really are, regardless of the number of transactions. The computation for this Use Case diagram can be conducted as follows (Table 6):
UUCW = (Total No. of Simple Use Case × 5) + (Total No. Average Use Cases × 10) + (Total No. Complex Use Cases × 15)
UUCW = (3 × 5) + (1 × 10) + (0 × 15) = 15 + 10 = 25
Table 6. Use Case classification.
(ii)
Unadjusted Actor Weight (UAW): The user is required to identify the actors as simple, average, and/or complex actors. There is a universally assigned value given to each of these categories. Simple is assigned 1; Average is assigned 2; and Complex is assigned 3.
For the M1 Model, the actors have been classified as shown in Table 7. The computation for this Use Case diagram can be done as follows:
UAW = (Total No. of Simple actors × 1) + (Total No. Average actors × 2) + (Total No. Complex actors × 3)
UAW = (1 × 1) + (2 × 2) + (2 × 3) = 1 + 4 + 6 = 11
Table 7. Actor Classification—M1 Model.
(iii)
Technical Complexity Factor (TCF): The TCF is needed to estimate the size of the system by taking the technical factors (TF) into consideration. To calculate TF, a value is assigned to each of the technical factors based on how essential the technical aspect is to the system being developed (see Table 8). Then, the weights and assigned values are multiplied, and the total TF is determined.
Table 8. Technical complexity factor.
Now, the TCF is calculated using the TF, and the formula is as follows:
$TCF = 0.6 + (TF/100) = 0.6 + (39/100) = 0.99$
(iv)
Environmental Complexity Factor (ECF): The size of the system should also be estimated, taking into account the environmental factors as well. Based on the level of team experience, the values are assigned to the Environmental Factors whose weights have already been established (see Table 9).
Table 9. Environmental complexity factor.
Now, the ECF is calculated using the EF and the following formula:
$ECF = 1.4 - (0.03 \times EF) = 1.4 - (0.03 \times 15.5) = 0.935$
Now, Use Case Points (UCPs) can be calculated by substituting the values of UUCW, UAW, TCF, and ECF as follows:
$UCP = (UUCW + UAW) \times TCF \times ECF = (25 + 11) \times 0.99 \times 0.935 = 33.3234$
Estimated Effort: Assuming that each Use Case Point takes 20 man-hours, the total effort for the project can be estimated as follows: Estimated Effort = Use Case Points × man-hours per Use Case Point = 666.468 h (approximately 666 h and 28 min).
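Spelled out, the arithmetic behind this figure is:

$\text{Effort} = 33.3234 \times 20 = 666.468 \text{ h}$, with $0.468 \text{ h} = 28.08 \text{ min} \approx 28 \text{ min } 5 \text{ s}$.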

4.1.3. Block and State Machine Diagrams

In system modeling, Block and State Machine diagrams play complementary roles, specifically in TTool. Block diagrams capture the structural organization of system components, including their attributes, operations, and communication interfaces. On the other hand, State Machine diagrams model the dynamic behavior of system components by defining their states and transitions. Together, they provide a comprehensive view of both the static architecture and the dynamic operation of the system.
The complexity computation for the M1 model is objective, as it is based on measurable system components and defined structural relationships. The model represents a functional breakdown of the drone, where each component and its interactions are clearly specified. The application of complexity metrics to block and state machine diagrams follows a systematic and quantifiable approach, ensuring consistency in the assessment. Since the computation relies on predefined formulas and measurable system attributes rather than interpretation by team experience, the resulting complexity values provide an objective representation of the structural intricacies of the model.
When applying the complexity metrics, the complexity of the block and state machine diagrams was successfully computed and can be seen in Table 10. The formula used is $C = \sum_{i=1}^{n} \alpha_i + \left( \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_{ij} + \sum_{i=1}^{n} \sum_{m=1}^{n} \gamma_{im} \right) \frac{v}{n}$.
Table 10. Complexity of the M1 Model.

4.2. Extended Model (M2)

The previous section discussed the modeling and analysis of the elementary model M1 (see Figure 4). The current section applies the same approach to the Extended Model M2 and presents the specifics of M2 without reaching the level of detail used for M1 in the previous section. This Extended Model M2 adds an Obstacle Avoidance System to the M1 Model. This system integrates sensors to detect obstacles during flight and sends signals to the autopilot to initiate maneuvering when necessary. Figure 5 depicts the requirements for M2.
Figure 4. Block diagram of the M1 Model.
Figure 5. Extended model requirements.
(i)
Time Spent on Project: the TSP for M2 is higher than the TSP for M1, since the Extended Model was modeled from scratch. The time spent on the requirements of this model is between 8 and 40 h, which corresponds to complexity index 2.
(ii)
Skill of the Project Members: From Table 4, the user rates their skill level and assigns a specific complexity index accordingly.
The skill level of the professionals working on the requirements is ‘an employee who has been with the company for a short length of time and has experience in 1–2 similar projects’. It is the same as for the previous model, assuming that the same professionals are working on it. It corresponds to a complexity index of 3.
Therefore, the complexity of the requirements of the Extended Model = 2 × 3 = 6.

4.3. Advanced Model (M3)

Compared to the M2 model, this final model incorporates a battery management system that monitors the camera’s battery status. If the battery level falls below a threshold, a signal is sent to the picture processing manager, and the battery must be charged. This extra component adds some modeling complexity while providing new advanced features.

Requirement Diagram

Requirement complexity is a team-experience-based measure, as it depends on the user's judgment of the time spent and the skill level, leading to variations in the assessment. The requirements for the Advanced Model are as follows (Figure 6).
Figure 6. Advanced model requirements.
The approach applied to M1 and M2 is now applied to M3.
  • Time Spent on Project: between 8 and 40 h (complexity index 2).
  • Skill of the Project Members: complexity index 3.
The requirement complexity of M3 is therefore 2 × 3 = 6. The corresponding Use Case Point value for M3 is 66.6468, which yields the effort estimate of approximately 1332.94 h discussed in Section 5.

5. Discussion

The basic model (M1) exhibits the lowest measured complexity with a structural–behavioral score of 147.50 and a Use Case Point–derived effort estimate of approximately 666.47 person-hours. Structurally, M1 is characterized by a smaller number of blocks and a lower number of inter-block interfaces; behaviorally, it contains fewer state machines, fewer states per machine, and a lower sum of transitions, which produces a smaller summed cyclomatic value. These objective properties translate into a reduced number of independent execution paths to verify, fewer interaction points at which integration faults can occur, and, correspondingly, lower cognitive load for reviewers and integrators. In practical engineering terms, M1’s low interface multiplicity and limited state branching make it more tractable for unit-level verification, simpler to instrument for test coverage, and less prone to subtle cross-component integration defects—but it also represents a narrower functional envelope and therefore provides less coverage of real operational scenarios.
The intermediate model (M2) adds an Obstacle Avoidance subsystem and its associated sensing and signaling interfaces. Measured complexity for M2 rises to 196.50, and the UCP effort estimate increases to about 999.70 person-hours. Objectively, M2 increases both interface density and behavioral branching: new sensor-to-processor and sensor-to-autopilot connectors increase adjacency matrix non-zeros, while the reactive control logic adds states and transitions that substantially raise per-machine cyclomatic sums. Numerically, the structural score increases by 49.00 points from M1 to M2, a relative increase of ≈33.22 percent, and the estimated effort increases by 333.23 h (≈49.99 percent increase). This relatively large jump reflects the fact that reactive obstacle handling tends to multiply interaction paths (sensing → detection → decision → maneuver → resume) and introduces real-time and safety constraints that require additional validation scenarios. From a design perspective, M2 therefore represents an important capability gain at the cost of a significant rise in both verification surface and integration complexity; it is the point at which modeling effort shifts from primarily static decomposition to an emphasis on dynamic, safety-critical behavior.
The advanced model (M3) further augments M2 with a Battery Management System that coordinates charging/charging-complete states with the camera and other subsystems. M3’s complexity reaches 227.83 (see Figure 7), and its effort estimate is approximately 1332.94 person-hours. Structurally, the addition increases coupling and adds some new interfaces, and behaviorally, the model acquires lifecycle coordination states that span components (e.g., CameraDriver—BMS coordination). Compared to M2, the structural score increases by 31.33 points (≈15.94 percent relative increase) and the estimated effort increases by 333.236 h (≈33.33 percent relative increase). The smaller relative structural increment (M2 to M3) compared to the initial increment (M1 to M2) suggests that while BMS introduces cross-component coordination, it does not produce as many distinct new interaction paths as obstacle avoidance did; nevertheless, the absolute additional effort remains substantial because lifecycle coordination often demands end-to-end scenario validation, new fault modes, and integration testing with power management constraints.
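The reported relative increases follow directly from the measured scores:

$\frac{196.50 - 147.50}{147.50} \approx 33.22\%$, $\frac{999.70 - 666.47}{666.47} \approx 50.0\%$, $\frac{227.83 - 196.50}{196.50} \approx 15.94\%$, and $\frac{1332.94 - 999.70}{999.70} \approx 33.33\%$.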
Figure 7. Complexity of M3 computed using the prototype.
These empirical patterns inform concrete engineering practices. First, complexity measures should be used to identify high-impact subsystems (those that contribute disproportionately to summed cyclomatic value). For such hotspots, teams should apply modularization strategies and well-defined interface contracts to reduce coupling and to permit isolated verification. Second, per-state-machine cyclomatic contributions should drive test-case prioritization: machines with the largest contribution to the summed cyclomatic metric should receive early, focused test generation and path-coverage analysis because they represent the greatest combinatorial verification risk. Third, incremental integration and hardware-in-the-loop simulation are especially valuable when adding reactive capabilities (M2-type changes): early, automated integration tests that exercise sensing→decision→actuation chains will reveal cross-component timing or protocol faults before full system integration. Fourth, for lifecycle-coordination features (M3-type changes), regression suites that include resource-state scenarios (e.g., low power, charging interrupt, graceful resume) are required to validate recovery and interlock behavior.
Finally, the prototype extractor and analyzer tool developed for this work (the standalone Java tool that imports SysML XML files) makes these recommendations operational: by producing per-component adjacency measures, per-state-machine cyclomatic values, and composite complexity indices, it enables targeted refinement of the model and empirically driven calibration of prediction weights. In practice, teams can use the tool to run sensitivity analyses (e.g., removing or isolating a subsystem in the model and measuring the change in adjacency and cyclomatic sums) to quantify the marginal complexity cost of candidate features. Normalizing sub-scores and calibrating aggregation weights against historical projects will improve the predictive power of the tool, which suggests different remediation strategies for different kinds of models.
In general, complexity analysis provides both a team-experience-based and an objective assessment of the modeled systems. This would ultimately give system designers and system engineers a much better understanding of the system, enabling better usage of the MBSE approach. It helps in identifying potential areas of concern for system designers in terms of modeling and scalability based on the complexity number, and helps them make informed decisions regarding the system design, optimization, and verification as intended.
The results of the case study further illustrate the relevance of the dual-complexity viewpoint introduced by Efatmaneshnik and Ryan [12]. Increases in structural complexity—visible through growth in adjacency metrics, cyclomatic values, and block interactions—correspond closely to increases in experiential or perceived complexity reported by the development team. This relationship reflects the interplay between objective and subjective dimensions of complexity, wherein richer models provide deeper behavioral and architectural fidelity but also impose higher cognitive demands on stakeholders responsible for model comprehension and refinement. By combining structural complexity metrics with team-experience factors (TCF, ECF, UUCW, UAW), the methodology proposed in this paper provides a practical operationalization of this dual perspective. Structural indicators quantify the inherent complexity of the model, whereas experiential indicators approximate the subjective difficulty of model development. This integration allows system designers to not only assess how complex a model is but also anticipate how difficult it will be to work with that model throughout the engineering lifecycle.
The increasing complexity of modern systems presents significant challenges in their visualization, communication, and development when using SysML. Addressing these challenges requires a multifaceted approach—balancing model complexity, improving stakeholder engagement, optimizing SysML tool utilization, and ensuring model quality and reliability.

6. Development of a Tool Supporting This Approach

An automated complexity analysis prototype tool was implemented as a standalone Java application that imports SysML model XML files, specifically those produced by TTool, rather than as a tightly coupled plugin. The prototype supports two primary workflows, analysis of a single model and pairwise comparison of two models, and is driven by a lightweight graphical front-end that prompts the user to select XML input files and answer a small set of questions about requirements and use-case characteristics. The tool reads the canonical XML representation of the model and extracts the information needed to compute both structural and experience-based metrics, enabling automated, repeatable measurement runs without modifying the modeling environment.
The software’s XML parsing uses standard Java APIs (DocumentBuilder/Document) and the extraction logic is encapsulated in three purpose-built extractor classes. The BlockDiagram extractor iterates over <COMPONENT> elements to collect block names, attribute and method counts, and associated signal lists, storing results in a Map <String, BlockInfo> for downstream analysis. The SMD extractor scans <StateMachineDiagramPanel> elements to identify state-machine panels, count states (detected by name prefixes such as “state0” or “state”) and connectors, and build a StateMachineInfo map representing each state machine’s topology. The UseCase extractor processes <UseCaseDiagramPanel> entries to recover actors, use cases, and connector metadata. All extractor routines are implemented as public static methods so that they can be invoked programmatically from other applications or integrated into larger toolchains.
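To illustrate, a condensed sketch of the state machine extraction logic is given below. It uses the standard DOM API mentioned above; the <StateMachineDiagramPanel> element and the "state" name prefix come from the description in this section, whereas the inner element names (COMPONENT, CONNECTOR), the "name" attribute, and the SmCounts type are assumptions for illustration only, not the exact TTool schema or prototype code.

```java
// Condensed, illustrative sketch of the SMD extractor. Panel element name and "state"
// name prefix follow the description above; COMPONENT, CONNECTOR, and the "name"
// attribute are assumptions, not the exact TTool schema.
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public final class SmdExtractor {

    public record SmCounts(int states, int connectors) { }

    public static Map<String, SmCounts> extract(File xmlFile) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                                             .newDocumentBuilder()
                                             .parse(xmlFile);
        Map<String, SmCounts> result = new HashMap<>();
        NodeList panels = doc.getElementsByTagName("StateMachineDiagramPanel");
        for (int i = 0; i < panels.getLength(); i++) {
            Element panel = (Element) panels.item(i);
            String smName = panel.getAttribute("name");
            // States are detected by their name prefix ("state0", "state", ...).
            int states = 0;
            NodeList components = panel.getElementsByTagName("COMPONENT");
            for (int j = 0; j < components.getLength(); j++) {
                String compName = ((Element) components.item(j)).getAttribute("name");
                if (compName.startsWith("state")) {
                    states++;
                }
            }
            int connectors = panel.getElementsByTagName("CONNECTOR").getLength();
            result.put(smName, new SmCounts(states, connectors));
        }
        return result;
    }
}
```

Because such extractor entry points are public and static, they can be invoked directly from a batch pipeline or another tool, in line with the integration scenario discussed below.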
A central Complexity class coordinates the user workflow, presenting a simple Swing UI (a JFrame and a JFileChooser) to select modes and input files, invokes the extractor methods, and aggregates the extracted metrics. The compute pipeline calculates cyclomatic-derived measures for each state machine and sums these to produce a structural baseline; it also collects block-level statistics (attributes, operations, signals, connectors) and uses interactive inputs to produce requirement-driven estimates (requirementComplexity and estimatedComplexity). The results are displayed to the user in dialog windows, while intermediate outputs—block and state machine maps and console logs—are retained for traceability and programmatic reuse. When run in comparison mode, the same pipeline is executed for two XML files, and the tool presents a side-by-side comparison of complexity indicators and effort estimates.
Because the prototype imports XML rather than depending on an internal TTool API, it functions as a connector that can readily be applied to models produced by other SysML tools that support comparable exchange formats. This external adapter-based architecture reduces adoption friction, facilitates continuous integration or batch analysis workflows, and simplifies the development of additional adapters or wrappers to integrate the analyzer into diverse toolchains. Returned maps and public extractor methods make it easy to embed the parser and computation engine within automated pipelines or to expose its output to downstream analysis, visualization, or calibration routines.
In operation, the prototype thus bridges diagram-level model artifacts and composite complexity assessment: cyclomatic metrics computed per state machine are combined with block-adjacency and use-case/requirement indicators to produce an overall complexity and effort estimate. The implementation approach—standalone, XML-driven, and programmatically accessible—enables broad, multitool usage and provides a practical foundation for empirical calibration, weight tuning, and extension to additional structural patterns (for example, concurrency or hierarchical states) in future work.

7. Conclusions

The challenges outlined above demonstrate the delicate balance required when managing SysML model complexity. Overly simplistic models risk omitting critical details, while excessively complex models hinder visualization and stakeholder engagement. Strategies such as hierarchical decomposition, modularity, and abstraction are essential to balance these competing needs. In addition, providing training and resources to improve stakeholder familiarity with SysML tools and practices can significantly enhance comprehension and application.
To improve quality and reliability, quantitative metrics can be developed to measure complexity. Such measures could help system engineers identify and address cases where models become overly intricate or ambiguous, thereby ensuring greater consistency and accuracy.
The purpose of this paper was to conduct a thorough study on the complexity of a system and understand how having the knowledge of complexity can be used as an advantage in the field of systems engineering. The complexity of a system was to be quantified so that it could be understood by design engineers and used as a means to build system models. Complexity metrics were defined for four SysML diagrams and applied to SysML models developed in TTool. The study was illustrated by analyzing a drone case study.
Future research on SysML model complexity could explore integrating dynamic complexity analysis with model validation to enhance model reliability and usability. More specifically, this could entail the following:
  • Developing standardized methodologies and tools to manage complexity effectively;
  • The transition from SysML v1 to SysML v2 opens new opportunities for complexity analysis [31]. SysML v2 introduces a more robust and expressive syntax and semantics, enabling improved modeling precision and interoperability. Future research could explore how these enhancements impact complexity management, providing new methods to quantify and analyze the complexity of SysML v2 models. This could include developing tools to assess the impact of modular architecture and the improved APIs introduced in SysML v2 on system design and complexity [32].
  • The dynamic aspects of SysML models must be incorporated. This could involve studying how model complexity evolves over time, particularly in response to changes in system behavior or environmental conditions. The integration of complexity analysis techniques with model validation and verification processes would help to enhance the reliability and quality of SysML models. This could involve identifying correlations between model complexity metrics and the likelihood of errors or inconsistencies. Furthermore, SysML v2’s enhanced features for representing system behaviors and interactions could facilitate more comprehensive dynamic complexity analyses.
  • The role of human factors, such as cognitive load and decision-making biases, could also be explored in managing complexity within SysML models. SysML v2’s improved expressiveness and usability might mitigate some of the challenges associated with human interpretation and cognitive strain, offering fertile ground for research on user-centric complexity metrics and visualizations.
  • Furthermore, complexity analysis capabilities can be integrated directly into other existing SysML modeling tools to streamline the complexity management process. This would allow engineers to conduct real-time complexity analysis as they develop and refine their models. For SysML v2, embedding such capabilities into tools that leverage its modern APIs and integration potential could significantly improve efficiency, enabling seamless complexity monitoring and adaptive modeling strategies.
  • The evolution of SysML v2 represents a pivotal moment for the field, offering a platform not only to extend existing complexity management approaches but also to develop entirely new methods tailored to its advanced capabilities. By addressing these areas, researchers can help ensure that SysML remains an effective and scalable tool for managing system complexity in increasingly sophisticated engineering environments.

Author Contributions

Conceptualization, all authors; methodology, all authors; software, A.B. and L.B.G.; validation, all authors; writing—original draft preparation, all authors; writing—review and editing, all authors; supervision, P.d.S.-S. and R.A.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ECF   Environmental Complexity Factor
EF    Environmental Factor
GPS   Global Positioning System
GNSS  Global Navigation Satellite System
MBSE  Model-Based Systems Engineering
OMG   Object Management Group
SysML Systems Modeling Language
TCF   Technical Complexity Factor
TF    Technical Factor
UAW   Unadjusted Actor Weight
UCP   Use Case Point
UML   Unified Modeling Language
UUCW  Unadjusted Use Case Weight
XML   Extensible Markup Language

Note

1. https://ttool.telecom-paris.fr/ (accessed on 9 December 2025).

References

  1. OMG. OMG Systems Modeling Language; Object Management Group: Milford, MA, USA, 2017; Available online: https://www.omg.org/spec/SysML/1.5 (accessed on 9 December 2025).
  2. Wolny, S.; Mazak, A.; Carpella, C.; Geist, V.; Wimmer, M. Thirteen Years of SysML: A Systematic mapping Study. Softw. Syst. Model. 2020, 19, 111–169. [Google Scholar] [CrossRef]
  3. Santos, T.; Soares, M. A Qualitative Study on SysML Based on Perceived Views from Industry Professionals. In Computational Science and Its Applications; LNCS: Ghaziabad, India, 2021; pp. 299–310. [Google Scholar] [CrossRef]
  4. de Saqui-Sannes, P.; Vingerhoeds, R.A.; Garion, C.; Thirioux, X. A Taxonomy of MBSE Approaches by Languages, Tools and Methods. IEEE Access 2022, 10, 120936–120950. [Google Scholar] [CrossRef]
  5. Friedenthal, S.; Moore, A.; Steiner, R. A Practical Guide to SysML: The Systems Modeling Language; Morgan Kaufmann: Burlington, MA, USA, 2014. [Google Scholar]
  6. Umeda, H.; Takatsuki, S.; Kobayashi, T.; Ueda, Y.; Wada, A.; Komatsu, Y.; Ishihama, N.; Iwata, T. System modeling process with SysML assuming the use of models. In Proceedings of the 2022 IEEE International Symposium on Systems Engineering (ISSE), Vienna, Austria, 24–26 October 2022; pp. 1–8. [Google Scholar] [CrossRef]
  7. Pasek, Z.; Dufresne, D.; Thalmann, D. The role of Model-Based Systems Engineering (MBSE) in managing complex systems. Int. J. Syst. Syst. Eng. 2015, 6, 284–300. [Google Scholar]
  8. Blanchard, B.S.; Fabrycky, W.J. System Engineering and Analysis; Prentice Hall: Englewood Cliffs, NJ, USA, 2011. [Google Scholar]
  9. Eppinger, S.D.; Browning, T.R. Design Structure Methods and Applications; MIT Press: Cambridge, MA, USA, 2012; Available online: https://www.researchgate.net/publication/371165639_Proceedings_2017_DSM_Conference (accessed on 9 December 2025).
  10. Boehm, B.W. Software Engineering Economics; Prentice Hall: Englewood Cliffs, NJ, USA, 1981. [Google Scholar]
  11. Cohn, M. Agile Estimating and Planning; Prentice Hall: Englewood Cliffs, NJ, USA, 2005. [Google Scholar]
  12. Efatmaneshnik, M.; Ryan, M.J. A general framework for measuring system complexity. Complexity 2016, 21, 533–546. [Google Scholar] [CrossRef]
  13. Gullapalli, L.B.; Bhatnager, A.; de Saqui-Sannes, P.; Vingerhoeds, R. Quantitative Assessment of Complexity in SysML Models. In Proceedings of the Modelling and Simulation 2024, ESM 2024, San Sebastian, Spain, 23–25 October 2024; EUROSIS: Oostende, Belgium, 2024; pp. 275–280. [Google Scholar]
  14. Flood, R.L. Liberating Systems Theory. In Contemporary Systems Thinking; Plenum Press: New York, NY, USA, 1990. [Google Scholar]
  15. Van de Poel, I. Values in engineering design. In Handbook of the Philosophy of Science. Volume 9: Philosophy of Technology and Engineering Sciences; Elsevier: North-Holland, The Netherlands, 2009; pp. 973–1006. [Google Scholar]
  16. Kokol, P.; Podgorelec, V.; Habrias, H.; Rabia, N. The Complexity of Formal Specifications—Assessments by alpha-Metric. SIGPLAN Not. 1999, 34, 84–88. [Google Scholar] [CrossRef]
  17. Cheung, T.Y.; Ren, S.; Mak, W.M. A tool for assessing the quality of distributed software designs. In Proceedings of the Software Education Conference (SRIG-ET’94), Dunedin, New Zealand, 22–25 November 1994; pp. 42–49. [Google Scholar] [CrossRef]
  18. Huang, S.; Lai, R. On measuring the complexity of an estelle specification. J. Syst. Softw. 1998, 40, 165–181. [Google Scholar] [CrossRef]
  19. Masmali, O.; Badreddin, O. Comprehensive Model-Driven Complexity Metrics for Software Systems. In Proceedings of the 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C), Macau, China, 11–14 December 2020; pp. 674–675. [Google Scholar] [CrossRef]
  20. Cruz-Lemus, J.A.; Maes, A.; Genero, M.; Poels, G.; Piattini, M. The impact of structural complexity on the understandability of UML statechart diagrams. Inf. Sci. 2010, 180, 2209–2220. [Google Scholar] [CrossRef]
  21. Davis, A. Software Engineering and Analysis; Pearson: London, UK, 1993. [Google Scholar]
  22. Parsons-Hann, H.; Liu, K. Measuring Requirements Complexity to Increase the Probability of Project Success. In Proceedings of the Seventh International Conference on Enterprise Information Systems—Volume 3: ICEIS, Miami, FL, USA, 25–28 May 2005. [Google Scholar]
  23. Wrigley, R.; Dexter, A. Estimating Project Effort Based on Skill Levels. J. Proj. Manag. 1987, 8, 100–110. [Google Scholar]
  24. Kirmani, M.M.; Wahid, A. Impact of Modification Made in Re-UCP on Software Effort Estimation. J. Softw. Eng. Appl. 2015, 8, 276. [Google Scholar] [CrossRef]
  25. Kusumoto, S.; Matukawa, F.; Inoue, K.; Hanabusa, S.; Maegawa, Y. Estimating effort by use case points: Method, tool and case study. In Proceedings of the 24th International Conference on Software Engineering, Orlando, FL, USA, 19–25 May 2002; pp. 601–606. [Google Scholar] [CrossRef]
  26. Clemmons, R.K. Project Estimation With Use Case Points. CrossTalk J. Def. Softw. Eng. 2006, 19, 18–22. [Google Scholar]
  27. Sinha, K.; Suh, E.S. Pareto-optimization of complex system architecture for structural complexity and modularity. Res. Eng. Des. 2018, 29, 123–141. [Google Scholar] [CrossRef]
  28. Lopez, V.; Thomas, L. Metric for Structural Complexity Assessment of Space Systems Modeled Using the System Modeling Language. Aerospace 2022, 9, 612. [Google Scholar] [CrossRef]
  29. Lankford, J. Measuring system and software architecture complexity. In Proceedings of the IEEE Aerospace Conference Proceedings (Cat. No.03TH8652), Big Sky, MT, USA, 8–15 March 2003. [Google Scholar]
  30. Dalvi, A.S.; El-Mounayri, H. Integrated System Architecture Development Framework And Complexity Assessment. In Proceedings of the ASME 2021 International Mechanical Engineering Congress and Exposition, Virtual, Online, 1–5 November 2021. [Google Scholar]
  31. Friedenthal, S. SysML v2 Basics: Transition from SysML v1 to SysML v2; OMG MBSE Wiki: Torrance, CA, USA, 2024. [Google Scholar]
  32. OMG. SysML v2 Beta Specifications; Object Management Group (OMG): Milford, MA, USA, 2023. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
