A Business-Oriented Approach to Automated Threat Analysis for Large-Scale Infrastructure Systems
Abstract
1. Introduction
- Proposal: Introduction of a novel business-operation-based threat analysis methodology that can be applied in upstream system development phases, enabling systematic and reusable security design.
- Methodology: Development of a structured approach that leverages the 5W framework and an automated algorithm to reduce reliance on individual expertise and minimize redundant analysis.
- Effectiveness: Demonstration through a case study that the proposed method achieves approximately 84% workload reduction compared to conventional manual approaches, while preserving analytical completeness and accuracy.
- Significance: Provision of a practical foundation that facilitates efficient security design from the upstream stages of system development, thereby promoting the construction of secure and resilient infrastructure systems.
- Scope: Focus on system-level security design as defined in IEC 62443-3-3, emphasizing technical requirements for integrated system architectures and industrial applicability.
2. Challenges in Security Design for Social Infrastructure Systems
2.1. Threat Modeling Methodological Approaches
2.2. Asset Definition and Granularity Frameworks
- (a)
- (b)
- Function/component-centric approaches [18], emphasizing vulnerabilities of hardware and software components.
- (c)
2.3. Development Phase Integration Perspectives
2.4. Automation and Tool Integration Research
2.5. Industrial Control System-Oriented Research
2.6. Research Gap and Positioning
- Upstream automation: Providing automated threat analysis capabilities from minimal upstream information (roles, locations, high-level functions, business objectives).
- Integrated asset definition: Introducing business operations as comprehensive analytical units that encompass informational, functional, and physical assets.
- Systematic workload reduction: Demonstrating quantifiable workload reduction (up to 84%) while maintaining analytical completeness and accuracy.
3. Proposed Business-Based Threat Analysis Method
3.1. Policy for Solving the Issues
- Principle 1: Business Operations as Analytical Assets
- Principle 2: Systematic Organization of System Information for Threat Analysis
- Principle 3: Algorithmic Design for Automated Threat Analysis
3.2. Asset Definition Framework
3.2.1. Selection of Modeling Notation
3.2.2. Business Operation Definition
3.2.3. Sequence Diagram Modeling Approach
3.2.4. Handling Complex Process Structures
3.3. Template Structure Construction and Definition
3.3.1. Hybrid Construction Approach
3.3.2. Standardization of Analytical Elements (5W)
- Where (Asset Characteristics and Attack Surface): This element characterizes the “Attack Surface” and physical/logical attributes of the target. To ensure objective classification, the attributes in the “Where” template were standardized based on established security taxonomies:
- ○
- ○
- Intrusion Method: To align with vulnerability assessment standards, the classification of intrusion paths (Network, Adjacent, Local) strictly follows the CVSS (Common Vulnerability Scoring System) [31] Attack Vector definitions.
- ○
- Functional Complexity Type (Refinement): Furthermore, drawing on NIST SP 800-82 [32] regarding ICS component characteristics, we distinguish between “single-function” devices (e.g., sensors, actuators) and “multi-function” devices. Single-function devices typically execute only predetermined control operations and expose few external interfaces. Consequently, they are far less susceptible to confidentiality-related threats (e.g., large-scale data exfiltration) than multi-function devices (e.g., servers). Incorporating this distinction enables the framework to reduce analytical redundancy by filtering out unrealistic threats for specific device types.
- ○
- Who (Attacker): This element is mapped to CVSS [31] metrics, in particular “Attack Vector” (Network, Adjacent, Local, Physical) and “Privileges Required” (None, Low, High), which together characterize attacker access and attributes.
- Why (Motivation): This element categorizes the attacker’s intent, mapped to NIST SP 800-30 [33] definitions (Adversarial vs. Non-adversarial/Accidental).
- What (Attack Description): “What” defines the specific nature of the attack targeting the asset. To ensure comprehensive yet contextual descriptions, we integrated STRIDE (for coverage) with the concept of “Execution Flow” from MITRE CAPEC [34]. While STRIDE categorizes the “means” (e.g., Spoofing), our templates adopt the CAPEC narrative structure to describe the full attack context as a chronological sequence: (1) Prerequisites → (2) Execution → (3) Impact. For example, instead of a static label, a threat is described as: “A privileged insider acquires credentials (Prerequisite) → Impersonates a legitimate user (Execution) → Exfiltrates confidential data (Impact).” This bridging of abstract categories with concrete execution flows ensures the templates are robust for system design.
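As a minimal sketch of this template design (illustrative Python, not the paper's implementation; the class and field names are assumptions), a “What” template pairs a STRIDE category with a CAPEC-style chronological execution flow:

```python
from dataclasses import dataclass

@dataclass
class ThreatTemplate:
    """One "What" template: a STRIDE category plus a CAPEC-style execution flow."""
    stride: str          # STRIDE category, e.g. "Spoofing"
    prerequisites: list  # (1) conditions the attacker needs
    execution: list      # (2) steps actually performed
    impact: str          # (3) resulting effect on the asset

    def narrative(self) -> str:
        # Render the chronological Prerequisites → Execution → Impact sequence.
        return " → ".join(self.prerequisites + self.execution + [self.impact])

# The insider example from the text, expressed as a template instance.
t = ThreatTemplate(
    stride="Spoofing",
    prerequisites=["A privileged insider acquires credentials"],
    execution=["Impersonates a legitimate user"],
    impact="Exfiltrates confidential data",
)
```

Rendering `t.narrative()` reproduces the full attack context as one sequence rather than a static label.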
3.3.3. Functional Classification of Templates
- Analytical Templates (“Where”): Given that threat occurrence locations exhibit system-specific variations due to differences in device nomenclature and network topology, “Where” serves as an input template (Analytical Template) for internal processing to characterize the target system.
- Output Templates (“Who”, “Why”, “What”): These elements serve as output templates for materializing the analytical results. These templates encapsulate domain-specific characteristics, enabling direct reuse for reporting and result generation.
3.3.4. Maintenance and Expansion Policy
- Project Feedback: When a new threat scenario is identified during manual analysis in actual projects, it is reviewed by a committee of security experts. If the threat is deemed generic and applicable to other systems, it is formalized as a new template and added to the library.
- Standard Updates: The templates are periodically reviewed and updated in alignment with revisions to the underlying standards (e.g., updates to MITRE CAPEC or CVSS metrics). This mechanism minimizes the “proprietary bias” by constantly validating the library against both external standards and real-world field data.
3.4. Target System Information Collection Format and Threat Analysis Algorithm
3.4.1. Information Collection Framework
- (1)
- Where: Threat Occurrence Point Identification
- Provides user interface capabilities (e.g., mouse, keyboard, monitor interfaces);
- Supports external media connections (e.g., USB ports, removable storage);
- Maintains Internet connectivity capabilities.
- (2)
- Who: Threat Actor Classification
- (3)
- Why: Intent and Motivation Analysis
- (4)
- What: Threat Event Determination
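The yes/no answers collected in this format feed the four determinations above directly. The following sketch (hypothetical record layout in Python; device name and values are illustrative samples in the style of the answer tables) shows how the Q6/Q7 answers flag candidate attack sources:

```python
# Hypothetical per-device record of interview answers (Q5–Q11 of the
# collection format), annotated with the template field each answer maps to.
ANSWERS = {
    "Client PC": {
        "Q5": "Stationary Device",  # device type        → Where: Machine Type
        "Q6": "Yes",                # user interface     → Where: Attack Path (source)
        "Q7": "Yes",                # external media     → Where: Attack Path (source)
        "Q8": "Yes",                # Internet available → Where: Attack Path (Internet entry)
        "Q9": "Yes",                # operator present   → Who
        "Q10": "No",                # single-function?   → Where: Functional Complexity
        "Q11": "Internal",          # placement          → Where: Placement Type
    },
}

def is_candidate_source(device: str) -> bool:
    """A device can originate an attack if it has a user IF (Q6)
    or accepts external media connections (Q7)."""
    a = ANSWERS[device]
    return a["Q6"] == "Yes" or a["Q7"] == "Yes"
```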
3.4.2. Automated Threat Analysis Algorithm
Algorithmic Procedures
- (1)
- Threat Occurrence Extraction (Where)
Algorithm 1 Threat Occurrence Extraction
Input: Sequence Diagram D, Interview Answers Ans, Topology Topo
Output: List of Potential Attack Paths L_where
1: L_where ← ∅
2: for each operation op ∈ D do
3:   Devs ← ExtractDevices(op)
4:   Srcs ← {d ∈ Devs | Ans[d].Q6 = “Yes” ∨ Ans[d].Q7 = “Yes”}
5:   if ∃ d ∈ Devs: Ans[d].Q8 = “Yes” then
6:     Srcs ← Srcs ∪ {“Internet”}
7:   end if
8:   for each source s ∈ Srcs do
9:     for each target t ∈ Devs do
10:      Paths ← EnumeratePaths(s, t, Topo, op)
11:      for each p ∈ Paths do
12:        if IsDirect(p) then
13:          rp ← “Direct”
14:        else if s = “Internet” ∨ Ans[s].Q11 = “External” then
15:          rp ← “External”
16:        else
17:          rp ← “Internal”
18:        end if
19:        L_where.append({op, s, t, p, rp})
20:      end for
21:    end for
22:  end for
23: end for
24: return L_where
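The extraction step can be sketched in executable form as follows. This is a simplified illustration, not the paper's implementation: `diagram`, `answers`, and the `enumerate_paths` stub are assumed interfaces, and a path of length one stands in for the algorithm's IsDirect check.

```python
from itertools import product

def extract_threat_occurrences(diagram, answers, enumerate_paths):
    """Sketch of Algorithm 1: list {operation, source, target, path, path type}
    entries. `diagram` maps an operation to its devices, `answers` maps a device
    to its interview answers, and `enumerate_paths` enumerates topology paths."""
    l_where = []
    for op, devs in diagram.items():
        # Candidate sources: devices with a user IF (Q6) or external media (Q7).
        srcs = [d for d in devs
                if answers[d]["Q6"] == "Yes" or answers[d]["Q7"] == "Yes"]
        # Any Internet-connected device (Q8) adds the Internet as an entry point.
        if any(answers[d]["Q8"] == "Yes" for d in devs):
            srcs.append("Internet")
        for s, t in product(srcs, devs):
            for path in enumerate_paths(s, t, op):
                if len(path) == 1:  # source attacks itself: a direct path
                    path_type = "Direct"
                elif s == "Internet" or answers.get(s, {}).get("Q11") == "External":
                    path_type = "External"
                else:
                    path_type = "Internal"
                l_where.append({"op": op, "source": s, "target": t,
                                "path": path, "type": path_type})
    return l_where

# Toy input with two devices (values illustrative, in the style of the answer tables).
diagram = {"Remote meter reading": ["Client PC", "Data Historian"]}
answers = {
    "Client PC": {"Q6": "Yes", "Q7": "Yes", "Q8": "Yes", "Q11": "Internal"},
    "Data Historian": {"Q6": "Yes", "Q7": "Yes", "Q8": "No", "Q11": "Internal"},
}

def trivial_paths(s, t, op):  # stand-in for a real topology search
    return [[s]] if s == t else [[s, t]]

l_where = extract_threat_occurrences(diagram, answers, trivial_paths)
```

With both devices as candidate sources plus the Internet entry point, the toy input yields six attack-path entries: two Direct, two Internal, and two External.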
- (2)
- Attacker Extraction (Who)
- (3)
- Motive Extraction (Why)
Algorithm 2 Attacker & Motivation Extraction
Input: Path List L_where, Interview Answers Ans
Output: List of Threat Triples L_triples
// Note: each triple consists of {Where (Path), Who (Attacker), Why (Motivation)}
1: L_triples ← ∅
2: for each entry x ∈ L_where do
3:   // Who (Attacker) extraction
4:   Actors ← GetCandidateActors(x.source)
5:   for each actor ∈ Actors do
6:     entry′ ← x ∪ {actor}
7:     // Why (Motivation) extraction
8:     if actor ∈ ExternalCategory then
9:       L_triples.append(entry′ ∪ {Motive: “Intentional”})
10:    else
11:      src ← GetFirstHop(x.path)
12:      L_triples.append(entry′ ∪ {Motive: “Intentional”})
13:      // If an operator is present, unintentional errors are also possible
14:      if Ans[src].Q9 = “Yes” then
15:        L_triples.append(entry′ ∪ {Motive: “Unintentional”})
16:      end if
17:    end if
18:  end for
19: end for
20: return L_triples
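A corresponding executable sketch (simplified; `candidate_actors` and `external_actors` are assumed inputs, not the paper's implementation) expands each attack path into Where/Who/Why triples:

```python
def extract_attacker_and_motive(l_where, answers, candidate_actors, external_actors):
    """Sketch of Algorithm 2: expand each attack path into {Where, Who, Why}
    triples. `candidate_actors` maps a source to plausible attacker roles;
    `external_actors` is the set of outsider roles."""
    l_triples = []
    for x in l_where:
        for actor in candidate_actors(x["source"]):
            base = dict(x, actor=actor)
            if actor in external_actors:
                # Outsiders are modeled as intentional attackers only.
                l_triples.append(dict(base, motive="Intentional"))
            else:
                first_hop = x["path"][0]
                l_triples.append(dict(base, motive="Intentional"))
                # If an operator is present (Q9), unintentional errors are possible too.
                if answers.get(first_hop, {}).get("Q9") == "Yes":
                    l_triples.append(dict(base, motive="Unintentional"))
    return l_triples

# One internal path; the source device has an operator present (Q9 = "Yes").
l_where = [{"op": "Remote meter reading", "source": "Client PC",
            "target": "Data Historian",
            "path": ["Client PC", "Data Historian"], "type": "Internal"}]
answers = {"Client PC": {"Q9": "Yes"}}
triples = extract_attacker_and_motive(
    l_where, answers,
    candidate_actors=lambda src: ["Authorized Insider", "Outsider"],
    external_actors={"Outsider"},
)
```

The single path expands to three triples: insider/Intentional, insider/Unintentional (operator present), and outsider/Intentional.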
- (4)
- Threat Event Extraction (What)
Algorithm 3 Threat Event Extraction
Input: Threat Triples L_triples, Asset Values CIA, Knowledge Base KB, Interview Answers Ans
Output: Final Threat List T_final
1: T_final ← ∅, Seen ← ∅
2: for each t ∈ L_triples do
3:   dev ← DecisiveDevice(t.path)
4:   Context ← ExtractContext(t, Ans[dev])
5:   // Ambiguity resolution / pruning rule:
6:   // skip confidentiality threats for simple single-function devices unless sensitive
7:   if Ans[dev].Q10 = “Yes” ∧ CIA[t.op] = {“C”} then
8:     if ¬HoldsSensitiveData(t) then
9:       continue
10:    end if
11:  end if
12:  Matches ← QueryKB(KB, Context)
13:  for each ev ∈ Matches do
14:    sig ← (t.op, t.path, t.actor, t.motive, ev)
15:    if sig ∉ Seen then
16:      T_final.append(sig)
17:      Seen.add(sig)
18:    end if
19:  end for
20: end for
21: return T_final
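The final matching-and-pruning step can likewise be sketched as follows. This is an illustration under stated assumptions: `query_kb` stands in for the knowledge-base lookup, and taking the decisive device to be the path endpoint is an assumption of this sketch, not the paper's definition.

```python
def extract_threat_events(l_triples, cia, answers, query_kb):
    """Sketch of Algorithm 3: match each {Where, Who, Why} triple against the
    template knowledge base, prune unrealistic threats, and deduplicate."""
    t_final, seen = [], set()
    for t in l_triples:
        dev = t["path"][-1]  # decisive device ≈ path endpoint (assumption)
        # Pruning rule: drop confidentiality-only threats against a
        # single-function device (Q10 = "Yes") that holds no sensitive data.
        if (answers.get(dev, {}).get("Q10") == "Yes"
                and cia.get(t["op"]) == {"C"}
                and not t.get("sensitive", False)):
            continue
        for ev in query_kb(t):
            sig = (t["op"], tuple(t["path"]), t["actor"], t["motive"], ev)
            if sig not in seen:
                seen.add(sig)
                t_final.append(sig)
    return t_final

# A confidentiality-only threat against a single-function Smart Meter (SM)
# is pruned unless the triple is flagged as holding sensitive data.
triple = {"op": "Remote meter reading", "path": ["HT", "SM"],
          "actor": "Authorized Insider", "motive": "Intentional"}
cia = {"Remote meter reading": {"C"}}
answers = {"SM": {"Q10": "Yes"}}
kb = lambda t: ["Information Disclosure"]

pruned = extract_threat_events([triple], cia, answers, kb)
kept = extract_threat_events([dict(triple, sensitive=True)], cia, answers, kb)
```

The design choice here mirrors the Functional Complexity refinement of Section 3.3.2: filtering happens before the knowledge-base query, so unrealistic device/threat combinations never reach the output.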
Algorithm Implementation Notes
Implementation Availability and Reproducibility
Computational Complexity
4. Evaluation Framework
4.1. Evaluation Methodology and Targets
4.1.1. Qualitative Accuracy Evaluation Methodology
- Ground Truth Definition: The threat lists finalized in the actual projects, which passed multiple design reviews by senior security experts, were defined as the “Ground Truth.”
- Verification Procedure: We compared the output of the proposed automation method against this Ground Truth to verify Coverage (whether critical threats were successfully extracted) and Consistency (whether the generated threat descriptions were semantically consistent with expert analysis).
4.1.2. Quantitative Workload Reduction Evaluation Methodology
- (1)
- Quantitative Workload Reduction Evaluation
- Baseline Procedure
- Proposed Procedure
- Metrics
- (2)
- Qualitative Accuracy Evaluation
- Ground Truth Definition: The threat lists finalized in the actual projects were defined as the “Ground Truth.” These lists were developed and refined through multiple design reviews by a team of eight security researchers (with experience ranging from 5 to 20+ years: one lead with 20+ years, three seniors with 8–10 years, and four mid-levels with 5–7 years).
- Verification Procedure: We compared the output of the proposed automation method against this Ground Truth to verify Coverage (whether critical threats were successfully extracted) and Consistency (semantic alignment). The verification followed a structured protocol:
- 1.
- Manual Analysis: Researchers independently performed threat identification for each device in Use Cases X and Y using the STRIDE and 5W methods.
- 2.
- Comparative Analysis: The automated results were evaluated based on the consistency of the 5W elements. A threat was considered “semantically consistent” only if its 5W components and STRIDE categories matched the expert-generated Ground Truth.
- Documentation: To ensure the reproducibility of the assessment, detailed activity logs were maintained, and the comparison results for each device were summarized in a matrix, providing a transparent view of the overlap between automated and manual analysis.
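The consistency criterion used in the comparative analysis can be stated compactly. The sketch below is an illustrative formalization (field names and sample values are assumptions, not the paper's data): a generated threat matches only when all 5W elements and the STRIDE category agree with the Ground Truth entry.

```python
def semantically_consistent(auto_threat: dict, expert_threat: dict) -> bool:
    """A generated threat counts as a match only when every 5W element and the
    STRIDE category agree with the expert-produced Ground Truth entry."""
    keys = ("who", "what", "when", "where", "why", "stride")
    return all(auto_threat.get(k) == expert_threat.get(k) for k in keys)

# Illustrative comparison for one threat (field values are made up).
expert = {"who": "Privileged insider", "what": "Steal confidential information",
          "when": "Operation", "where": ("SM", "HT"), "why": "Intentional",
          "stride": "Information Disclosure"}
generated = dict(expert)                        # exact 5W match
mismatched = dict(expert, why="Unintentional")  # one element differs
```

Applying this predicate per device yields the overlap matrix described above.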
4.1.3. Evaluation Targets
- Case X: Electric Power System (System A). This case represents an “Open/Public” environment characterized by High Physical Exposure and Mobility. It constitutes a large-scale power control infrastructure involving multiple distributed controllers and servers executing production monitoring and remote metering operations.
- ○
- Characteristics: Operations involve Smart Meters (SM) located outdoors (accessible to third parties) and portable Handy Terminals (HT).
- ○
- Data References: The business-operation assets for evaluation are shown in Figure 5. Representative input data samples collected via the established format are provided in Table 8. In this evaluation, the security objectives for the target operational asset were defined as Confidentiality: Required, Integrity: Required, and Availability: Not Required. The detailed CIA value assignments for the specific information and functional assets constituting these operations are provided in Appendix B.
- ○
- Verification Methodology: To establish a rigorous “Ground Truth” for this complex environment, a team of eight security researchers (with 5–20+ years of experience, including one lead, three seniors, and four mid-level experts) performed a manual threat analysis. They applied the STRIDE and 5W methods to all operational assets identified in Figure 5. The proposed method’s accuracy was then quantitatively evaluated by comparing its output against this expert-generated baseline, focusing on the semantic consistency of the 5W elements for each identified threat.
- Case Y: Hydroelectric System (System B). This case represents a “Closed/Private” environment characterized by Stationary assets. It constitutes a medium-scale energy management system focused on data collection, aggregation, and command transmission functionalities within an Industrial Control System (ICS).
- ○
- Characteristics: Operations involve typical pump controls using single-function devices located within secure facilities (secure zones).
- ○
- Data References: The business-operation assets are shown in Figure 6. Representative input data samples are provided in Table 9. In this evaluation, the security objectives for the target operational asset were defined as Confidentiality: Not Required, Integrity: Required, and Availability: Required. The detailed CIA value assignments for the specific information and functional assets constituting these operations are provided in Appendix B.
- ○
- Verification Methodology: The verification for this case was conducted by a team of five experts: one lead researcher (20+ years of experience), one security researcher (5 years), and three security engineers from the industrial security division. Following the same protocol as Case X, the team performed a manual threat analysis using the STRIDE and 5W methods for all assets depicted in Figure 6. The consistency between these expert-identified threats and the automated results was evaluated based on the semantic alignment of the 5W elements.
4.2. Accuracy Evaluation Results
4.3. Workload Reduction and Development Period Effectiveness Evaluation
4.3.1. Baseline Performance Assessment
4.3.2. Proposed Methodology Performance
4.3.3. Comparative Accuracy Assessment
4.3.4. Asset Granularity Comparison
4.3.5. Risk Assessment Considerations
4.4. Discussion and Practical Validity
4.4.1. Practical Validation and Industrial Track Record
4.4.2. Limitations and Scope of Application
- Trade-off in Granularity: While the proposed method significantly improves efficiency in the upstream phase, the abstraction of assets into business operations implies a trade-off regarding granularity. Specific implementation details (e.g., buffer overflows in specific software stacks) or vulnerabilities independent of operational flows may be homogenized during the modeling process. Consequently, there is a potential risk of underestimating threats that rely on low-level technical specifics. Therefore, this method is best utilized as a screening tool for system-level security design. For critical subsystems or during the subsequent detailed design and implementation phases, we recommend complementing this approach with conventional, fine-grained threat analysis methods.
- Dependency on Knowledge Base Completeness: A fundamental limitation of this rule-based approach is that the analysis accuracy (specifically recall) is strictly bounded by the completeness of the Knowledge Base. Unlike AI-based anomaly detection, this method cannot identify novel threat patterns that do not exist in the predefined templates. Consequently, if the Knowledge Base lacks a specific attack scenario, the analysis will result in a False Negative. To mitigate this, the proposed method relies on the STRIDE model to ensure that, at a minimum, high-level threat categories are comprehensively covered, even if specific novel variants are missed.
5. Conclusions
5.1. Research Contributions
5.2. Quantitative Validation Results
5.3. Future Research Directions
6. Patents
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| 5W | Who, What, When, Where, Why |
| BPMN | Business Process Model and Notation |
| CAPEC | Common Attack Pattern Enumeration and Classification |
| CIA | Confidentiality, Integrity, and Availability |
| CVSS | Common Vulnerability Scoring System |
| DFD | Data Flow Diagram |
| DREAD | Damage, Reproducibility, Exploitability, Affected Users, Discoverability |
| ICS | Industrial Control Systems |
| IF | Interface |
| MITRE | MITRE Corporation |
| PASTA | Process for Attack Simulation and Threat Analysis |
| PLC | Programmable Logic Controller |
| STRIDE | Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege |
| STPA-Sec | System-Theoretic Process Analysis for Security |
| SysML | Systems Modeling Language |
| UML | Unified Modeling Language |
Appendix A. Mapping Definitions
| Map ID | Question ID (Table 3) | Template Field (Table 1 and Table 2) |
|---|---|---|
| M-01 | Q1 | When |
| M-02 | Q2 | What |
| M-03 | Q3 | Where: Attack Path Type |
| M-04 | Q4 | Where: Attack Path Type (identify candidate targets) |
| M-05 | Q5 | Where: Machine Type |
| M-06 | Q6 | Where: Attack Path Type (identify candidate sources within the target system) |
| M-07 | Q7 | Where: Attack Path Type (identify candidate sources within the target system) |
| M-08 | Q8 | Where: Attack Path Type (assess whether Internet-based entry points may exist) |
| M-09 | Q9 | Who |
| M-10 | Q10 | Where: Functional Complexity Type |
| M-11 | Q11 | Where: Placement Type |
Appendix B. Detailed Security Objectives for Constituent Assets
| Asset | CIA Value of Assets | |||
|---|---|---|---|---|
| C | I | A | ||
| Information | Measurement Target Notification | Not Required | Required | Not Required |
| Power Data Request | Not Required | Required | Not Required | |
| Power Data Response | Required | Required | Not Required | |
| Aggregate Power Data | Required | Required | Not Required | |
| Function | Forward Target List | Not Required | Required | Not Required |
| Measure Power Consumption | Not Required | Required | Not Required | |
| Aggregate Power Data | Required | Required | Not Required | |
| Summarize Power Data | Required | Required | Not Required | |
| Business | Remote meter reading | Required | Required | Not Required |
| Asset | CIA Value of Assets | |||
|---|---|---|---|---|
| C | I | A | ||
| Information | Water level | Not Required | Required | Required |
| Pump Start Command | Not Required | Required | Required | |
| Status data (water level, pump ON) | Not Required | Required | Not Required | |
| Function | Acquire Level Sensor Value | Not Required | Required | Required |
| Threshold Decision | Not Required | Required | Required | |
| Display Screen | Not Required | Required | Required | |
| Business | Remote pump control | Not Required | Required | Required |
References
- National Center of Incident Readiness and Strategy for Cybersecurity (NISC). Action Plan on Cybersecurity for Critical Infrastructure. September 2014. Available online: https://www.nisc.go.jp/pdf/policy/infra/cip_policy2024.pdf (accessed on 30 August 2025).
- IEC 62443-2-1; Security for Industrial Automation and Control Systems—Part 2-1: Establishing an Industrial Automation and Control System Security Program. International Electrotechnical Commission (IEC): Geneva, Switzerland, 2018.
- CIP-010-4; Cyber Security: Configuration Change Management and Vulnerability Assessments. North American Electric Reliability Corporation (NERC): Atlanta, Georgia, 2023.
- National Institute of Standards and Technology (NIST). NIST IR 7628 Rev. 1—Smart Grid Cybersecurity Strategy and Requirements. September 2014. Available online: https://nvlpubs.nist.gov/nistpubs/ir/2014/NIST.IR.7628r1.pdf (accessed on 15 November 2025).
- National Institute of Standards and Technology (NIST). SP 800-30 Rev. 1: Guide for Conducting Risk Assessments. September 2012. Available online: https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-30r1.pdf (accessed on 15 November 2025).
- National Institute of Standards and Technology (NIST). SP 800-160 Vol. 1: Systems Security Engineering—Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems; NIST: Gaithersburg, MD, USA, 2016.
- IEC 62443-1-1; Industrial Communication Networks—Network and System Security—Terminology, Concepts, and Models. International Electrotechnical Commission (IEC): Geneva, Switzerland, 2019.
- IEC 62443-3-3; Industrial Communication Networks—Network and System Security—System Security Requirements and Security Levels. International Electrotechnical Commission (IEC): Geneva, Switzerland, 2019.
- Orimo, M.; Tsuhara, S.; Yamamoto, N.; Sadaki, R. A Planning Method for Security Countermeasure Formulation in Information Systems. J. Inf. Process. Soc. Jpn. (IPSJ) 2000, 41, 171–187. (In Japanese) [Google Scholar]
- Schneier, B. Attack Trees. Dr. Dobb’s J. Softw. Tools 1999, 24, 21–29. [Google Scholar]
- Swiderski, F.; Snyder, W. Threat Modeling; Microsoft Press: Washington, DC, USA, 2004. [Google Scholar]
- UcedaVelez, T.; Morana, M.M. Risk Centric Threat Modeling: Process for Attack Simulation and Threat Analysis; Wiley: Hoboken, NJ, USA, 2015. [Google Scholar]
- Sindre, G.; Opdahl, A.L. Eliciting Security Requirements with Misuse Cases. In Proceedings of the 37th International Conference on Technology of Object-Oriented Languages and Systems, Sydney, NSW, Australia, 20–23 November 2000; pp. 120–131. [Google Scholar]
- Kumar, R. An Attack Tree Template Based on Feature Diagram Hierarchy. In Proceedings of the 2020 IEEE 6th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application (DependSys), Nadi, Fiji, 14–16 December 2020. [Google Scholar]
- Da Silva, M.; Puys, M.; Thevenon, P.; Mocanu, S.; Nkawa, N. Automated ICS Template for STRIDE Microsoft Threat Modeling Tool. In Proceedings of the 18th International Conference on Availability, Reliability and Security (ARES ’23), Benevento, Italy, 29 August–1 September 2023; Article No. 118, pp. 1–7. [Google Scholar] [CrossRef]
- ISO/IEC 27005:2018; Security Techniques—Information Security Risk Management. International Organization for Standardization: Geneva, Switzerland, 2018.
- Information-Technology Promotion Agency (IPA). Control System Security Risk Analysis Guide, 2nd ed.; IPA: Tokyo, Japan, 2023.
- Takao, O.; Hidehiko, T. Efficient Security Requirements Analysis Method. J. Inf. Process. Soc. Jpn. (IPSJ) 2009, 50, 2484–2499. (In Japanese) [Google Scholar]
- Zareen, S.; Akram, A.; Khan, S.A. Security Requirements Engineering Framework with BPMN 2.0.2 Extension Model for Development of Information Systems. Appl. Sci. 2020, 10, 4981. [Google Scholar] [CrossRef]
- Hacks, S.; Lagerström, R.; Ritter, D. Towards Automated Attack Simulations of BPMN-Based Processes. In Proceedings of the 2021 IEEE 25th International Enterprise Distributed Object Computing Conference (EDOC), Gold Coast, Australia, 25–29 October 2021. [Google Scholar]
- Rosado, D.G.; Sánchez, L.E.; Varela-Vaca, Á.J.; Santos-Olmo, A.; Gómez-López, M.T.; Gasca, R.M.; Fernández-Medina, E. Enabling Security Risk Assessment and Management for Business Process Models. J. Inf. Secur. Appl. 2024, 84, 103829. [Google Scholar] [CrossRef]
- Zareen, S.; Anwar, S.M. BPMN Extension Evaluation for Security Requirements Engineering Framework. Requir. Eng. 2024, 29, 261–278. [Google Scholar] [CrossRef]
- Granata, D.; Rak, M.; Salzillo, G.; Di Guida, G.; Petrillo, S. Automated Threat Modeling and Risk Analysis in e-Government Using BPMN. Connect. Sci. 2023, 35, 2284645. [Google Scholar] [CrossRef]
- Da Silva, M.; Mocanu, S.; Puys, M.; Thevenon, P. Safety-Security Convergence: Automation of IEC 62443-3-2. Comput. Secur. 2025; early access/in press. [Google Scholar]
- Young, W. Basic Introduction to STPA for Security (STPA-Sec). MIT STAMP Workshop. July 2020. Available online: http://psas.scripts.mit.edu/home/wp-content/uploads/2020/07/STPA-Sec-Tutorial.pdf (accessed on 15 November 2025).
- Saßnick, O.; Rosenstatter, T.; Schäfer, C.; Huber, S. STRIDE-based Methodologies for Threat Modeling of Industrial Control Systems: A Review. In Proceedings of the 2024 IEEE 7th International Conference on Industrial Cyber-Physical Systems (ICPS), St. Louis, MO, USA, 12–15 May 2024. [Google Scholar]
- Kim, K.H.; Kim, K.; Kim, H.K. STRIDE and DREAD Combined Threat Evaluation for the Distributed Control System in Oil Refinery. ETRI J. 2022, 44, 991–1003. [Google Scholar] [CrossRef]
- Otahara, C.; Iguchi, S.; Uchiyama, Y.; Kayashima, M. Proposal of the Security Design Method in Infrastructure Systems by Template Application. IPSJ J. 2017, 58, 1523–1534. [Google Scholar]
- Joint Task Force. Security and Privacy Controls for Information Systems and Organizations; NIST Special Publication (SP) 800-53 Rev. 5; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2020.
- Souppaya, M.; Scarfone, K. Guidelines for Managing the Security of Mobile Devices in the Enterprise; NIST SP 800-124 Rev. 1; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2013. [Google Scholar]
- FIRST.org. Common Vulnerability Scoring System v3.1: Specification Document. June 2019. Available online: https://www.first.org/cvss/v3.1/specification-document (accessed on 15 November 2025).
- Stouffer, K.; Pease, M.; Tang, C.; Zimmerman, T.; Pillitteri, V.; Lubell, S. Guide to Operational Technology (OT) Security; NIST SP 800-82 Rev. 3; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2023. [Google Scholar]
- Joint Task Force. Guide for Conducting Risk Assessments; NIST SP 800-30 Rev. 1; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2012.
- The MITRE Corporation. Common Attack Pattern Enumeration and Classification (CAPEC). Available online: https://capec.mitre.org/ (accessed on 15 November 2025).
| Item | Template | ||
|---|---|---|---|
| Where | Attack Path Type | Source | |
| Target | |||
| External Network | |||
| Machine Type | Device | Portable Device | |
| Stationary Device | |||
| NW device | Wide area network-compatible | ||
| Local area network-compatible | |||
| Intrusion Method Type *1 | Local | ||
| Adjacent | |||
| Network | |||
| Functional Complexity Type *2 | Single-function | ||
| Multi-function | |||
| Placement Type *2 | Internal | ||
| External | |||
| When | Development | ||
| Operation | |||
| Maintenance | |||
| Who | Authorized Insider | ||
| Unauthorized Insider | |||
| Outsider | |||
| Why | Intentional | ||
| Accidental | |||
| Source | Target | CIA | ||
|---|---|---|---|---|
| Step1: Unauthorized Access | Step2-A: Denial of Service | A | ||
| Step2-B: Unauthorized Access | - | |||
| Step3-A: Spoofing | C | |||
| Step3-B: Tampering | I | |||
| Step3-C: Privilege Escalation | - | |||
| Step4-A: Spoofing | C | |||
| Step4-B: Tampering | I | |||
| # | Items to Be Collected | Description | |
|---|---|---|---|
| 1 | Asset | Business operations constituting assets | |
| 2 | Confidentiality, Integrity, and Availability (CIA) impact of assets (operations) | Impact level when the confidentiality, integrity, or availability of an operation is compromised |
| 3 | Network configuration | Network information accessible by the devices composing the target system |
| 4 | Workflow | Information flow exchanged between devices constituting the operation | |
| 5 | Device constituting the operations | What is the device type? | Is it a network device, a fixed-installation device, or a portable/mobile device? |
| 6 | Does it have an IF? | Does the device constituting the operation have a user interface? | |
| 7 | Can external devices be connected? | Can external devices be connected to the device constituting the operation? | |
| 8 | Is an internet connection available? | Is internet connectivity available for the device constituting the operation? | |
| 9 | Is an operator present? | Is there an operator present during operation? | |
| 10 | Is the device single-function? | Does the device support large-scale recording/logging? | |
| 11 | Is the device located externally? | Is the device in a location accessible to third parties without physical access control? | |
| Target System Info | Answer Results | ||||
|---|---|---|---|---|---|
| Production Control Server | Client PC | Data Historian | Monitoring and Control Server | PLC | |
| Does it have an IF? (#6) | Yes | Yes | Yes | Yes | No |
| Can external devices be connected? (#7) | Yes | Yes | Yes | No | No |
| Is an internet connection available? (#8) | Yes | Yes | No | No | No |
| Attack Vectors | Attack Path Patterns | |
|---|---|---|
| Source | Target | |
| Client PC | Data Historian | Source ≠ target |
| Client PC | Client PC | Source = target |
| Internet | Monitoring and Control Server | Source = Internet |
| Target System Info | Answer Results | ||||
|---|---|---|---|---|---|
| Production Control Server | Client PC | Data Historian | Monitoring and Control Server | PLC | |
| Is an operator present? (#9) | Yes | Yes | Yes | Yes | No |
| Target System Info | Answer Results | ||||
|---|---|---|---|---|---|
| Production Control Server | Client PC | Data Historian | Monitoring and Control Server | PLC | |
| What is the device type? (#5) | Stationary Device | Stationary Device | Stationary Device | Stationary Device | Stationary Device |
| Is the device single-function? (#10) | No | No | No | No | Yes |
| Is the device located externally? (#11) | No | No | No | No | No |
| Items to Be Collected | Answer Results | |||
|---|---|---|---|---|
| SM | HT | MDM | MAM | |
| What is the device type? | Stationary Device | Portable Device | Stationary Device | Stationary Device |
| Does it have an IF? | No | Yes | Yes | Yes |
| Can external devices be connected? | Yes | Yes | Yes | Yes |
| Is an internet connection available? | No | No | No | Yes |
| Is an operator present? | No | Yes | Yes | Yes |
| Is the device single-function? | Yes | Yes | No | No |
| Is the device located externally? | Yes | No | No | No |
| Items to Be Collected | Answer Results | |||
|---|---|---|---|---|
| Level Sensor | PLC | Pump | Operator PC | |
| What is the device type? | Stationary Device | Stationary Device | Stationary Device | Stationary Device |
| Does it have an IF? | No | No | No | Yes |
| Can external devices be connected? | Yes | Yes | No | Yes |
| Is an internet connection available? | No | No | No | Yes |
| Is an operator present? | No | No | No | Yes |
| Is the device single-function? | Yes | Yes | Yes | No |
| Is the device located externally? | No | No | No | No |
| Threat Category | Means of Analysis | MDM | MAM | HT | SM | HE |
|---|---|---|---|---|---|---|
| Unauthorized Access | Pattern I | App | App | App | App | App |
| Unauthorized Access | Pattern II | App | App | App | App | App |
| Spoofing | Pattern I | App | App | App | App | App |
| Spoofing | Pattern II | App | App | App | App | App |
| Tampering | Pattern I | App | App | App | App | App |
| Tampering | Pattern II | App | App | App | App | App |
| Repudiation | Pattern I | App | App | N/A | N/A | App |
| Repudiation | Pattern II | App | App | N/A | N/A | App |
| Information disclosure | Pattern I | App | App | N/A | N/A | App |
| Information disclosure | Pattern II | App | App | N/A | N/A | App |
| Denial of service | Pattern I | N/A | N/A | N/A | N/A | N/A |
| Denial of service | Pattern II | N/A | N/A | N/A | N/A | N/A |
| Elevation of privilege | Pattern I | App | App | N/A | N/A | App |
| Elevation of privilege | Pattern II | App | App | N/A | N/A | App |

(App = applicable to the device; N/A = not applicable.)
| Threat Category | Means of Analysis | Water Level Sensor | PLC | Pump | Operator PC |
|---|---|---|---|---|---|
| Unauthorized Access | Pattern I | App | App | App | App |
| Unauthorized Access | Pattern II | App | App | App | App |
| Spoofing | Pattern I | N/A | App | N/A | App |
| Spoofing | Pattern II | N/A | App | N/A | App |
| Tampering | Pattern I | App | App | App | App |
| Tampering | Pattern II | App | App | App | App |
| Repudiation | Pattern I | N/A | App | N/A | App |
| Repudiation | Pattern II | N/A | App | N/A | App |
| Information disclosure | Pattern I | N/A | N/A | N/A | N/A |
| Information disclosure | Pattern II | N/A | N/A | N/A | N/A |
| Denial of service | Pattern I | App | App | App | App |
| Denial of service | Pattern II | App | App | App | App |
| Elevation of privilege | Pattern I | N/A | App | N/A | N/A |
| Elevation of privilege | Pattern II | N/A | App | N/A | N/A |
| Work Item | Sub-Item | Proposed Procedure (Whole) | Proposed Procedure (1 Business, Average Value) | Baseline Procedure (5W Method + STRIDE, Overall) |
|---|---|---|---|---|
| Definition of evaluation target | Information gathering & asset modeling | 0.84 man-months | 2 h | 3 man-months |
| Definition of evaluation target | CIA value definition of assets | 0.21 man-months | 0.5 h | |
| Threat analytics | Threat analytics | 0.0008 man-months (approx. 8 min, automated) | 0.00186 h (approx. 6.7 s, automated) | 7 man-months |
| Summary of analysis results | | 0.5 man-months | 3 h | |
| Total | | 1.56 man-months | 5.5 h | 10 man-months |
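The headline reduction figure can be recomputed directly from the man-month entries in the table above (the sum of the rounded per-item values is 1.55, matching the reported 1.56 total up to rounding):

```python
# Recompute the workload reduction from the comparison table's totals.
proposed = 0.84 + 0.21 + 0.0008 + 0.5   # man-months, proposed procedure
baseline = 10.0                          # man-months, baseline (5W method + STRIDE)

reduction = 1.0 - proposed / baseline
print(f"proposed total: {proposed:.4f} man-months")   # 1.5508
print(f"workload reduction: {reduction:.1%}")         # 84.5%
```

This reproduces the approximately 84% reduction claimed for the case study.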
| # | Asset & When | Where (Source) | Where (Target) | Who | Why | What |
|---|---|---|---|---|---|---|
| 1 | Remote Meter Reading Business Process | SM | HT | Privileged insider | Intentional | (Step 1) The privileged insider logs in to the source. (Step 2) Logs in to the target. (Step 3) Steals the target's confidential information (Information Disclosure). |
| 2 | | MDM | SM | Privileged insider | Unintentional | (Step 1) The privileged insider logs in to the source. (Step 2) Sends requests that cause the target to cease operations. |
| 3, 4 | | SM | HT | Privileged insider | Intentional | (Step 1) The privileged insider logs in to the source. (Step 2-1) Sends a request to collect target information (Information Disclosure). (Step 2-2) Sends a request to cause malfunction in the target system (Tampering). (Step 2-3) Sends a request to cause unauthorized behavior in the target system (Tampering). |
| 5, 6 | | SM | HT | Privileged insider | Intentional | (Step 1) The privileged insider accesses the source. (Step 2-1) Sends a request to collect information of the target system (Information Disclosure). (Step 2-2) Sends a request to cause malfunction in the target (Tampering). |
| 7, 8 | | MDM | SM | Non-privileged insider | Intentional | (Step 1) The non-privileged insider accesses the source. (Step 2-1) Sends a request to collect target data (Information Disclosure). (Step 2-2) Sends a request that causes malfunction or false data transmission (Tampering). |
| 9 | | SM | HT | External actor | Intentional | (Step 1) The external actor accesses the source. (Step 2) Launches a cyberattack on the target system via the source. (Step 3) Sends multiple unauthorized requests causing system malfunction or data falsification (Tampering). |
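Rows like those above lend themselves to a structured record for automated generation and comparison of scenarios. The sketch below is illustrative only: the field names mirror the table headers, but the `Scenario` class itself is not part of the paper's tooling.

```python
from dataclasses import dataclass

# Illustrative only: a structured form for one 5W attack-scenario row.
# Field names follow the table headers; the class is not from the paper.
@dataclass
class Scenario:
    number: str        # "#"
    asset: str         # Asset & When
    source: str        # Where (Source)
    target: str        # Where (Target)
    actor: str         # Who
    intent: str        # Why: "Intentional" or "Unintentional"
    steps: list[str]   # What, one entry per (Step n)

row1 = Scenario(
    number="1",
    asset="Remote Meter Reading Business Process",
    source="SM",
    target="HT",
    actor="Privileged insider",
    intent="Intentional",
    steps=[
        "The privileged insider logs in to the source.",
        "Logs in to the target.",
        "Steals the target's confidential information (Information Disclosure).",
    ],
)
print(f"{row1.actor} -> {row1.target}: {len(row1.steps)} steps")
```

Keeping each 5W element in its own field is what allows scenarios to be generated, deduplicated, and counted mechanically rather than written by hand.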
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Otahara, C.; Uchiyama, H.; Kayashima, M. A Business-Oriented Approach to Automated Threat Analysis for Large-Scale Infrastructure Systems. Computers 2026, 15, 66. https://doi.org/10.3390/computers15010066

