Static Analysis Techniques for Embedded, Cyber-Physical, and Electronic Software Systems: A Comprehensive Survey
Abstract
1. Introduction
- A systematic review of static analysis methods for electronic, embedded, and cyber-physical systems, conducted under PRISMA guidelines with transparently reported search strings, screening data, and quality assessment (Section 2).
- A new, analytically based taxonomy that includes evidence-based comparison tables and links key technique families (data-flow analysis, symbolic execution, abstract interpretation, model checking, and constraint solving) to their technical trade-offs (soundness, precision, and scalability) and representative tools.
- A cross-domain review of static analysis applications, including blockchain smart contracts, safety-critical automotive and aerospace systems, security vulnerability detection, and AI/ML pipeline verification, with a focus on embedded and cyber-physical settings (Section 4).
- A critical examination of enduring problems and new lines of inquiry, such as automated program repair, hybrid static–dynamic approaches, machine-learning-enhanced analysis, and verification issues in cloud-native and autonomous systems, with a clear connection to the technical trade-offs noted in the taxonomy (Section 5 and Section 6).
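To make the technique families named above concrete, the following is a minimal, illustrative sketch of symbolic execution on a toy two-branch program. The program, path names, and brute-force satisfiability check are all invented for illustration; a real engine such as KLEE dispatches path conditions to an SMT solver instead.

```python
# Toy symbolic execution: enumerate the paths of a tiny program and
# check each path condition for satisfiability by brute force
# (a stand-in for the SMT solver a real engine would use).

def path_conditions():
    """Paths of:  if x > 10: if x < 5: BUG   -- the BUG path is infeasible."""
    return {
        "then-then (BUG)": [lambda x: x > 10, lambda x: x < 5],
        "then-else":       [lambda x: x > 10, lambda x: not (x < 5)],
        "else":            [lambda x: not (x > 10)],
    }

def satisfiable(conds, domain=range(-100, 101)):
    # Brute-force satisfiability over a small integer domain (toy only).
    return any(all(c(x) for c in conds) for x in domain)

feasible = {name: satisfiable(conds) for name, conds in path_conditions().items()}
# The contradictory path (x > 10 and x < 5) is pruned as infeasible.
```

The pruned infeasible path is exactly the precision benefit that the taxonomy weighs against the technique's exponential path explosion.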
- RQ1: What is a comprehensive taxonomy of the most popular static analysis approaches, and how do their underlying formal methods (such as data-flow analysis, symbolic execution, and abstract interpretation) compare in terms of precision, computational complexity, and soundness guarantees?
- RQ2: How is static analysis applied, specialized, and validated across key domains, such as security vulnerability detection, automotive embedded systems, blockchain smart contract verification, and AI/ML pipeline testing?
- RQ3: What are the principal limitations of current static analysis methods, and how are emerging developments, such as machine learning integration, hybrid static–dynamic methods, and automated program repair, addressing them and opening new research opportunities?
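The formal methods compared in RQ1 can be grounded with a minimal example of the first family: a classic live-variable data-flow analysis computed as a backward fixpoint over a toy control-flow graph. The three-statement program and its CFG are invented for illustration.

```python
# Live-variable analysis: iterate the backward data-flow equations to a fixpoint.
#   live_in[n]  = use[n] ∪ (live_out[n] − def[n])
#   live_out[n] = ∪ live_in[s] for each successor s of n

# Toy CFG for:   1: a = 1;   2: b = a + 1;   3: print(b)
cfg_succ = {1: [2], 2: [3], 3: []}
use  = {1: set(),  2: {"a"}, 3: {"b"}}
defs = {1: {"a"}, 2: {"b"}, 3: set()}

live_in  = {n: set() for n in cfg_succ}
live_out = {n: set() for n in cfg_succ}

changed = True
while changed:                                  # iterate until nothing changes
    changed = False
    for n in sorted(cfg_succ, reverse=True):    # visit backward for fast convergence
        out = set().union(*(live_in[s] for s in cfg_succ[n]))
        inn = use[n] | (out - defs[n])
        if out != live_out[n] or inn != live_in[n]:
            live_out[n], live_in[n], changed = out, inn, True

# Result: 'a' is live entering node 2, 'b' is live entering node 3.
```

The monotone fixpoint iteration shown here is what gives data-flow analysis its polynomial complexity, in contrast to the exponential path enumeration of symbolic execution that RQ1 compares it against.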
2. Background and Methodology
2.1. Research Methodology
2.2. Related Work and Positioning
3. Taxonomy of Static Analysis Techniques
3.1. Introduction to the Taxonomy
3.2. Core Analysis Techniques
3.2.1. Data-Flow Analysis
3.2.2. Symbolic Execution
3.2.3. Abstract Interpretation
3.2.4. Model Checking
3.2.5. Constraint Solving
3.3. Tool-Based Categorization
3.4. Comparative Analysis
4. Applications of Static Analysis in Embedded and Electronic Systems
4.1. Static Application Security Testing (SAST)
4.2. Smart Contract Analysis
4.3. Automotive/Embedded Systems
4.4. AI/ML Software Testing
5. Challenges and Limitations
5.1. Scalability and Performance
5.2. False Positives and Alert Fatigue
5.3. Soundness vs. Precision Trade-Off
5.4. Benchmarking and Evaluation Challenges
5.5. Integration into Development Workflows
5.6. Synthesis: Challenges as Manifestations of Core Trade-Offs
- The Balance Between Scalability and Precision. Because of their exponential complexity, sophisticated semantic analyses (symbolic execution, model checking) rarely scale to large industrial codebases despite their high precision. Scaling these techniques without unacceptable accuracy loss remains an open challenge; incremental and compositional approaches show promise but require further development.
- The Conflict Between Soundness and False Positives. Sound analyses (e.g., abstract interpretation) guarantee that no defect of the targeted class is missed, but they are often too imprecise and burden developers with costly false positives. Unsound analyses (such as the pattern matching common in industrial SAST tools) reduce false positives but can miss significant defects. Closing this gap, whether through machine-learning-assisted triage, probabilistic ranking, or better abstract domains, remains a central research focus.
- The Gap Between Adoption and Usability. Even the most technically advanced tools are ineffective if they disrupt developer workflows, require extensive configuration, or present results without actionable context. Translating analytical advances into real-world impact requires research into collaborative defect management, intelligent defaults, and seamless CI/CD integration.
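The soundness/false-positive tension described above can be demonstrated with a toy interval-domain abstract interpretation: by joining the intervals from both branches of a conditional, the analysis loses the fact that the divisor is never zero and raises a spurious warning. The program and domain operations are a hypothetical sketch, not the output of any particular tool.

```python
# Interval abstract domain: a sound but imprecise analysis that joins
# branch results and thereby reports a spurious division-by-zero.

def join(a, b):
    """Least upper bound of two intervals (lo, hi)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def contains_zero(iv):
    return iv[0] <= 0 <= iv[1]

# Program:   d = 1 if c else -1;   y = 10 / d
# Concretely d is never 0, so the division is safe on every execution.
d_then, d_else = (1, 1), (-1, -1)   # intervals from the two branches
d_joined = join(d_then, d_else)     # (-1, 1): the join now includes 0

warning = contains_zero(d_joined)   # the sound analysis flags a possible /0
# 'warning' is True even though no execution divides by zero: a false
# positive that a relational domain or path sensitivity would avoid.
```

Neither branch interval contains zero on its own; the imprecision is introduced entirely by the join, which is exactly the abstraction step that makes the analysis sound and scalable.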
5.7. Complementary Approaches: Dynamic Analysis and Fuzzing
6. Trends and Future Directions
6.1. Machine-Learning-Enhanced Static Analysis
6.2. Hybrid Static–Dynamic Analysis Methods
6.3. Automated Program Repair
6.4. Language and Platform Evolution
6.5. Cloud-Native and Distributed Systems Analysis
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Anderson, R.; Barton, C.; Bölme, R.; Clayton, R.; Ganán, C.; Grasso, T.; Levi, M.; Moore, T.; Vasek, M. Measuring the changing cost of cybercrime. In Proceedings of the 2019 Workshop on the Economics of Information Security, Boston, MA, USA, 3–4 June 2019. [Google Scholar]
- US Government. The Equifax Data Breach; Majority Staff Report 115th Congress, US; House of Representatives Committee on Oversight and Government Reform: Washington, DC, USA, 2018.
- Youvan, D.C. Anatomy of a Financial Collapse: The Role of Technical Glitches in Modern Financial Systems. 2024. Available online: https://www.researchgate.net/profile/Douglas-Youvan/publication/382968606_Anatomy_of_a_Financial_Collapse_The_Role_of_Technical_Glitches_in_Modern_Financial_Systems/links/66b52c858f7e1236bc459eea/Anatomy-of-a-Financial-Collapse-The-Role-of-Technical-Glitches-in-Modern-Financial-Systems.pdf (accessed on 10 February 2026).
- Møller, A.; Schwartzbach, M.I. Static Program Analysis; Department of Computer Science, Aarhus University: Aarhus, Denmark, 2020; Available online: https://cs.au.dk/~amoeller/spa/ (accessed on 10 February 2026).
- Zhou, Y.; Sharma, A. Automated identification of security issues from commit messages and bug reports. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, Paderborn, Germany, 4–8 September 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 914–919. [Google Scholar]
- Wang, S.; Liu, T.; Tan, L. Automatically learning semantic features for defect prediction. In Proceedings of the 38th International Conference on Software Engineering, Austin, TX, USA, 14–22 May 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 297–308. [Google Scholar]
- Sadowski, C.; Aftandilian, E.; Eagle, A.; Miller-Cushon, L.; Jaspan, C. Lessons from building static analysis tools at google. Commun. ACM 2018, 61, 58–66. [Google Scholar] [CrossRef]
- Myrbakken, H.; Colomo-Palacios, R. DevSecOps: A multivocal literature review. In International Conference on Software Process Improvement and Capability Determination; Springer International Publishing: Cham, Switzerland, 2017; pp. 17–29. [Google Scholar] [CrossRef]
- Vassallo, C.; Panichella, S.; Palomba, F.; Proksch, S.; Zaidman, A.; Gall, H.C. Context is king: The developer perspective on the usage of static analysis tools. In 2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER); IEEE: New York, NY, USA, 2018; pp. 38–49. [Google Scholar]
- Hu, C. Machine Learning Slashes the Testing Needed to Work Out Battery Lifetimes. 2026, pp. 41–42. Available online: https://www.nature.com/articles/d41586-026-00168-w (accessed on 10 February 2026).
- Vassallo, C.; Panichella, S.; Palomba, F.; Proksch, S.; Gall, H.C.; Zaidman, A. How developers engage with static analysis tools in different contexts. Empir. Softw. Eng. 2020, 25, 1419–1457. [Google Scholar] [CrossRef]
- Li, Y.; Liu, W.; Liu, Q.; Zheng, X.; Sun, K.; Huang, C. Complying with ISO 26262 and ISO/SAE 21434: A safety and security co-analysis method for intelligent connected vehicle. Sensors 2024, 24, 1848. [Google Scholar] [CrossRef] [PubMed]
- Rierson, L. Developing Safety-Critical Software: A Practical Guide for Aviation Software and DO-178C Compliance; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
- Liu, M.; Wang, J.; Lin, T.; Ma, Q.; Fang, Z.; Wu, Y. An empirical study of the code generation of safety-critical software using llms. Appl. Sci. 2024, 14, 1046. [Google Scholar] [CrossRef]
- Beyer, D. Software Verification and Verifiable Witnesses: (Report on SV-COMP 2015). In International Conference on Tools and Algorithms for the Construction and Analysis of Systems; Springer: Berlin/Heidelberg, Germany, 2015; pp. 401–416. [Google Scholar]
- Lenarduzzi, V.; Pecorelli, F.; Saarimaki, N.; Lujan, S.; Palomba, F. A critical comparison on six static analysis tools: Detection, agreement, and precision. J. Syst. Softw. 2023, 198, 111575. [Google Scholar] [CrossRef]
- Feist, J.; Grieco, G.; Groce, A. Slither: A static analysis framework for smart contracts. In 2019 IEEE/ACM 2nd International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB); IEEE: New York, NY, USA, 2019; pp. 8–15. [Google Scholar]
- Zhang, J.; Wang, X.; Zhang, H.; Sun, H.; Wang, K.; Liu, X. A novel neural source code representation based on abstract syntax tree. In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE); IEEE: New York, NY, USA, 2019; pp. 783–794. [Google Scholar]
- Rimsa, A.; Nelson Amaral, J.; Pereira, F.M. Practical dynamic reconstruction of control flow graphs. Softw. Pract. Exp. 2021, 51, 353–384. [Google Scholar] [CrossRef]
- Nandi, A.; Mandal, A.; Atreja, S.; Dasgupta, G.B.; Bhattacharya, S. Anomaly detection using program control flow graph mining from execution logs. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2016, San Francisco, CA, USA, 13–17 August 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 215–224. [Google Scholar]
- Johnson, A.; Waye, L.; Moore, S.; Chong, S. Exploring and enforcing security guarantees via program dependence graphs. ACM SIGPLAN Not. 2015, 50, 291–302. [Google Scholar] [CrossRef]
- Liu, Z.; Tang, Z.; Zhang, J.; Xia, X.; Yang, X. Pre-training by predicting program dependencies for vulnerability analysis tasks. In IEEE/ACM 46th International Conference on Software Engineering; IEEE: New York, NY, USA, 2024; pp. 1–13. [Google Scholar]
- Khedker, U.; Sanyal, A.; Sathe, B. Data Flow Analysis: Theory and Practice; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
- Baldoni, R.; Coppa, E.; D’elia, D.C.; Demetrescu, C.; Finocchi, I. A survey of symbolic execution techniques. ACM Comput. Surv. (CSUR) 2018, 51, 1–39. [Google Scholar] [CrossRef]
- Rival, X.; Yi, K. Introduction to Static Analysis: An Abstract Interpretation Perspective; MIT Press: Cambridge, MA, USA, 2020. [Google Scholar]
- Kant, G.; Laarman, A.; Meijer, J.; Van de Pol, J.; Blom, S.; Van Dijk, T. LTSmin: High-performance language-independent model checking. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems; Springer: Berlin/Heidelberg, Germany, 2015; pp. 692–707. [Google Scholar]
- Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Int. J. Surg. 2010, 8, 336–341. [Google Scholar] [CrossRef]
- Beyer, D. Software verification: 10th comparative evaluation (SV-COMP 2021). In International Conference on Tools and Algorithms for the Construction and Analysis of Systems; Springer International Publishing: Cham, Switzerland, 2021; pp. 401–422. [Google Scholar] [CrossRef]
- Mongiovì, M.; Giannone, G.; Fornaia, A.; Pappalardo, G.; Tramontana, E. Combining static and dynamic data flow analysis: A hybrid approach for detecting data leaks in Java applications. In Proceedings of the 30th Annual ACM Symposium on Applied Computing 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 1573–1579. [Google Scholar]
- Sherman, E.; Dwyer, M.B. Structurally defined conditional data-flow static analysis. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems; Springer International Publishing: Cham, Switzerland, 2018; pp. 249–265. [Google Scholar] [CrossRef]
- Späth, J.; Ali, K.; Bodden, E. Context-, flow-, and field-sensitive data-flow analysis using synchronized pushdown systems. Proc. ACM Program. Lang. 2019, 3, 1–29. [Google Scholar] [CrossRef]
- Trabish, D.; Mattavelli, A.; Rinetzky, N.; Cadar, C. Chopped symbolic execution. In Proceedings of the 40th International Conference on Software Engineering; Association for Computing Machinery: New York, NY, USA, 2018; pp. 350–360. [Google Scholar]
- Chen, Z.; Chen, Z.; Shuai, Z.; Zhang, G.; Pan, W.; Zhang, Y.; Wang, J. Synthesize solving strategy for symbolic execution. In Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis; Association for Computing Machinery: New York, NY, USA, 2021; pp. 348–360. [Google Scholar]
- Cadar, C.; Nowack, M. KLEE symbolic execution engine in 2019. Int. J. Softw. Tools Technol. Transf. 2021, 23, 867–870. [Google Scholar] [CrossRef]
- Han, H.; Kyea, J.; Jin, Y.; Kang, J.; Pak, B.; Yun, I. Queryx: Symbolic query on decompiled code for finding bugs in COTS binaries. In 2023 IEEE Symposium on Security and Privacy (SP); IEEE: New York, NY, USA, 2023; pp. 3279–3295. [Google Scholar]
- Mirman, M.; Gehr, T.; Vechev, M. Differentiable abstract interpretation for provably robust neural networks. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018. [Google Scholar]
- Darais, D.; Might, M.; Van Horn, D. Galois transformers and modular abstract interpreters: Reusable metatheory for program analysis. ACM SIGPLAN Not. 2015, 50, 552–571. [Google Scholar] [CrossRef]
- Bertrane, J.; Cousot, P.; Cousot, R.; Feret, J.; Mauborgne, L.; Miné, A.; Rival, X. Static analysis and verification of aerospace software by abstract interpretation. Found. Trends® Program. Lang. 2015, 2, 71–190. [Google Scholar] [CrossRef]
- Babati, B.; Horváth, G.; Májer, V.; Pataki, N. Static analysis toolset with Clang. In Proceedings of the 10th International Conference on Applied Informatics 2017, Eger, Hungary, 30 January–1 February 2017. [Google Scholar]
- Biallas, S. Verification of Programmable Logic Controller Code Using Model Checking and Static Analysis. Ph.D. Thesis, RWTH Aachen University, Aachen, Germany, 2016. [Google Scholar]
- Nejati, F.; Abd Ghani, A.A.; Yap, N.K.; Jafaar, A.B. Handling state space explosion in component-based software verification: A review. IEEE Access 2021, 9, 77526–77544. [Google Scholar] [CrossRef]
- Kunze, S. Automated Test Case Generation for Function Block Diagrams Using Java Path Finder and Symbolic Execution. Master’s Thesis, Mälardalen University, Västerås, Sweden, 2015. [Google Scholar]
- Konnov, I.; Kukovec, J.; Tran, T.H. TLA+ model checking made symbolic. Proc. ACM Program. Lang. 2019, 3, 123. [Google Scholar] [CrossRef]
- Van Antwerpen, H.; Néron, P.; Tolmach, A.; Visser, E.; Wachsmuth, G. A constraint language for static semantic analysis based on scope graphs. In Proceedings of the 2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 49–60. [Google Scholar]
- Liang, T.; Reynolds, A.; Tsiskaridze, N.; Tinelli, C.; Barrett, C.; Deters, M. An efficient SMT solver for string constraints. Form. Methods Syst. Des. 2016, 48, 206–234. [Google Scholar] [CrossRef]
- Harmim, D.; Marcin, V.; Pavela, O. Scalable Static Analysis Using Facebook Infer. I, VI-B. 2019. Available online: https://d1wqtxts1xzle7.cloudfront.net/101074818/59-libre.pdf?1681478089=&response-content-disposition=inline%3B+filename%3DScalable_Static_Analysis_Using_Facebook.pdf&Expires=1771624343&Signature=V3uuHpCj-uJ-Qu22t1AX1McDuj6F4ue-Oz9A5oNw3GMz0djPDv1uaDnUNkGGXYgGXAekxlORexbEcsjCtRGEUrvRnlSBhjRf~QYERVJUWdc528rUegdZkQNT2nE4PrTfcIIOOEojR5ZfGqr~yGvdhRmt~AG8FUoMQsRo9hIgpwZFyc2OntK4IhKqA9TopSvqW5twWPl-OBlXS28SPGciHt-4D0EYf~Q-ddza9mBHfzjUvozkJ85GLhZZdq5qYx6q~yiYNu7LxcYXNxztgG7FBWpqNvRyOrZQTo4UwU9atgZP4yG2cEDp5VzduOkXwFDDQbY1e4RG3UdrHAf5wOIPdg__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA (accessed on 10 February 2026).
- Priya, S.; Zhou, X.; Su, Y.; Vizel, Y.; Bao, Y.; Gurfinkel, A. Verifying verified code. Innov. Syst. Softw. Eng. 2022, 18, 335–346. [Google Scholar] [CrossRef]
- Baier, D.; Beyer, D.; Chien, P.C.; Jakobs, M.C.; Jankola, M.; Kettl, M.; Wendler, P. Software verification with CPAchecker 3.0: Tutorial and user guide. In Formal Methods, Proceedings of the 26th International Symposium (FM 2024), Milan, Italy, 9–13 September 2024; Springer Nature Switzerland: Cham, Switzerland, 2024; pp. 543–570. [Google Scholar] [CrossRef]
- Beyer, D.; Lingsch-Rosenfeld, M. CPAchecker 4.0 as Witness Validator: (Competition Contribution). In International Conference on Tools and Algorithms for the Construction and Analysis of Systems; Springer Nature Switzerland: Cham, Switzerland, 2025; pp. 192–198. [Google Scholar] [CrossRef]
- Marcilio, D.; Bonifácio, R.; Monteiro, E.; Canedo, E.; Luz, W.; Pinto, G. Are static analysis violations really fixed? A closer look at realistic usage of sonarqube. In 2019 IEEE/ACM 27th International Conference on Program Comprehension (ICPC); IEEE: New York, NY, USA, 2019; pp. 209–219. [Google Scholar]
- Kirchner, F.; Kosmatov, N.; Prevosto, V.; Signoles, J.; Yakobowski, B. Frama-C: A software analysis perspective. Form. Asp. Comput. 2015, 27, 573–609. [Google Scholar] [CrossRef]
- Bessey, A.; Block, K.; Chelf, B.; Chou, A.; Fulton, B.; Hallem, S.; Henri-Gros, C.; Kamsky, A.; McPeak, S.; Engler, D. A few billion lines of code later: Using static analysis to find bugs in the real world. Commun. ACM 2010, 53, 66–75. [Google Scholar] [CrossRef]
- Zhao, J.; Zhu, K.; Yu, L.; Huang, H.; Lu, Y. Yama: Precise Opcode-Based Data Flow Analysis for Detecting PHP Applications Vulnerabilities. IEEE Trans. Inf. Forensics Secur. 2025, 20, 7748–7763. [Google Scholar]
- Luo, C.; Li, P.; Meng, W. TChecker: Precise Static Inter-Procedural Analysis for Detecting Taint-Style Vulnerabilities in PHP Applications. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS ‘22); Association for Computing Machinery: New York, NY, USA, 2022; pp. 2175–2188. [Google Scholar] [CrossRef]
- Beller, M.; Bholanath, R.; McIntosh, S.; Zaidman, A. Analyzing the state of static analysis: A large-scale evaluation in open source software. In 2016 IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER); IEEE: New York, NY, USA, 2016; Volume 1, pp. 470–481. [Google Scholar]
- Hajdu, Á.; Marescotti, M.; Suzanne, T.; Mao, K.; Grigore, R.; Gustafsson, P.; Distefano, D. InfERL: Scalable and extensible Erlang static analysis. In Proceedings of the 21st ACM SIGPLAN International Workshop on Erlang 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 33–39. [Google Scholar]
- Youn, D.; Lee, S.; Ryu, S. Declarative static analysis for multilingual programs using CodeQL. Softw. Pract. Exp. 2023, 53, 1472–1495. [Google Scholar] [CrossRef]
- Kosmatov, N.; Prevosto, V.; Signoles, J. Guide to Software Verification with Frama-C; Springer: Berlin/Heidelberg, Germany, 2024. [Google Scholar] [CrossRef]
- Umann, K.; Porkoláb, Z. Towards Better Static Analysis Bug Reports in the Clang Static Analyzer. In 2025 IEEE/ACM 47th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP); IEEE: New York, NY, USA, 2025; pp. 170–180. [Google Scholar]
- Bhutani, V.; Toosi, F.G.; Buckley, J. Analysing the Analysers: An Investigation of Source Code Analysis Tools. Appl. Comput. Syst. 2024, 29, 98–111. [Google Scholar] [CrossRef]
- Thomson, P. Static Analysis: An Introduction: The fundamental challenge of software engineering is one of complexity. Queue 2021, 19, 29–41. [Google Scholar] [CrossRef]
- Samarasekara, P.; Hettiarachchi, R.; De Silva, D. A Comparative Analysis of Static and Dynamic Code Analysis Techniques. TechRxiv 2023. [Google Scholar] [CrossRef]
- Li, J. Vulnerabilities mapping based on OWASP-SANS: A survey for static application security testing (SAST). arXiv 2020, arXiv:2004.03216. [Google Scholar] [CrossRef]
- Wang, T.; Qin, S.; Chow, K.P. Towards vulnerability types classification using pure self-attention: A common weakness enumeration based approach. In 2021 IEEE 24th International Conference on Computational Science and Engineering (CSE); IEEE: New York, NY, USA, 2021; pp. 146–153. [Google Scholar]
- Mateo Tudela, F.; Bermejo Higuera, J.R.; Bermejo Higuera, J.; Sicilia Montalvo, J.A.; Argyros, M.I. On combining static, dynamic and interactive analysis security testing tools to improve owasp top ten security vulnerability detection in web applications. Appl. Sci. 2020, 10, 9119. [Google Scholar] [CrossRef]
- Li, Y.; Yao, P.; Yu, K.; Wang, C.; Ye, Y.; Li, S.; Luo, M.; Liu, Y.; Ren, K. Understanding Industry Perspectives of Static Application Security Testing (SAST) Evaluation. In Proceedings of the ACM on Software Engineering; Association for Computing Machinery: New York, NY, USA, 2025; pp. 3033–3056. [Google Scholar]
- Marashdih, A.W.; Zaaba, Z.F.; Suwais, K. An enhanced static taint analysis approach to detect input validation vulnerability. J. King Saud Univ.-Comput. Inf. Sci. 2023, 35, 682–701. [Google Scholar] [CrossRef]
- Stoenescu, R.; Popovici, M.; Negreanu, L.; Raiciu, C. Symnet: Scalable symbolic execution for modern networks. In Proceedings of the 2016 ACM SIGCOMM Conference; Association for Computing Machinery: New York, NY, USA, 2016; pp. 314–327. [Google Scholar]
- Goseva-Popstojanova, K.; Perhinschi, A. On the capability of static code analysis to detect security vulnerabilities. Inf. Softw. Technol. 2015, 68, 18–33. [Google Scholar] [CrossRef]
- Kang, H.J.; Aw, K.L.; Lo, D. Detecting false alarms from automatic static analysis tools: How far are we? In Proceedings of the 44th International Conference on Software Engineering; Association for Computing Machinery: New York, NY, USA, 2022; pp. 698–709. [Google Scholar]
- Wadhams, Z.D.; Izurieta, C.; Reinhold, A.M. Barriers to using static application security testing (SAST) tools: A literature review. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering Workshops; Association for Computing Machinery: New York, NY, USA, 2024; pp. 161–166. [Google Scholar]
- Jaspan, C.; Jorde, M.; Knight, A.; Sadowski, C.; Smith, E.K.; Winter, C.; Murphy-Hill, E. Advantages and disadvantages of a monolithic repository: A case study at google. In Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice, Gothenburg, Sweden, 27 May–3 June 2018; pp. 225–234. [Google Scholar]
- Elder, S.; Zahan, N.; Kozarev, V.; Shu, R.; Menzies, T.; Williams, L. Structuring a comprehensive software security course around the OWASP application security verification standard. In 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET); IEEE: New York, NY, USA, 2021; pp. 95–104. [Google Scholar]
- Lee, S.U.; Dong, L.; Xing, Z.; Ahmed, M.E.; Avgoustakis, S. Software Security Mapping Framework: Operationalization of Security Requirements. arXiv 2025, arXiv:2506.11051. [Google Scholar]
- Kim, S.; Ryu, S. Analysis of blockchain smart contracts: Techniques and insights. In 2020 IEEE Secure Development (SecDev); IEEE: New York, NY, USA, 2020; pp. 65–73. [Google Scholar]
- Durieux, T.; Ferreira, J.F.; Abreu, R.; Cruz, P. Empirical review of automated analysis tools on 47,587 ethereum smart contracts. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering; Association for Computing Machinery: New York, NY, USA, 2020; pp. 530–541. [Google Scholar]
- Grishchenko, I.; Maffei, M.; Schneidewind, C. A semantic framework for the security analysis of ethereum smart contracts. In International Conference on Principles of Security and Trust; Springer International Publishing: Cham, Switzerland, 2018; pp. 243–269. [Google Scholar] [CrossRef]
- Atzei, N.; Bartoletti, M.; Cimoli, T. A survey of attacks on ethereum smart contracts (sok). In International Conference on Principles of Security and Trust 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 164–186. [Google Scholar]
- Tsankov, P.; Dan, A.; Drachsler-Cohen, D.; Gervais, A.; Buenzli, F.; Vechev, M. Securify: Practical security analysis of smart contracts. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security; Association for Computing Machinery: New York, NY, USA, 2018; pp. 67–82. [Google Scholar]
- Zhang, Q.; He, D.; Liu, H.; Shao, X. Adaptive Sliding Mode Security Control for Rotary Inverted Pendulum Against Randomly Occurring False Data Injection Attacks. IEEE Trans. Autom. Sci. Eng. 2025, 22, 17988–17999. [Google Scholar] [CrossRef]
- Zhang, Q.; He, D.; Li, X.; Liu, H.; Shao, X. Enhanced state-constrained adaptive fuzzy exact tracking control for nonlinear strict-feedback systems. Fuzzy Sets Syst. 2025, 522, 109598. [Google Scholar] [CrossRef]
- Panchal, P.; Hein, L.; Myschik, S.; Holzapfel, F. A Systematic and Agile Approach to Developing DO-178C Compliant Model-Based Safety-Critical Software; Deutsche Gesellschaft für Luft-und Raumfahrt-Lilienthal-Oberth eV: Bonn, Germany, 2024. [Google Scholar]
- Cousot, P.; Cousot, R.; Feret, J.; Mauborgne, L.; Miné, A.; Monniaux, D.; Blanchet, B. The Astrée Static Analyzer. 2015. Available online: http://www.astree.ens.fr (accessed on 10 February 2026).
- Voelter, M.; Kolb, B.; Birken, K.; Tomassetti, F.; Alff, P.; Wiart, L.; Wortmann, A.; Nordmann, A. Using language workbenches and domain-specific languages for safety-critical software development. Softw. Syst. Model. 2019, 18, 2507–2530. [Google Scholar] [CrossRef]
- Araujo, H.; Mousavi, M.R.; Varshosaz, M. Testing, validation, and verification of robotic and autonomous systems: A systematic review. ACM Trans. Softw. Eng. Methodol. 2023, 32, 1–61. [Google Scholar] [CrossRef]
- Giannaros, A.; Karras, A.; Theodorakopoulos, L.; Karras, C.; Kranias, P.; Schizas, N.; Kalogeratos, D.; Tsolis, D. Autonomous vehicles: Sophisticated attacks, safety issues, challenges, open topics, blockchain, and future directions. J. Cybersecur. Priv. 2023, 3, 493–543. [Google Scholar] [CrossRef]
- Drobnjaković, F.; Subotić, P.; Urban, C. Abstract interpretation-based data leakage static analysis. arXiv 2022, arXiv:2211.16073. [Google Scholar] [CrossRef]
- Dong, Z.; Andrzejak, A.; Shao, K. Practical and accurate pinpointing of configuration errors using static analysis. In 2015 IEEE International Conference on Software Maintenance and Evolution (ICSME); IEEE: New York, NY, USA, 2015; pp. 171–180. [Google Scholar]
- Mitchell, S.; Potash, E.; Barocas, S.; D’Amour, A.; Lum, K. Algorithmic fairness: Choices, assumptions, and definitions. Annu. Rev. Stat. Its Appl. 2021, 8, 141–163. [Google Scholar] [CrossRef]
- Zhang, J.M.; Harman, M.; Ma, L.; Liu, Y. Machine learning testing: Survey, landscapes and horizons. IEEE Trans. Softw. Eng. 2020, 48, 1–36. [Google Scholar] [CrossRef]
- Barham, P.; Isard, M. Machine learning systems are stuck in a rut. In Proceedings of the Workshop on Hot Topics in Operating Systems; Association for Computing Machinery: New York, NY, USA, 2019; pp. 177–183. [Google Scholar]
- Zuo, C.; Lin, Z.; Zhang, Y. Why does your data leak? Uncovering the data leakage in cloud from mobile apps. In 2019 IEEE Symposium on Security and Privacy (SP); IEEE: New York, NY, USA, 2019; pp. 1296–1310. [Google Scholar]
- Humbatova, N.; Jahangirova, G.; Bavota, G.; Riccio, V.; Stocco, A.; Tonella, P. Taxonomy of real faults in deep learning systems. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1110–1121. [Google Scholar]
- Zafar, M.B.; Valera, I.; Rogriguez, M.G.; Gummadi, K.P. Fairness constraints: Mechanisms for fair classification. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Lauderdale, FL, USA, 20–22 April 2017; pp. 962–970. [Google Scholar]
- Dwarakanath, A.; Ahuja, M.; Sikand, S.; Rao, R.M.; Bose, R.J.C.; Dubash, N.; Podder, S. Identifying implementation bugs in machine learning based image classifiers using metamorphic testing. In Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis; Association for Computing Machinery: New York, NY, USA, 2018; pp. 118–128. [Google Scholar]
- Kafai, Y.; Biswas, G.; Hutchins, N.; Snyder, C.; Brennan, K.; Haduong, P.; DesPortes, K.; Fong, M.; Flood, V.J.; Aalst, O.W.-v.; et al. Turning bugs into learning opportunities: Understanding debugging processes, perspectives, and pedagogies. In Proceedings of the 14th International Conference of the Learning Sciences (ICLS 2020); Gresalfi, M., Horn, I.S., Eds.; International Society of the Learning Sciences: Bloomington, IN, USA, 2020; pp. 374–381. Available online: https://repository.isls.org/bitstream/1/6661/1/374-381.pdf (accessed on 10 February 2026).
- Belcastro, L.; Cantini, R.; Marozzo, F.; Orsino, A.; Talia, D.; Trunfio, P. Programming big data analysis: Principles and solutions. J. Big Data 2022, 9, 4. [Google Scholar] [CrossRef]
- Muske, T.; Serebrenik, A. Survey of approaches for handling static analysis alarms. In 2016 IEEE 16th International Working Conference on Source Code Analysis and Manipulation (SCAM); IEEE: New York, NY, USA, 2016; pp. 157–166. [Google Scholar]
- Gu, R.; Zuo, Z.; Jiang, X.; Yin, H.; Wang, Z.; Wang, L.; Wang, L.; Li, X.; Huang, Y. Towards Efficient Large-Scale Interprocedural Program Static Analysis on Distributed Data-Parallel Computation. IEEE Trans. Parallel Distrib. Syst. 2021, 32, 867–883. [Google Scholar] [CrossRef]
- Guo, Z.; Tan, T.; Liu, S.; Liu, X.; Lai, W.; Yang, Y.; Li, Y.; Chen, L.; Dong, W.; Zhou, Y. Mitigating False Positive Static Analysis Warnings: Progress, Challenges, and Opportunities. IEEE Trans. Softw. Eng. 2023, 49, 5154–5188. [Google Scholar] [CrossRef]
- Habib, A.; Pradel, M. How many of all bugs do we find? A study of static bug detectors. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering; Association for Computing Machinery: New York, NY, USA, 2018; pp. 317–328. [Google Scholar]
- Zhang, H.; Yu, Y.; Jiao, J.; Xing, E.; El Ghaoui, L.; Jordan, M. Theoretically principled trade-off between robustness and accuracy. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019. [Google Scholar]
- Beyer, D.; Strejček, J. Improvements in software verification and witness validation: SV-COMP 2025. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems; Springer Nature Switzerland: Cham, Switzerland, 2025; pp. 151–186. [Google Scholar] [CrossRef]
- Amadini, R.; Gange, G.; Schachte, P.; Søndergaard, H.; Stuckey, P.J. Abstract interpretation, symbolic execution and constraints. In Recent Developments in the Design and Implementation of Programming Languages; Schloss Dagstuhl–Leibniz-Zentrum für Informatik: Wadern, Germany, 2020; pp. 1–7. [Google Scholar]
- Hancock, T.M.; Gross, S.; McSpadden, J.; Kushner, L.; Milne, J.; Hacker, J.; Walsh, R.; Hornbuckle, C.; Campbell, C.; Kobayashi, K. The DARPA millimeter wave digital arrays (MIDAS) program. In Proceedings of the 2020 IEEE BiCMOS and Compound Semiconductor Integrated Circuits and Technology Symposium (BCICTS), Monterey, CA, USA, 16–19 November 2020; pp. 1–4. [Google Scholar]
- Zampetti, F.; Scalabrino, S.; Oliveto, R.; Canfora, G.; Di Penta, M. How open source projects use static code analysis tools in continuous integration pipelines. In 2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR); IEEE: New York, NY, USA, 2017; pp. 334–344. [Google Scholar]
- Christakis, M.; Bird, C. What developers want and need from program analysis: An empirical study. In Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering; Association for Computing Machinery: New York, NY, USA, 2016; pp. 332–343. [Google Scholar]
- Hilton, M.; Tunnell, T.; Huang, K.; Marinov, D.; Dig, D. Usage, costs, and benefits of continuous integration in open-source projects. In Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering; Association for Computing Machinery: New York, NY, USA, 2016; pp. 426–437. [Google Scholar]
- Gopstein, D.; Iannacone, J.; Yan, Y.; DeLong, L.; Zhuang, Y.; Yeh, M.K.C.; Cappos, J. Understanding misunderstandings in source code. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering; Association for Computing Machinery: New York, NY, USA, 2017; pp. 129–139. [Google Scholar]
- Liu, R.; Wang, Y.; Xu, H.; Sun, J.; Zhang, F.; Li, P.; Guo, Z. Vul-LMGNNs: Fusing language models and online-distilled graph neural networks for code vulnerability detection. Inf. Fusion 2025, 115, 102748. [Google Scholar] [CrossRef]
- Feng, Z.; Guo, D.; Tang, D.; Duan, N.; Feng, X.; Gong, M.; Shou, L.; Qin, B.; Liu, T.; Jiang, D.; et al. CodeBERT: A pre-trained model for programming and natural languages. arXiv 2020, arXiv:2002.08155. [Google Scholar]
- Godefroid, P.; Levin, M.Y.; Molnar, D. SAGE: Whitebox Fuzzing for Security Testing: SAGE has had a remarkable impact at Microsoft. Queue 2012, 10, 20–27. [Google Scholar] [CrossRef]
- Le Goues, C.; Nguyen, T.; Forrest, S.; Weimer, W. GenProg: A generic method for automatic software repair. IEEE Trans. Softw. Eng. 2011, 38, 54–72. [Google Scholar] [CrossRef]
- Chen, M.; et al. Evaluating large language models trained on code. arXiv 2021, arXiv:2107.03374. [Google Scholar] [CrossRef]
- Liu, Z.; Zhu, S.; Qin, B.; Chen, H.; Song, L. Automatically detecting and fixing concurrency bugs in go software systems. In Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems; Association for Computing Machinery: New York, NY, USA, 2021; pp. 616–629. [Google Scholar]
- Watt, C. Mechanising and verifying the WebAssembly specification. In Proceedings of the 7th ACM SIGPLAN International Conference on Certified Programs and Proofs; Association for Computing Machinery: New York, NY, USA, 2018; pp. 53–65. [Google Scholar]
- Hassan, S.; Bahsoon, R. Microservices and their design trade-offs: A self-adaptive roadmap. In 2016 IEEE International Conference on Services Computing (SCC); IEEE: New York, NY, USA, 2016; pp. 813–818. [Google Scholar]
- Alonso, J.; Piliszek, R.; Cankar, M. Embracing IaC through the DevSecOps philosophy: Concepts, challenges, and a reference framework. IEEE Softw. 2022, 40, 56–62. [Google Scholar] [CrossRef]



| Database/Search Engine | Type | URL |
|---|---|---|
| Scopus | Search Engine | https://www.scopus.com |
| Google Scholar | Search Engine | https://scholar.google.com |
| IEEE Xplore | Digital Library | https://ieeexplore.ieee.org |
| ACM Digital Library | Digital Library | https://dl.acm.org |
| SpringerLink | Digital Library | https://link.springer.com |
| arXiv | Preprint Repository | https://arxiv.org |
| Inclusion Criteria | Exclusion Criteria |
|---|---|
| Published in one of the selected databases (Table 1) | Not written in English |
| Published between 2015 and 2025 to ensure relevance | Published before 2015 (with exceptions for seminal, foundational works) |
| At least one search term appears in the title, abstract, or keywords | Full text not available |
| Addresses or analyzes the stated research questions | Duplicate of another study |
| Directly addresses the techniques, applications, or evaluations of static analysis | Focuses exclusively on dynamic analysis without a static analysis component |
| Provides sufficient technical depth, empirical results, or a novel theoretical framework | Lacks technical detail, experimental validation, or a clear methodology |
| Tool | Type | Core Technique | Soundness | Key Strength | Key Limitation | Primary Domain |
|---|---|---|---|---|---|---|
| SonarQube | Industrial/OSS | Data-flow, Pattern Matching | No [50] | Broad language support, CI/CD integration | Higher false positives (30–50% in Java projects [50]), limited depth | Multi-language code quality |
| CodeQL | Industrial/OSS | Symbolic Execution, Constraint Solving | Partial [35] | Deep semantic analysis, security-focused; used to find ~70% of critical CVEs at GitHub (CodeQL version 2.6) [35] | Steep learning curve, resource-intensive | Security vulnerability detection |
| Clang Static Analyzer | Academic/OSS | Abstract Interpretation, Symbolic Execution | Partial [39] | Path-sensitive, integrates with LLVM | C-family only, moderate false positives | C/C++ systems programming |
| Astrée | Academic/Industrial | Abstract Interpretation | Yes [38] | Sound verification of runtime errors; proven absence of RTE in 132 K LoC Airbus A380 flight control software [38] | Requires expert configuration; primarily for C | Safety-critical aerospace/automotive |
| KLEE | Academic | Symbolic Execution | Partial [34] | Finds deep path-sensitive bugs; achieved 90% line coverage on GNU Coreutils [34] | Path explosion, scalability limits | C/LLVM systems testing |
| CPAchecker | Academic | Predicate Abstraction, Model Checking | Yes [48] | Configurable, strong competition results (multiple SV-COMP awards [49]) | High memory usage | Software verification (C, Java) |
| Frama-C | Academic | Abstract Interpretation, Value Analysis | Yes [51] | Suite for critical C code analysis. Interoperable analyzers for value, dependency, and WP calculus [51] | Requires expertise to configure | Safety-critical C code |
| SeaHorn | Academic | Constrained Horn Clause Solving | Yes [47] | Scalable verification of systems code | Focuses on safety properties | C program verification |
| Infer | Academic/Industrial | Separation Logic, Bi-Abduction | No [46] | Compositional analysis, low false positives (80%+ precision at Facebook scale [46]) | Focused on memory/resource bugs | Mobile/system memory safety |
| Coverity | Industrial | Advanced Inter-procedural Analysis | No [52] | Whole-program analysis, certifications | Closed source, limited extensibility | Enterprise, safety-critical |
| Yama | Academic | Opcode-based Data-Flow Analysis | Partial [53] | High precision for PHP vulnerabilities | PHP-specific | PHP web application security |
| Tchecker | Academic | Inter-procedural Taint Analysis | Partial [54] | Scalable taint tracking for PHP | PHP-specific | PHP security analysis |
| ESLint/PMD | OSS | Pattern Matching, AST Analysis | No [55] | Fast, IDE integration, customizable (100 K LoC in <10 s [55]) | Syntax-level only, no semantic analysis | Web development, code style |
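The table's distinction between syntax-level linters such as ESLint/PMD and semantic analyzers can be made concrete with a minimal sketch. The snippet below (illustrative only, not how any of the listed tools is implemented) uses Python's standard `ast` module to emulate a pattern-matching rule that flags direct calls to `eval`; the `SOURCE` string and the `find_eval_calls` helper are hypothetical names introduced for this example.

```python
import ast

# A toy code sample: one direct eval() call and one aliased call.
SOURCE = """
eval("1 + 1")   # direct call: a syntax-level rule can catch this
e = eval
e("1 + 1")      # aliased call: needs data-flow analysis to catch
"""

def find_eval_calls(source):
    """Return line numbers of calls whose target is literally named 'eval'."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

# Only the direct call is reported; the alias through `e` is invisible
# to a purely syntactic check, which is exactly the "no semantic
# analysis" limitation noted in the table.
```

This is why such checkers scale so well (a single AST walk per file) while remaining blind to anything that requires tracking values across statements.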
| Application Domain | Recommended Techniques | Rationale | Representative Tools | Key Considerations | Typical Analysis Requirements |
|---|---|---|---|---|---|
| Enterprise Web Applications | Data-flow analysis, Pattern matching | Speed/coverage balance for DevOps | SonarQube, ESLint, PMD | CI/CD integration, multi-language support, fast feedback | Medium precision, High scalability, DevOps integration |
| Security-Critical Systems | Symbolic execution, Constraint solving | High precision for complex vulnerabilities | CodeQL, Coverity, Klocwork | Vulnerability detection, Low false negatives, Compliance | High precision, Deep analysis, Security rulesets |
| Safety-Critical Embedded | Abstract interpretation, Model checking | Soundness for standards compliance | Astrée, Clang Static Analyzer, Frama-C | Soundness guarantees, Certification support, MISRA compliance | High soundness, Standards compliance, Low false positives |
| Mobile Applications | Constraint solving, Data-flow analysis | Scalable memory safety analysis | Infer, Android Lint, SonarQube | Memory safety, Performance, Battery impact detection | Medium precision, Fast analysis, Resource leak detection |
| Systems Programming | Symbolic execution, Abstract interpretation | Deep bugs + sound verification | Clang Static Analyzer, KLEE, Coverity | Memory safety, Concurrency, Low-level code analysis | Deep analysis, Path sensitivity, Pointer analysis |
| Academic/Research | All techniques (custom implementations) | Flexibility for algorithm research | CPAchecker, SeaHorn, Frama-C | Extensibility, Algorithm experimentation, Formal methods | Research flexibility, Latest techniques, Customization |
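The recommendation of abstract interpretation for safety-critical code rests on sound over-approximation: every concrete behavior is covered, at the price of also covering infeasible ones. A minimal sketch of an interval domain (toy code, not Astrée's or Frama-C's actual implementation; `join` and `add` are hypothetical helpers for this example) shows where the false positives come from.

```python
# Minimal interval domain: an abstract value is a (lo, hi) pair.

def join(a, b):
    # Least upper bound: the smallest interval containing both inputs.
    return (min(a[0], b[0]), max(a[1], b[1]))

def add(a, b):
    # Abstract addition: interval arithmetic on the bounds.
    return (a[0] + b[0], a[1] + b[1])

# Abstractly interpreting:
#   if cond: x = 1
#   else:    x = 100
#   y = x + x
x_then = (1, 1)
x_else = (100, 100)
x = join(x_then, x_else)   # (1, 100): sound, but includes infeasible x = 50
y = add(x, x)              # (2, 200): concretely y is only ever 2 or 200
# A checker asking "can y == 101?" cannot answer "no" in this
# abstraction, even though it is concretely impossible -- a false positive.
```

The over-approximation is what makes a verdict of "no runtime error" trustworthy; the imprecision it introduces is why tools such as Astrée need expert configuration to keep alarm counts manageable.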
| Technique | Soundness | Scalability | Evidence/Key Study | Primary Use Cases |
|---|---|---|---|---|
| Data-Flow Analysis | Medium | High | Flow-insensitive approximations enable scalability for large codebases but limit precision [30]. SonarQube achieves analysis speeds of ~10 K LoC/s [50]. | Code quality, Bug patterns, Style checks |
| Symbolic Execution | High (on paths) | Low-Medium | The path explosion problem fundamentally limits scalability [33], but the technique remains effective for deep bug finding: KLEE achieved 90% line coverage on GNU Coreutils [34]. | Security vulnerabilities, Complex logic bugs |
| Abstract Interpretation | High | Medium | Sound over-approximation covers all program behaviors at the cost of false positives [36]; Astrée demonstrated the absence of RTE in 132 K LoC of Airbus flight control software [38]. | Safety verification, Runtime error proof |
| Model Checking | High | Low | The state explosion problem restricts application to abstracted models, the standard approach for protocol verification [41]; state spaces are limited to ~10⁶ states without abstraction [41]. | Concurrency, Protocol verification |
| Constraint Solving | High | Low-Medium | SMT solving enables precise verification but faces computational complexity barriers [44]. Z3 solves constraints with 10⁴+ clauses in seconds [45]. | Complex properties, Type verification |
| Pattern Matching | Low | Very High | Syntax-level matching provides fast feedback but lacks semantic understanding [50]. ESLint analyzes 100 K LoC in <10 s [55]. | Syntax checks, Style enforcement |
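The path explosion limitation noted for symbolic execution in the table above has a simple mechanical cause: the executor forks at every branch whose condition depends on symbolic input. The toy sketch below (illustrative only; `count_paths` is a hypothetical helper, not part of KLEE or any listed tool) models only the forking, with path conditions as lists of (branch, outcome) pairs.

```python
def count_paths(n_branches):
    """Fork one symbolic path per branch outcome; return the path count.

    Models a program with n independent symbolic branches, e.g.
        if x > 0: ...
        if x % 2 == 0: ...
        ...
    Each branch doubles the set of live paths, so exploration is O(2^n).
    """
    paths = [[]]                       # start with a single, empty path condition
    for branch in range(n_branches):
        forked = []
        for pc in paths:               # every live path forks on this branch
            forked.append(pc + [(branch, True)])
            forked.append(pc + [(branch, False)])
        paths = forked
    return len(paths)

# 10 symbolic branches already yield 1024 paths; 30 yield over a billion,
# which is why engines like KLEE rely on search heuristics and constraint
# caching rather than exhaustive enumeration.
```

Real engines prune infeasible paths with an SMT solver, but feasible-path growth is still exponential in the worst case, which is what bounds the "Low-Medium" scalability entry.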
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Iavich, M.; Kuchukhidze, T.; Lopata, A. Static Analysis Techniques for Embedded, Cyber-Physical, and Electronic Software Systems: A Comprehensive Survey. Electronics 2026, 15, 918. https://doi.org/10.3390/electronics15050918

