Perspectives on Safety for Autonomous Vehicles
Abstract
1. Introduction
- Design Artifacts: In semiconductors, design artifacts are inserted to enable the process of componentization and structured composition. This methodology effectively allows one to build/verify components and be assured they will still work when composed at a higher level.
- Integration Mathematics: The integration of components is defined by a formal mathematics, and the combination builds a formal abstraction of reality which can be verified quickly and efficiently.
- Hierarchy of Abstractions: Abstractions can scale on top of each other such that the higher level abstraction can represent a much larger population of potential targets.
- Complexity: The intuition is that cyber-physical systems are more complex and must deal with a more unpredictable environment. However, on all the fundamental points related to V&V, semiconductors have a direct analogy to cyber-physical systems. Semiconductors, like cyber-physical systems, fundamentally live in the world of physics (electromagnetics vs. Newtonian physics) and must comprehend safe ODDs. Testing semiconductors also involves thinking through complex scenarios; in fact, because semiconductors operate on nanosecond timescales, the number of executed scenarios is quite significant. One of the major differences between semiconductors and cyber-physical systems is relative scale: semiconductors must contend with billions of objects.
- System Safety: The other intuition is that semiconductors are already part of bigger cyber-physical systems, so a safety protocol such as ISO 26262 must necessarily include semiconductors; if so, what value is added? In fact, when process-oriented protocols such as ISO 26262 are applied to semiconductors, they focus on manufacturing faults and higher-level system function. Critically, the deep mathematical processes employed to build the chip are abstracted away.
- V&V Traditional Methods: This section takes a first-principle analysis of the existing V&V methods at the center of safety protocols in the mechanical and information technology space, and shows their evolution with early integration.
- Autonomy Safety Current Approaches: This section discusses the open issues raised by the deepest forms of integration and by the introduction of artificial intelligence components.
- Semiconductor Inspired Techniques: The last section introduces the use of semiconductor inspired techniques with three specific proposals to address the open issues in autonomy V&V.
2. V&V Traditional Methods
2.1. Traditional Physics-Based Execution
- (1) Scenario Generation: One need only worry about the state space constrained by the laws of physics. Thus, objects which defy gravity cannot exist. Every actor is explicitly constrained by the laws of physics.
- (2) Monotonicity: In many interesting dimensions, there are strong properties of monotonicity. As an example, if one is considering stopping distance for braking, there is a critical speed above which there will be an accident. Critically, all the speed bins below this critical speed are safe and do not have to be explored.
- Failure mechanisms are identified;
- A test and safety argument is built to address the failure mechanism;
- There is an active process by a regulator (or documentation for self-regulation) which evaluates these two, and acts as a judge to approve/decline.
- Constrained and well-behaved space for scenario test generation.
- Expensive physics based simulations.
- Regulations focused on mechanical failure.
- In safety situations, regulations focused on a process to demonstrate safety with a key idea of design assurance levels.
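The monotonicity point above lends itself to a small worked example. The sketch below (with assumed deceleration and reaction-time parameters, not taken from any standard) binary-searches over speed to locate the critical speed for a given available stopping distance; monotonicity guarantees that every speed bin below the result is safe and need not be tested individually.

```python
# Illustrative sketch: exploiting monotonicity in braking-scenario tests.
# Stopping distance grows monotonically with speed, so a binary search
# finds the critical speed; all speed bins below it are provably safe.
# Deceleration and reaction-time values are assumptions for illustration.

def stopping_distance(speed_mps: float, decel_mps2: float = 6.0,
                      reaction_s: float = 1.0) -> float:
    """Reaction-time travel plus kinematic braking distance v^2 / (2a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def critical_speed(available_m: float, lo: float = 0.0, hi: float = 60.0,
                   tol: float = 0.01) -> float:
    """Largest speed that still stops within `available_m`.
    Monotonicity of stopping_distance makes the binary search valid."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if stopping_distance(mid) <= available_m:
            lo = mid  # safe: critical speed is at or above mid
        else:
            hi = mid  # unsafe: critical speed is below mid
    return lo

print(critical_speed(available_m=50.0))  # ~19.2 m/s under these assumptions
```

The search touches only O(log n) of the speed bins, which is the practical payoff of the monotonicity property described above.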
2.2. Traditional Decision-Based Execution
- (1) Code Coverage: Here, the structural specification of the virtual model is used as a constraint to help drive the test generation process. This is carried out with software or hardware (Register Transfer Level (RTL) code).
- (2) Structured Testing: A process of component, subsection, and integration testing has been developed to minimize propagation of errors.
- (3) Design Reviews: Structured design reviews with specs and code are considered best practice.
- Known knowns—Issues that are both identified and understood, representing defects detectable through standard debugging or verification processes.
- Known unknowns—Anticipated risks or potential failure modes whose specific manifestations or root causes remain uncertain but are targeted through exploratory or stress testing.
- Unknown unknowns—Completely unanticipated issues that emerge without prior awareness, often revealing fundamental gaps in system design, understanding, or testing scope.
- (1) Unconstrained and not well-behaved execution space for scenario test generation.
- (2) Generally, less expensive simulation execution (no physical laws to simulate).
- (3) V&V focused on logical errors, not mechanical failure.
- (4) Generally, no defined regulatory process for safety-critical applications; most software is “best efforts”.
- (5) “Unknown unknowns” are a key focus of validation.
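The code-coverage idea in item (1) of Section 2.2 can be sketched as a coverage-driven test loop: the structure of the code itself (its branches) defines the coverage goal, and random stimulus is generated until every branch has been exercised. The device-under-test here is a toy placeholder, not any real AV component.

```python
# Sketch of coverage-driven test generation: code structure constrains
# and drives the stimulus loop. The device-under-test is a toy example.
import random

COVERED: set = set()
ALL_BRANCHES = {"braking", "stopped", "cruising"}

def device_under_test(speed: int, brake: bool) -> str:
    """Toy decision logic; each branch records its coverage point."""
    if brake and speed > 0:
        COVERED.add("braking")
        return "slowing"
    if speed == 0:
        COVERED.add("stopped")
        return "stopped"
    COVERED.add("cruising")
    return "cruising"

def generate_until_covered(seed: int = 0, max_tests: int = 1000) -> int:
    """Random stimulus until every branch is hit; returns tests used."""
    rng = random.Random(seed)
    for n in range(1, max_tests + 1):
        device_under_test(rng.randint(0, 5), rng.random() < 0.5)
        if COVERED == ALL_BRANCHES:
            return n
    return max_tests
```

Because the execution space is unconstrained (point (1) above), coverage metrics like this stand in for the physical-law constraints that bound PBE test generation.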
2.3. Mixed Domain Architectures
- Mechanical Replacement—characterized by a large PBE component and a small DBE component.
- Electronic Adjacent—where PBE and DBE systems operate as separate but interacting domains.
- Autonomy—dominated by the DBE component, with the PBE portion playing a supporting role.
- Layer 4 (Core Physics): the physical world governed by deterministic, well-understood PBE properties.
- Layer 3 (Actuation and Edge Sensing): traditional control and sensor interfaces that largely retain PBE characteristics.
- Layer 2 (Computation and AI): software-intensive DBE components responsible for inference, prediction, and decision-making.
- Layer 1 (Design-for-Experiment V&V Layer): the outermost assurance framework, which must validate a system with inherently PBE behaviors through an interface dominated by stochastic, data-driven DBE-AI processes.

3. Autonomy V&V Current Approaches
- System Design Process—A structured development-assurance approach for complex systems, integrating safety certification within the broader system-engineering process.
- Formalization—The explicit definition of system operating conditions, intended functionalities, expected behaviors, and associated risks or hazards requiring mitigation.
- Lifecycle Management—The management of components, systems, and development processes across the entire product lifecycle, from concept through decommissioning.
- AI Specification—How do we formally specify and bound the expected behavior of a system whose logic emerges from data rather than design [20]?
- Intelligent Scaling—How do we ensure system reliability, safety, and explainability as AI systems scale in complexity, data volume, and autonomy?
3.1. AI Component Validation
- Training Set Validation—Because the learned AI component is often too complex for direct formal analysis, one strategy is to examine the training dataset in relation to the Operational Design Domain (ODD). The objective is to identify “gaps” or boundary conditions where the data distribution may not adequately represent real-world scenarios, and to design targeted tests that expose these weaknesses [22].
- Robustness to Noise—Another approach, supported by both simulation and formal methods [23], is to assert higher-level system properties and test the model’s adherence to them. For instance, in object recognition, one might assert that an object should be correctly classified regardless of its orientation or lighting conditions.
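The training-set validation strategy above can be made concrete with a minimal sketch: discretize the ODD dimensions into bins and flag bins with too few training samples as candidate "gaps" for targeted testing. The ODD dimensions, their values, and the threshold here are illustrative assumptions, not from any cited methodology.

```python
# Minimal sketch of training-set validation against an ODD: enumerate
# the ODD cells and flag those the training data fails to cover.
# The ODD dimensions and min_count threshold are assumptions.
from itertools import product

ODD = {
    "lighting": ["day", "dusk", "night"],
    "weather":  ["clear", "rain", "fog"],
}

def find_gaps(samples: list, min_count: int = 1) -> list:
    """Return ODD cells with fewer than `min_count` training samples."""
    counts = {cell: 0 for cell in product(*ODD.values())}
    for s in samples:
        cell = tuple(s[dim] for dim in ODD)
        counts[cell] += 1
    return [cell for cell, c in counts.items() if c < min_count]
```

Each flagged cell becomes a target for the boundary-condition tests described in [22]; in practice the ODD has far more dimensions, and coverage arguments must handle combinatorial growth.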
3.2. AI Specification
- Making a convincing completeness argument for all foreseeable scenarios.
- Developing machinery for conformance checking against the standard.
- Connecting these assumptions to a liability and governance framework [25].
- Full Driving Capability—The AI Driver must handle the entire driving task, encompassing perception (environment sensing), decision-making (planning and response), and control (executing maneuvers such as steering or braking), including social norms and unanticipated events.
- Safety Assurance—AVs must be subject to rigorous safety processes similar to those used in aviation, including failure analysis, risk management, and fallback safety.
- Human Equivalence—The AI Driver should meet or exceed the performance of a competent human driver, obeying traffic laws, handling rare “edge cases,” and maintaining continuous situational awareness.
- Ethical and Legal Responsibility—The system must operate within moral and legal frameworks, including handling ethically charged scenarios and questions of liability.
- Testing and Validation—Robust testing, simulation, and real-world trials are essential to validate performance across diverse driving conditions, including long-tail and edge-case scenarios.
Current State of Specification Practice
3.3. Intelligent Test Generation
3.3.1. Physical Testing
- The rate of novel scenario discovery in real-world driving is inherently slow.
- The geographical and demographic bias of Tesla’s fleet restricts coverage to markets where the vehicles are deployed.
- The process of data capture, error identification, and corrective model update is complex and time-intensive—analogous to debugging computers by analyzing crash logs after failure.
3.3.2. Real-World Seeding
3.3.3. Virtual Testing
3.3.4. Process-Centric Limitations
4. Semiconductor V&V as Inspiration for Cyber-Physical Research
- (1) Guardian Accelerant Model.
- (2) Functional Decomposition.
- (3) Pseudo-Physical Scaling Abstraction.
4.1. Guardian Accelerant Model
- (1) Training Set Bounding: The core power of AI is to predict reasonable approximations between training points with high probability. However, the training set may not be complete relative to the current situation. In this decomposition, the AI algorithms can continue to be optimized for the best guess, while the Guardian can be configured with the bounding box of expectations for the AI.
- (2) Validation and Verification: Given a paradigm for a Guardian and well-established rules for interaction between the Guardian and the AV/AI, a large part of the safety focus moves to the Guardian, a somewhat simpler problem. The V&V for the AV becomes a very hard but non-safety-critical problem of performance validation.
- (3) Regulation: A very natural role for regulation and standards would be to specify the bounds for the Guardian while leaving performance optimization to industry.
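A hedged sketch of the split described above: the AI "accelerant" proposes a command, and a simple, separately verifiable Guardian clamps it to a configured bounding box of expectations. All envelope limits below are illustrative assumptions, not proposed regulatory values.

```python
# Sketch of the Guardian Accelerant decomposition: the Guardian is a
# small, analyzable envelope check wrapped around an opaque AI proposal.
# All numeric limits are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Envelope:
    """The Guardian's configured bounding box of expectations."""
    max_speed_mps: float = 30.0
    max_accel_mps2: float = 3.0
    max_decel_mps2: float = 8.0

def guardian(proposed_speed: float, current_speed: float, dt: float,
             env: Envelope = Envelope()) -> float:
    """Clamp an AI-proposed speed command to the safe envelope."""
    hi = min(env.max_speed_mps, current_speed + env.max_accel_mps2 * dt)
    lo = max(0.0, current_speed - env.max_decel_mps2 * dt)
    return min(max(proposed_speed, lo), hi)
```

The point of the decomposition is that this small function, not the AI planner, is the safety-critical artifact: it is simple enough for exhaustive V&V, and a regulator could plausibly specify the `Envelope` while leaving the proposal logic to industry.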
4.2. Functional Decomposition
- (1) Invariants: The PBE world implies invariants: real-world objects can only move so fast and cannot float or disappear, and important objects (cars) can be successfully perceived in any orientation. These invariants can be part of a broader anti-spec and the basis of a validation methodology.
- (2) PBE World Model: A standard for describing the static and dynamic aspects of a PBE world model is interesting. If such a standard existed, both the active actors and the infrastructure could contribute to building it. In this universal world model, any of the actors could communicate safety hazards to all the players through a V2X communication paradigm. Note that a universal world model (annotated by source) becomes a very good risk predictor when compared against the world model built by the host cyber-physical system.
- (3) Intelligent Test Generation: With a focus on the underlying PBE, test generation can focus on transformations to the PBE state graph, and the task of handling PBE/DBE differences can be left to other mechanisms such as the one described in (2).
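The invariants in item (1) can serve as an executable anti-spec. The sketch below flags perception-track transitions that imply physically impossible motion (teleporting, impossible speed); the speed threshold is an illustrative assumption.

```python
# Sketch of PBE invariants as an anti-spec: real objects cannot exceed
# a plausible speed or teleport between frames. A validator flags
# track transitions that violate these physical constraints.
import math

MAX_SPEED_MPS = 70.0  # assumed plausibility bound for any road object

def violates_invariants(track: list, dt: float) -> list:
    """Return frame indices where an (x, y) track implies impossible
    motion between consecutive observations taken `dt` seconds apart."""
    bad = []
    for i in range(1, len(track)):
        (x0, y0), (x1, y1) = track[i - 1], track[i]
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        if speed > MAX_SPEED_MPS:  # teleport or impossible speed
            bad.append(i)
    return bad
```

Checks of this kind require no model of the DBE internals: they validate the perception output purely against the physics of Layer 4, which is what makes them candidates for a standardized anti-spec.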
4.3. Pseudo Physical Scaling Abstractions
- (1) Design Constraints: Semiconductors insert design artifacts which enable the properties of separation-of-concerns and abstraction. These design artifacts often come at a cost in performance, power, and product cost.
- (2) Mathematics: The constrained design can now use higher-level, more abstract mathematical forms which have better properties of scalability for both design and verification.
- (3) Abstraction and Synthesis: The higher level of mathematics is supported by processes of abstraction (building the mathematical form from a lower level) and synthesis (building the lower form from the higher abstraction).
- (4) Global Abstraction Enforcement: A robust machinery validates the key aspects which enable the higher-level abstraction.
- (1) Maxwell-equation-level validation can be limited to the component level.
- (2) Electrical V&V can happen at the circuit-graph level.
- (3) By design and through various tool verifications, isolation properties are enforced.
- (1) Components are exhaustively verified and assumed to be correct at higher levels of abstraction.
- (2) Separation is imposed by design and validated through global design tools.
- (3) Interconnect is, by design, the only device which can connect components.
- (4) The collection of components and interconnect is validated at a higher level.
- (5) This process is performed recursively at higher levels of abstraction.
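The composition discipline in (1)-(5) can be illustrated with a classic digital example: a component (a full adder) is exhaustively verified once at its own level, and the composed design is then verified at the higher abstraction, trusting the component rather than re-deriving its internals. This is a pedagogical sketch, not EDA tooling.

```python
# Sketch of recursive composition: exhaustively verify a component,
# then verify the composition at the next abstraction level, assuming
# component correctness (items (1), (4), and (5) above).

def full_adder(a: int, b: int, cin: int):
    """One-bit component: returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Component-level V&V: exhaustive, because only 8 cases exist here.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert s + 2 * cout == a + b + c

def ripple_adder(x: int, y: int, width: int = 8) -> int:
    """Higher-level design: verified components joined only through
    interconnect (the carry chain); the final carry is dropped."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

At the `ripple_adder` level, tests check modular addition semantics without ever re-examining the gate equations inside `full_adder`, which is exactly the exhaustive-then-assume pattern that lets semiconductor V&V scale.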
- (1) Functional Abstract Interfaces: Building abstract interfaces between the major functional aspects of the AV stack (sensing, perception, localization, etc.) would seem to offer some ability to divide and conquer. This is not generally carried out today, and the approach of “end-to-end” AI stacks philosophically runs against these sorts of decompositions.
- (2) Execution Semantics: As Figure 1 shows, the cyber-physical paradigm consists of four layers of functionality. The inner core, Layer 4, is of course the world of physics, which has all the nice execution properties. Layer 3 consists of the traditional actuation and edge-sensing functionality, which maintains nice physical properties. Moving to Layer 2, there is a combination of software and AI which operates in the digital world. Finally, the outer design-for-experiment V&V layer has the unique challenge of testing a system with fundamentally physical properties through a layer dominated by digital properties. Research on intelligent test generation which permeates through Layer 2 could help break the logjam of current issues.
5. Conclusions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A. Governance and Verification & Validation
- Operational Design Domain (ODD): Defines the environmental conditions, operational scenarios, and system boundaries within which the product is intended to function safely and effectively.
- Coverage: Represents the degree to which the product has been validated across the entirety of its ODD, thereby quantifying the completeness of testing.
- Field Response: Specifies the mechanisms and procedures through which design shortcomings are identified, analyzed, and corrected following field incidents to prevent recurrence and mitigate future harm.

- Test Generation: From the specified ODD, relevant test scenarios are generated to exercise system functionality across the intended operational envelope.
- Execution: Each test is executed on the product under development, representing a functional transformation that produces measurable outputs.
- Criteria for Correctness: The resulting outputs are evaluated against explicit criteria that define success or failure, ensuring objective assessment of conformance.
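The three-step loop above (test generation, execution, criteria for correctness) can be expressed as a minimal executable sketch. The ODD dimensions, the system-under-test transform, and the pass criterion below are all placeholder assumptions.

```python
# The generic V&V loop from Appendix A: generate tests from the ODD,
# execute each on the product, and judge outputs against explicit
# criteria. The ODD, transform, and limit are illustrative assumptions.
import itertools

ODD_SPEEDS = range(5, 31, 5)                 # m/s, assumed envelope
ODD_SURFACES = {"dry": 0.9, "wet": 0.5}      # assumed friction values

def generate_tests():
    """Step 1: enumerate scenarios across the ODD envelope."""
    return list(itertools.product(ODD_SPEEDS, ODD_SURFACES))

def execute(speed, surface):
    """Step 2: placeholder 'product' transform: braking distance."""
    mu = ODD_SURFACES[surface]
    return speed ** 2 / (2 * 9.81 * mu)

def check(output, limit_m=60.0):
    """Step 3: explicit, objective pass/fail criterion."""
    return output <= limit_m

results = [(s, surf, check(execute(s, surf)))
           for s, surf in generate_tests()]
```

Coverage, in the sense defined above, is then the fraction of ODD cells exercised by `generate_tests`, and field response corresponds to feeding failing `(speed, surface)` cells back into the next revision of the test set.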

Appendix B. Example of Traditional ISO 26262
- (1) Define Safety Goals and Requirements (Concept Phase): Hazard Analysis and Risk Assessment (HARA): identify potential hazards related to the braking system (e.g., failure to stop the vehicle, uncommanded braking); assess risk levels using parameters like severity, exposure, and controllability; define Automotive Safety Integrity Levels (ASIL) for each hazard (ranging from ASIL A to ASIL D, where D is the most stringent); and define safety goals to mitigate hazards (e.g., ensure sufficient braking under all conditions).
- (2) Develop Functional Safety Concept: Translate safety goals into high-level safety requirements for the braking system. Ensure redundancy, diagnostics, and fail-safe mechanisms are incorporated (e.g., dual-circuit braking or electronic monitoring).
- (3) System Design and Technical Safety Concept: Break down functional safety requirements into technical requirements. Design the braking system with safety mechanisms in hardware (e.g., sensors, actuators) and software (e.g., anti-lock braking algorithms). Implement failure detection and mitigation strategies (e.g., failover to mechanical braking if electronic control fails).
- (4) Hardware and Software Development: Hardware Safety Analysis (HSA): validate that components meet safety standards (e.g., reliable braking sensors). Software development and validation: use ISO 26262-compliant processes for coding, verification, and validation; test braking algorithms under various conditions.
- (5) Integration and Testing: Perform verification of individual components and subsystems to ensure they meet technical safety requirements. Conduct integration testing of the complete braking system, focusing on functional tests (e.g., stopping distance), safety tests (e.g., behavior under fault conditions), and stress and environmental tests (e.g., heat, vibration).
- (6) Validation (Vehicle Level): Validate the braking system against the safety goals defined in the concept phase. Perform real-world driving scenarios, edge cases, and fault-injection tests to confirm safe operation. Verify compliance with ASIL-specific requirements.
- (7) Production, Operation, and Maintenance: Ensure production aligns with validated designs, implement operational safety measures (e.g., periodic diagnostics, maintenance), and monitor and address safety issues during the product’s lifecycle (e.g., software updates).
- (8) Confirmation and Audit: Use independent confirmation measures (e.g., safety audits, assessment reviews) to ensure the braking system complies with ISO 26262.
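The HARA step in item (1) can be sketched compactly. The function below combines Severity (S1-S3), Exposure (E1-E4), and Controllability (C1-C3) using the widely cited sum heuristic (S+E+C of 7 maps to ASIL A, 8 to B, 9 to C, 10 to D, anything lower to QM), which reproduces the standard's lookup table over these ranges; consult ISO 26262-3 itself for authoritative classification.

```python
# Hedged sketch of ASIL determination in HARA. Uses the common S+E+C
# sum heuristic, equivalent to the ISO 26262-3 table for S1-S3, E1-E4,
# C1-C3; the standard remains the authoritative source.

def asil(severity: int, exposure: int, controllability: int) -> str:
    """Map (S, E, C) ratings to an ASIL or QM (no ASIL required)."""
    assert 1 <= severity <= 3, "S must be S1-S3"
    assert 1 <= exposure <= 4, "E must be E1-E4"
    assert 1 <= controllability <= 3, "C must be C1-C3"
    total = severity + exposure + controllability
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")
```

For the braking example above, an uncommanded full braking event at highway speed might be rated S3/E4/C2, yielding ASIL C under this mapping (the S/E/C ratings themselves are judgment calls made during HARA, not outputs of any formula).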
Appendix C. Electronics Megatrends and Progression of Computing


References
- Othman, K. Exploring the implications of autonomous vehicles: A comprehensive review. Innov. Infrastruct. Solut. 2022, 7, 165–196. [Google Scholar] [CrossRef]
- Razdan, R. Unsettled Technology Areas in Autonomous Vehicle Test and Validation EPR2019001; SAE International: Warrendale, PA, USA, 2019. [Google Scholar]
- Razdan, R. Product Assurance in the Age of Artificial Intelligence; SAE International Research Report EPR2025011; SAE International: Warrendale, PA, USA, 2025. [Google Scholar] [CrossRef]
- ISO 26262; Road Vehicles Functional Safety. International Organization for Standardization: Geneva, Switzerland, 2018. Available online: https://www.iso.org/publication/PUB200262.html (accessed on 1 November 2025).
- Moore, G.E. Cramming More Components onto Integrated Circuits. Electronics 1965, 38, 114–117. [Google Scholar] [CrossRef]
- Lavagno, L.; Markov, I.L.; Martin, G.; Scheffer, L.K. (Eds.) Electronic Design Automation for Integrated Circuits Handbook, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2022; Volume 2, ISBN 978-1-0323-3998-6. [Google Scholar]
- Wang, L.-T.; Chang, Y.-W.; Cheng, K.-T. (Eds.) Electronic Design Automation: Synthesis, Verification, and Test; Morgan Kaufmann/Elsevier: Amsterdam, The Netherlands, 2009; ISBN 978-0-12-374364-0. [Google Scholar]
- Jansen, D. (Ed.) The Electronic Design Automation Handbook; Springer: Berlin/Heidelberg, Germany, 2010; ISBN 978-1-4419-5369-8. [Google Scholar]
- AS9100; Quality Systems—Aerospace—Model for Quality Assurance in Design, Development, Production, Installation and Servicing. SAE International: Warrendale, PA, USA, 1999. Available online: https://www.sae.org/standards/content/as9100/ (accessed on 1 November 2025).
- TÜV SÜD. IECEX International Certification Scheme—Testing & Inspection. Available online: https://www.tuvsud.com/en-us/services/product-certification/iecex-international-certification-scheme (accessed on 1 November 2025).
- Dassault Aviation. Dassault Aviation, a Major Player to Aeronautics. 2025. Available online: https://www.dassault-aviation.com/en/ (accessed on 1 November 2025).
- Siemens. Siemens Digital Industries Software. Simcenter Tire Software. Available online: https://plm.sw.siemens.com/en-US/simcenter/mechanical-simulation/tire-simulation (accessed on 19 March 2025).
- Price, D. Pentium FDIV flaw—Lessons learned. IEEE Micro 1995, 15, 86–88. [Google Scholar] [CrossRef]
- CMMI Product Team. CMMI for Development, Version 1.3; Technical Report CMU/SEI-2010-TR-033; Carnegie Mellon University, Software Engineering Institute’s Digital Library: Pittsburgh, PA, USA, 2010. [Google Scholar] [CrossRef]
- Hellebrand, S.; Rajski, J.; Tarnick, S.; Courtois, B.; Figueras, J. Random Testing of Digital Circuits: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 1993. [Google Scholar] [CrossRef]
- ISO/PAS 21448:2019; Road Vehicles—Safety of the Intended Functionality. International Organization for Standardization: Geneva, Switzerland, 2019.
- DO-178C; Software Considerations in Airborne Systems and Equipment Certification. RTCA Incorporated: Washington, DC, USA, 2011. Available online: https://www.rtca.org/training/do-178c-training/ (accessed on 1 November 2025).
- Varshney, K.R. Engineering Safety in Machine Learning. In Proceedings of the 2016 Information Theory and Applications Workshop (ITA), La Jolla, CA, USA, 31 January–5 February 2016; pp. 1–5. [Google Scholar] [CrossRef]
- Schwalbe, G.; Schels, M. A Survey on Methods for the Safety Assurance of Machine Learning Based Systems. In Computer Safety, Reliability, and Security (SAFECOMP 2020); LNCS 12234; Springer: Berlin/Heidelberg, Germany, 2020; pp. 263–275. [Google Scholar] [CrossRef]
- Batarseh, F.A.; Freeman, L.; Huang, C.H. A survey on artificial intelligence assurance. J. Big Data 2021, 8, 60. [Google Scholar] [CrossRef]
- ISO/PAS 8800:2024; Road Vehicles—Safety and Artificial Intelligence. International Organization for Standardization (ISO): Geneva, Switzerland, 2024. Available online: https://www.iso.org/standard/83303.html (accessed on 1 November 2025).
- Renz, J.; Oehm, L.; Klusch, M. Towards Robust Training Datasets for Machine Learning with Ontologies: A Case Study for Emergency Road Vehicle Detection. arXiv 2024, arXiv:2406.15268. [Google Scholar] [CrossRef]
- Kouvaros, P.; Leofante, F.; Edwards, B.; Chung, C.; Margineantu, D.; Lomuscio, A. Verification of semantic key point detection for aircraft pose estimation. In Proceedings of the 20th International Conference on Principles of Knowledge Representation and Reasoning, Rhodes, Greece, 2–8 September 2023. [Google Scholar] [CrossRef]
- IEEE Std 2846-2022; IEEE Standard for Assumptions in Safety-Related Models for Automated Driving Systems. IEEE: Piscataway, NJ, USA, 2022.
- National Institute of Standards and Technology (NIST). AI Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce, 2023. Available online: https://www.nist.gov/itl/ai-risk-management-framework (accessed on 1 November 2025).
- Koopman, P.; Widen, W.H. A Reasonable Driver Standard for Automated Vehicle Safety. Legal Studies Research Paper No. 4475181; University of Miami School of Law; Available online: https://www.researchgate.net/publication/373919605_A_Reasonable_Driver_Standard_for_Automated_Vehicle_Safety (accessed on 8 October 2025).
- Insurance Institute for Highway Safety. First Partial Driving Automation Safeguard Ratings Show Industry Has Work to Do. March 2024. Available online: https://www.iihs.org/news/detail/first-partial-driving-automation-safeguard-ratings-show-industry-has-work-to-do (accessed on 1 November 2025).
- Insurance Institute for Highway Safety. IIHS Creates Safeguard Ratings for Partial Automation. IIHS-HLDI Crash Testing and Highway Safety. 2022. Available online: https://www.iihs.org/news/detail/iihs-creates-safeguard-ratings-for-partial-automation (accessed on 1 November 2025).
- National Highway Traffic Safety Administration. Final Rule: Automatic Emergency Braking Systems for Light Vehicles. April 2024. Available online: https://www.nhtsa.gov/sites/nhtsa.gov/files/2024-04/final-rule-automatic-emergency-braking-systems-light-vehicles_web-version.pdf (accessed on 1 November 2025).
- Uvarov, T.; Tripathi, B.; Fainstain, E.; Tesla Inc. Data Pipeline and Deep Learning System for Autonomous Driving. US11215999B2, 20 June 2018. Available online: https://patents.google.com/patent/US11215999B2/en (accessed on 1 November 2025).
- Tesla. Tesla Dojo Supercomputer: Training AI for Autonomous Vehicles; Tesla: Austin, TX, USA, 2021. [Google Scholar]
- Winner, H.; Lemmer, K.; Form, T.; Mazzega, J. PEGASUS—First Steps for the Safe Introduction of Automated Driving; Springer Nature: Berlin/Heidelberg, Germany, 2019; pp. 185–195. [Google Scholar] [CrossRef]
- The Global Initiative for Certifiable AV Safety. Available online: https://safetypool.ai (accessed on 1 November 2025).
- ASAM, e.V. ASAM OpenSCENARIO® DSL. 2024. Available online: https://www.asam.net/standards/detail/openscenario-dsl/ (accessed on 1 November 2025).
- UL 4600; Standard for Safety for the Evaluation of Autonomous Products. (3rd ed.). UL Standards & Engagement: Evanston, IL, USA, 2023.
- Diaz, M.; Woon, M. DRSPI—A Framework for Preserving Automated Vehicle Safety Claims by Unknown Unknowns Recognition and Dynamic Runtime Safety Performance Indicator Improvement; SAE Technical Papers on CD-ROM/SAE Technical Paper Series; SAE International: Warrendale, PA, USA, 2022. [Google Scholar] [CrossRef]
- Goldberg, D. What every computer scientist should know about floating-point arithmetic. ACM Comput. Surv. 1991, 23, 5–48. [Google Scholar] [CrossRef]
- Nagel, L.W.; Pederson, D.O. SPICE (Simulation Program with Integrated Circuit Emphasis); Technical Report No. UCB/ERL M382; EECS Department, University of California: Berkeley, CA, USA, 1973. [Google Scholar]
- University of California, Berkeley. SPICE3F5: Simulation Program with Integrated Circuit Emphasis; Electronics Research Laboratory, University of California: Berkeley, CA, USA, 1993. [Google Scholar]
- Shannon, C.E. A symbolic analysis of relay and switching circuits. Trans. Am. Inst. Electr. Eng. 1938, 57, 713–723. [Google Scholar] [CrossRef]
- Razdan, R. Unsettled Topics Concerning Automated Driving Systems and the Transportation Ecosystem EPR2019005; SAE International: Warrendale, PA, USA, 2019. [Google Scholar]
- Ross, K. Product Liability Law and Its Effect on Product Safety; In Compliance Magazine: Littleton, MA, USA, 2023; Available online: https://incompliancemag.com/product-liability-law-and-its-effect-on-product-safety/ (accessed on 1 November 2025).
- Forsberg, K.; Mooz, H. The relationship of system engineering to the project cycle. In Proceedings of the First Annual Symposium of National Council on System Engineering, Chattanooga, TN, USA, 20–23 October 1991; Volume 1, pp. 57–65. [Google Scholar] [CrossRef]

| Aspect | ISO 26262 | SOTIF |
|---|---|---|
| Focus | System faults and malfunctions | Hazards due to functional insufficiencies |
| Applicability | All safety-critical systems | Primarily ADAS and autonomous systems |
| Hazard Source | Hardware and software failure | Limitations in functionality, unknown scenarios |
| Methods | Fault avoidance and control | Scenario-based testing |
| Conventional Algorithm | ML Algorithms | Comment |
|---|---|---|
| Logical Theory | No Theory | In conventional algorithms, one needs a theory of operation to implement the solution. ML algorithms can often “work” without a clear understanding of exactly why they work. |
| Analyzable | Not Analyzable | Conventional algorithms are encoded in a way one can see and analyze the software code. Most validation and verification methodologies rely on this ability to find errors. ML algorithms offer no such ability, and this leaves a large gap in validation. |
| Causal | Correlation | Conventional algorithms have built-in causality, while ML algorithms discover correlations. The difference is important if one wants to reason at a higher level. |
| Deterministic | Non-Deterministic | Conventional algorithms are deterministic in nature, while ML algorithms are fundamentally probabilistic. |
| Known Computational Complexity | Unknown Computational Complexity | Given the analyzable nature of conventional algorithms, one can build a model of computational complexity, i.e., how long the algorithm will take to run. For ML techniques, no generic method exists to evaluate computational complexity. |
| V&V Technique | Software | AI/ML |
|---|---|---|
| Coverage Analysis | Code structure provides basis of coverage | No structure |
| Code Reviews | Crowd-source expert knowledge | No code to review |
| Version Control | Careful construction/release | Very difficult with data |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Razdan, R.; Sell, R.; Akbas, M.I.; Menase, M. Perspectives on Safety for Autonomous Vehicles. Electronics 2025, 14, 4500. https://doi.org/10.3390/electronics14224500

