Abstract
Generative AI (GenAI) systems are increasingly deployed across high-impact sectors, introducing security risks that differ fundamentally from those of traditional software. Their probabilistic behavior, emergent failure modes, and expanded attack surface (particularly through retrieval and tool integration) complicate threat modeling and control assurance. This paper presents a threat-centric analysis that maps adversarial techniques to the core architectural layers of GenAI systems, including training pipelines, model behavior, retrieval mechanisms, orchestration, and runtime interaction. Drawing on established taxonomies such as the OWASP LLM Top 10 and MITRE ATLAS alongside empirical research, we show that many GenAI security risks are structural rather than configurable, which limits the effectiveness of perimeter-based and policy-only controls. We additionally analyze the impact of regulatory divergence on GenAI security architecture and find that, in practice, EU frameworks serve as the highest common technical baseline for transatlantic deployments.