Abstract
Privacy-preserving machine learning (PPML) is a core element of responsible AI, enabling model training and inference without exposing sensitive information. This survey presents a comprehensive examination of the major cryptographic PPML techniques and introduces a unified taxonomy covering technical models, verification criteria, and evaluation dimensions. The study consolidates findings from both survey and experimental works using structured comparison tables and shows that recent research increasingly adopts hybrid and verifiable PPML designs. In addition, we map PPML applications across domains such as healthcare, finance, the Internet of Things (IoT), and edge systems, indicating that cryptographic approaches are progressively transitioning from theoretical constructs to deployable solutions. Finally, the survey outlines emerging trends, including the growth of zero-knowledge proof (ZKP)-based verification and domain-specific hybrid architectures, and identifies practical considerations that shape PPML adoption in real systems.