Abstract
Designing provably safe controllers for high-dimensional nonlinear systems is a fundamental challenge in control theory. While control barrier functions (CBFs) provide safety certificates through forward invariance, manually crafting these barriers for complex systems becomes intractable. Neural network approximation offers expressiveness but traditionally lacks the formal guarantees on approximation error and Lipschitz continuity essential for safety-critical applications. This work establishes rigorous theoretical foundations for learned barrier functions through explicit probabilistic bounds relating neural approximation error to safety failure probability. The framework integrates Lipschitz-constrained neural networks, trained in a PAC-learning setting, within multi-objective model predictive control. Three principal results emerge: a probabilistic forward invariance theorem that explicitly connects network parameters to failure probability; a sample complexity analysis proving safe set expansion; and computational complexity bounds enabling 50 Hz real-time control. Experimental validation across 648,000 time steps demonstrates a 99.8% success rate with zero safety violations, a measured approximation error (in meters) consistent with its theoretical bound, and a 16.2 ms average solution time. The framework achieves a 52% reduction in conservatism compared to manually designed barriers and a 21% improvement in multi-objective Pareto hypervolume while maintaining formal safety guarantees.