Abstract
We introduce Lehmer Activation Units (LAUs), a class of aggregation-based neural activations, derived from the Lehmer transform, that unify feature weighting and nonlinearity within a single differentiable operator. Unlike conventional pointwise activations, LAUs operate on collections of features and adapt their aggregation behavior through learnable parameters, yielding intrinsically interpretable representations. We develop both real-valued and complex-valued formulations, with the complex extension enabling phase-sensitive interactions and enhanced expressive capacity. We establish a universal approximation theorem for LAU-based networks, providing formal guarantees of expressive completeness. Empirically, we show that LAUs enable highly compact architectures to achieve strong predictive performance under tightly controlled experimental settings, demonstrating that expressive power can be concentrated in individual neurons rather than in architectural depth. These results position LAUs as a principled, interpretable, and efficient alternative to conventional activation functions.
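For orientation, the sketch below shows the kind of aggregation a LAU builds on, assuming the classical weighted Lehmer mean as the base operator; the paper's exact LAU parameterization, including its complex-valued extension, may differ.

\[
L_p(\mathbf{x}; \mathbf{w}) \;=\; \frac{\sum_{i=1}^{n} w_i\, x_i^{\,p}}{\sum_{i=1}^{n} w_i\, x_i^{\,p-1}}, \qquad w_i \ge 0,\; x_i > 0,
\]

where the exponent $p$ (learnable in this reading) interpolates between familiar aggregates: $p \to -\infty$ recovers the minimum, $p = 0$ the weighted harmonic mean, $p = 1$ the weighted arithmetic mean, and $p \to \infty$ the maximum.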