Open Access Article
Null Space Properties of Neural Networks with Applications to Image Steganography
by Xiang Li 1,* and Kevin M. Short 2,*
1 Department of Mathematics, The Ohio State University, Columbus, OH 43210, USA
2 Integrated Applied Mathematics Program, Department of Mathematics and Statistics, University of New Hampshire, Durham, NH 03824, USA
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(21), 3394; https://doi.org/10.3390/math13213394
Submission received: 12 August 2025 / Revised: 18 October 2025 / Accepted: 20 October 2025 / Published: 24 October 2025
Abstract
This paper advances beyond adversarial neural network methods by considering whether the underlying mathematics of neural networks contains inherent properties that can be exploited to fool them. In broad terms, a neural network is treated as a series of linear transformations between layers, interspersed with nonlinear stages that serve to compress outliers. The input layer of the network is typically extremely high-dimensional, yet the final classification lies in a space of much lower dimension. This dimensional reduction leads to the existence of a null space, and this paper explores how it can be exploited. Specifically, the paper extends the null space definition from linear to nonlinear maps and discusses the presence of a null space in neural networks. The null space of a neural network characterizes the component of the input data that makes no contribution to the final prediction, so it can be exploited to trick the network. One application described here leads to a method of image steganography. Through experiments on image datasets such as MNIST, it is shown that the null space components can be used to force the neural network to choose a selected hidden image class, even though the overall image can be made to look like a completely different image. The paper concludes with comparisons between what a human viewer would see and the part of the image that the neural network actually uses to make predictions, showing that what the neural network “sees” is completely different from what we would expect.
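To make the core idea concrete, the following is a minimal NumPy sketch, not taken from the paper, of how a null-space perturbation leaves a purely linear classifier's output unchanged; the weight matrix, input, shapes, and numerical tolerance are illustrative assumptions chosen to mimic an MNIST-style setup (784-dimensional inputs, 10 classes).

```python
# Minimal sketch (illustrative assumptions, not the authors' implementation):
# for a linear classifier f(x) = softmax(W x), any perturbation drawn from
# the null space of W leaves the logits, and hence the prediction, unchanged.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 784))      # hypothetical classification layer (10 classes)
x = rng.standard_normal(784)            # a flattened "image"

# Orthonormal basis of the null space of W via SVD:
# the rows of Vt beyond rank(W) span {v : W v = 0}.
_, s, Vt = np.linalg.svd(W)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]                  # shape (784 - rank, 784)

# Build a null-space perturbation and add it to the input.
coeffs = rng.standard_normal(null_basis.shape[0])
delta = coeffs @ null_basis             # satisfies W @ delta ≈ 0
x_hidden = x + delta

# The logits are unchanged up to floating-point error.
print(np.allclose(W @ x, W @ x_hidden))  # True
```

In the nonlinear, multi-layer setting studied in the paper the analysis is more involved, but this linear case illustrates why a large input dimension relative to the number of classes guarantees directions in input space that the classifier cannot see.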