Article

Kernel VICReg for Self-Supervised Learning in Reproducing Kernel Hilbert Space

1 Vision and Image Processing Group, Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
2 Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Big Data Cogn. Comput. 2026, 10(3), 78; https://doi.org/10.3390/bdcc10030078
Submission received: 28 January 2026 / Revised: 26 February 2026 / Accepted: 3 March 2026 / Published: 5 March 2026
(This article belongs to the Section Artificial Intelligence and Multi-Agent Systems)

Abstract

Self-supervised learning (SSL) has emerged as a powerful paradigm for representation learning by optimizing geometric objectives, such as invariance to augmentations, variance preservation, and feature decorrelation, without requiring labels. However, most existing methods operate in Euclidean space, limiting their ability to capture nonlinear dependencies and geometric structures. In this work, we propose Kernel VICReg, a novel self-supervised learning framework that pulls the VICReg objective into a Reproducing Kernel Hilbert Space (RKHS). By kernelizing each term of the loss (variance, invariance, and covariance), we obtain a general formulation that operates on double-centered kernel matrices and Hilbert–Schmidt norms, enabling nonlinear feature learning without explicit mappings. We demonstrate that Kernel VICReg mitigates the risk of representational collapse under challenging conditions and improves performance on datasets exhibiting nonlinear structure or limited sample regimes. Empirical evaluations across MNIST, CIFAR-10, STL-10, TinyImageNet, and ImageNet100 show consistent gains over Euclidean VICReg, with particularly strong improvements on datasets where nonlinear structures are prominent. UMAP visualizations are provided only as a qualitative illustration of embedding geometry and are not used for calibration or statistical validation. Our results suggest that kernelizing SSL objectives is a promising direction for bridging classical kernel methods with modern representation learning.

1. Introduction

Self-supervised learning (SSL) has emerged as a dominant paradigm for representation learning by leveraging the underlying structure of data without the need for human-annotated labels [1,2]. Methods such as SimCLR [1], BYOL [3], VICReg [2], and Barlow Twins [4] have demonstrated remarkable performance by enforcing objectives such as invariance to augmentations, feature decorrelation, and variance preservation. However, relying on Euclidean representations in standard self-supervised learning objectives often assumes a relatively simple geometric structure in the latent space. After multiple layers of nonlinear transformation, this assumption becomes questionable, since latent representations are likely to inhabit a highly nonlinear manifold that is poorly characterized by standard second-order statistics or $\ell_2$ distances. This motivates our kernelized formulation, which enables learning in an implicitly defined high-dimensional feature space that captures the underlying manifold structure.
Several recent works have incorporated kernels into SSL objectives, e.g., [5,6,7,8,9]. These methods typically replace similarity metrics or introduce kernel-based dependence criteria within contrastive or predictive frameworks. In contrast, our approach performs a structural lifting of VICReg itself: the variance and covariance penalties are rederived from the covariance operator in RKHS, rather than modified heuristically. This distinction is important, as it preserves the collapse-prevention mechanism of VICReg while redefining its geometry.
Kernel methods, long celebrated for their ability to implicitly map data into high-dimensional feature spaces via the kernel trick [10], offer a compelling avenue to address this limitation. In supervised settings, the transformation from linear to nonlinear models via kernelization (exemplified by the transition from linear SVM to kernel SVM) has been foundational in classical machine learning. Inspired by this paradigm, we ask: can core SSL losses be systematically lifted into a Reproducing Kernel Hilbert Space? Here, we answer this by showing how one can replace Euclidean-space objectives with their RKHS counterparts, exemplified through a kernelized version of VICReg.
We propose Kernel-VICReg, which computes invariance, variance, and covariance entirely in RKHS via double-centered kernels and Hilbert–Schmidt norms. While we focus on VICReg as a concrete example (and note that Barlow Twins admits an analogous kernelization), the same RKHS lifting could be extended to contrastive frameworks like SimCLR or predictive ones like BYOL with suitable cross-kernel formulations.
Experiments across multiple datasets show that Kernel-VICReg can yield more effective representations than its Euclidean counterpart under our evaluation protocol and can improve stability in settings where Euclidean VICReg collapses. These results suggest that integrating kernel methods into SSL is a promising direction.

1.1. Related Work: Kernel Methods in Self-Supervised Learning

While several recent SSL methods incorporate kernel functions, they fundamentally differ from Kernel VICReg in their application and scope. Existing approaches generally fall into the following categories:
  • Use of Nonlinear Dependence in SSL: Some methods, such as [5,8], use the Hilbert–Schmidt Independence Criterion (HSIC) [11] to model nonlinear dependence of samples or features in RKHS.
  • Kernels as Regularizers: Some methods, such as [9], integrate kernel tools, such as Maximum Mean Discrepancy (MMD) [12], into existing SSL losses to align feature distributions. In these cases, the kernel is used as a supplementary distance metric, but the underlying architecture remains Euclidean.
  • Implicit Kernel Analysis: Other works, such as [13], use kernels primarily as a theoretical lens to analyze the training dynamics or “spectral bias” of standard SSL objectives like SimCLR [1] or Barlow Twins [4].
In contrast, our proposed Kernel VICReg represents a systematic lifting of an entire loss function of an existing SSL method, i.e., VICReg, into the RKHS. Unlike methods that use kernels for specific terms, we kernelize the variance, invariance, and covariance terms simultaneously. This allows the model to operate on double-centered kernel matrices and Hilbert–Schmidt norms, capturing nonlinear dependencies across the entire loss function rather than just regularizing a single component. To the best of our knowledge, this is the first work to provide a complete kernelized derivation of the VICReg framework.

1.2. Positioning and Novelty

Recent works have explored the use of kernels in self-supervised learning, including kernel-based contrastive objectives and kernelized predictive frameworks. However, these methods typically replace similarity measures or dependence criteria within existing objectives (e.g., kernelized contrastive alignment or kernel dependence maximization), rather than systematically lifting the entire regularization structure of a non-contrastive SSL method into RKHS.
Our contribution is structurally different. We show that all three components of VICReg—invariance, variance preservation, and covariance decorrelation—admit principled RKHS counterparts derived from the covariance operator in Hilbert space. This yields a unified formulation in which:
1. invariance is expressed via cross-kernel trace distances,
2. variance preservation is tied to eigenvalues of the centered kernel matrix, and
3. covariance decorrelation becomes a Hilbert–Schmidt norm penalty.
To our knowledge, a full operator-level lifting of VICReg into RKHS using covariance operators and double-centered Gram matrices has not been previously derived. Importantly, our formulation is not a kernel substitution in the similarity function but a redefinition of the geometry in which the SSL objective is defined.
While standard Euclidean methods may suffer from dimensional collapse when the projection head is not sufficiently wide, the proposed Kernel VICReg framework provides a more robust geometric constraint in the RKHS. Our kernelized approach enhances robustness to representation collapse by leveraging the infinite-dimensional nature of the Hilbert space to maintain feature variance.

2. Review of VICReg

2.1. Data and Network Settings

The neural network is composed of an encoder network and an expander network. Let $X := \{x_1, x_2, \ldots, x_n\}$ be the dataset, where $x_i \in \mathbb{R}^d$ and $d$ is the dimensionality of the data. Every batch of data, with size $b$, contains $X_b := \{x_1, x_2, \ldots, x_b\}$ and $X'_b := \{x'_1, x'_2, \ldots, x'_b\}$, where $x_i$ and $x'_i$ correspond to each other; e.g., they are augmentations of a common underlying image. They pass through an encoder network to output the $q$-dimensional latent embeddings $\{y_i\}_{i=1}^{b}$ and $\{y'_i\}_{i=1}^{b}$, where $y_i, y'_i \in \mathbb{R}^q$. The latent embeddings pass through the expander network to obtain the $p$-dimensional output embeddings $\{z_i\}_{i=1}^{b}$ and $\{z'_i\}_{i=1}^{b}$, where $z_i, z'_i \in \mathbb{R}^p$ with $p > q$.

2.2. VICReg Loss Function

VICReg [2] is a non-contrastive SSL method that simultaneously enforces three complementary properties on a pair of views $x$ and $x'$ of the same sample:
1. Invariance: embeddings of corresponding views should be close;
2. Variance preservation: each dimension of the embedding space should maintain sufficient spread to avoid collapse;
3. Covariance decorrelation: different embedding dimensions should be decorrelated to encourage richness.
The VICReg loss function contains three terms reflecting the three preceding properties:
The invariance loss is defined as:
$$\mathcal{L}_{\text{inv}}(x, x') = \frac{1}{b} \sum_{i=1}^{b} \| z_i - z'_i \|_2^2,$$
which encourages paired views to have similar representations.
The variance loss seeks to ensure that no dimension collapses to zero variance, and is defined as
$$\mathcal{L}_{\text{var}}(x) = \frac{1}{p} \sum_{j=1}^{p} \big[ \gamma - \sigma_j \big]_+^2,$$
for threshold $\gamma > 0$ and where $[\cdot]_+ = \max(\cdot, 0)$. The sample standard deviation is found as:
$$\sigma_j = \sqrt{\mathrm{Var}\big( \{ z_{i,j} \}_{i=1}^{b} \big) + \epsilon},$$
where the parameter $\epsilon > 0$ prevents degeneracy, and $z_{i,j}$ denotes the $j$-th dimension of $z_i$.
For the covariance loss, let
$$C = \frac{1}{b-1} \tilde{Z}^{\top} \tilde{Z} \in \mathbb{R}^{p \times p},$$
be the empirical covariance of the zero-centered embeddings $\tilde{Z} \in \mathbb{R}^{b \times p}$. VICReg penalizes off-diagonal correlations:
$$\mathcal{L}_{\text{cov}}(x) = \frac{1}{p} \sum_{j=1}^{p} \sum_{\substack{k=1 \\ k \neq j}}^{p} C_{jk}^2,$$
where $C_{jk}$ denotes the $(j,k)$-th element of the matrix $C$. Promoting a diagonal covariance matrix encourages different features to capture distinct aspects of the data.
The Overall VICReg Loss then is a weighted summation of the preceding terms:
$$\mathcal{L}_{\text{VICReg}} = \alpha \, \mathcal{L}_{\text{inv}}(x, x') + \beta \big[ \mathcal{L}_{\text{var}}(x) + \mathcal{L}_{\text{var}}(x') \big] + \zeta \big[ \mathcal{L}_{\text{cov}}(x) + \mathcal{L}_{\text{cov}}(x') \big],$$
with hyperparameters $\alpha, \beta, \zeta > 0$. VICReg thus avoids the need for negative samples or momentum encoders by combining these three regularizers in Euclidean space, laying the groundwork for our kernelized extension in the next section.
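To make the three terms concrete, the following is a minimal PyTorch sketch of the Euclidean VICReg loss described above. It is a sketch only: the function name `vicreg_loss` and the default hyperparameter values are illustrative and are not taken from the authors' released implementation.

```python
import torch

def vicreg_loss(z, z_prime, alpha=25.0, beta=25.0, zeta=1.0, gamma=1.0, eps=1e-4):
    """Euclidean VICReg loss for two batches of embeddings z, z' of shape (b, p)."""
    b, p = z.shape

    # Invariance (Eq. (1)): mean squared distance between corresponding views.
    inv = ((z - z_prime) ** 2).sum(dim=1).mean()

    def var_cov(batch):
        # Variance (Eqs. (2)-(3)): hinge on the per-dimension standard deviation.
        std = torch.sqrt(batch.var(dim=0) + eps)
        var = torch.clamp(gamma - std, min=0.0).pow(2).mean()
        # Covariance (Eqs. (4)-(5)): squared off-diagonal entries of the covariance matrix.
        centered = batch - batch.mean(dim=0)
        cov = (centered.T @ centered) / (b - 1)
        off_diag = cov - torch.diag(torch.diagonal(cov))
        return var, off_diag.pow(2).sum() / p

    var_z, cov_z = var_cov(z)
    var_zp, cov_zp = var_cov(z_prime)
    return alpha * inv + beta * (var_z + var_zp) + zeta * (cov_z + cov_zp)
```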

3. Kernel VICReg

Standard VICReg operates in Euclidean space by enforcing variance preservation, decorrelating feature dimensions, and ensuring consistency between views of the same sample. Our proposed Kernel VICReg extends these principles to the Reproducing Kernel Hilbert Space (RKHS), allowing for non-linear representations without explicit feature mappings. We rigorously derive the kernelized counterparts of VICReg’s variance, covariance, and invariance losses.

3.1. Covariance in RKHS

Before deriving the kernelized VICReg loss components, we establish the fundamental result that the covariance operator in RKHS is proportional to the double-centered kernel matrix. The formulations derived and explained in this section will be used in kernelizing the loss terms of VICReg.
Let $\phi(x)$ be the implicit feature mapping in the RKHS $\mathcal{H}$, where $\phi(\cdot)$ is the pulling function into the RKHS. The kernel function in the RKHS $\mathcal{H}$ is defined as:
$$k(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle_{\mathcal{H}},$$
where $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ denotes the inner product in the RKHS $\mathcal{H}$.
The covariance operator in RKHS is defined as [14,15]:
$$\mathbb{R}^{t \times t} \ni C_{\phi(x)} = \frac{1}{b} \sum_{i=1}^{b} \big( \phi(x_i) - \bar{\phi} \big) \big( \phi(x_i) - \bar{\phi} \big)^{\top},$$
where $t$ is the dimensionality of the RKHS, $\phi(x_i) - \bar{\phi}$ is the centered feature map within the batch, and $\bar{\phi} = \frac{1}{b} \sum_{j=1}^{b} \phi(x_j)$ is the mean feature vector of the batch in RKHS. If we let
$$\Phi := [\phi(x_1), \ldots, \phi(x_b)] \in \mathbb{R}^{t \times b},$$
then Equation (8) can be restated in matrix form:
$$\mathbb{R}^{t \times t} \ni C_{\phi(x)} = \frac{1}{b} (\Phi H) (\Phi H)^{\top},$$
where $H := I_b - \frac{1}{b} \mathbf{1}_b \mathbf{1}_b^{\top} \in \mathbb{R}^{b \times b}$ is the centering matrix.
Using the kernel trick, we define the kernel matrix $K \in \mathbb{R}^{b \times b}$ whose $(i,j)$-th element is:
$$K(x)[i, j] = k(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle_{\mathcal{H}}.$$
From Equation (9), the kernel matrix can be stated as:
$$K(x) = \Phi^{\top} \Phi.$$
The double-centered kernel matrix is:
$$\hat{K}(x) := H K(x) H \overset{(12)}{=} H \Phi^{\top} \Phi H \overset{(s)}{=} (\Phi H)^{\top} (\Phi H),$$
where step $(s)$ uses the fact that the centering matrix $H$ is symmetric. The double-centered kernel matrix will be used in the following.
The following explains the relation between the covariance operator and the double-centered kernel in RKHS. The squared Hilbert–Schmidt norm of the covariance operator in RKHS is:
$$\| C_{\phi(x)} \|_{\text{HS}}^2 = \mathrm{tr}\big( C_{\phi(x)}^{\top} C_{\phi(x)} \big) \overset{(10)}{=} \frac{1}{b^2} \mathrm{tr}\big( (\Phi H)(\Phi H)^{\top} (\Phi H)(\Phi H)^{\top} \big) \overset{(a)}{=} \frac{1}{b^2} \mathrm{tr}\big( (\Phi H)^{\top}(\Phi H) (\Phi H)^{\top}(\Phi H) \big) \overset{(13)}{=} \frac{1}{b^2} \mathrm{tr}\big( \hat{K}(x) \hat{K}(x) \big) = \frac{1}{b^2} \mathrm{tr}\big( \hat{K}(x)^2 \big),$$
where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix and step $(a)$ follows from the cyclic property of the trace operator. For easier numerical computation, it is possible to restate Equation (14) in terms of the Frobenius norm:
$$\| C_{\phi(x)} \|_{\text{HS}}^2 = \frac{1}{b^2} \mathrm{tr}\big( \hat{K}(x)^2 \big) = \frac{1}{b^2} \sum_{i,j} [\hat{K}(x)]_{i,j}^2 = \frac{1}{b^2} \| \hat{K}(x) \|_F^2,$$
whose off-diagonal part, $\| \hat{K}(x) \|_F^2 - \sum_{i=1}^{b} [\hat{K}(x)]_{i,i}^2$, is what the covariance loss below penalizes.
From Equation (14), the covariance operator in RKHS is proportional to the double-centered kernel matrix:
$$C_{\phi(x)} \propto \frac{1}{b} \hat{K}(x).$$
This key result enables the kernelization of VICReg’s variance and covariance regularization terms, as discussed next.
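As a concrete illustration of these building blocks, the following PyTorch sketch computes a batch Gram matrix and its double-centered version $\hat{K} = H K H$. The RBF kernel, the helper names (`rbf_kernel`, `double_center`), and the shapes are illustrative assumptions rather than the paper's reference code.

```python
import torch

def rbf_kernel(z1, z2, gamma):
    """RBF Gram matrix with entries k(z1_i, z2_j) = exp(-gamma * ||z1_i - z2_j||^2)."""
    return torch.exp(-gamma * torch.cdist(z1, z2).pow(2))

def double_center(K):
    """Double-centered kernel matrix K_hat = H K H with H = I - (1/b) 1 1^T (Equation (13))."""
    b = K.shape[0]
    H = torch.eye(b, device=K.device, dtype=K.dtype) - torch.ones(b, b, device=K.device, dtype=K.dtype) / b
    return H @ K @ H

# Example with illustrative shapes: a batch of b = 256 embeddings of dimension p = 128.
z = torch.randn(256, 128)
K = rbf_kernel(z, z, gamma=0.1)
K_hat = double_center(K)   # proportional to the RKHS covariance operator (Equation (16))
```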

3.2. Kernelized Variance Regularization

The variance regularization term in VICReg prevents representation collapse by ensuring that the variance along each feature dimension remains sufficiently large, i.e., above a threshold $\gamma$. In RKHS, the variances of the feature dimensions correspond to the eigenvalues of $C_{\phi(x)}$. Since $C_{\phi(x)}$ is proportional to $\hat{K}(x)$ according to Equation (16), we define the kernelized variance loss as:
$$\mathcal{L}_{\text{var}}(x) = \frac{1}{b} \sum_{i=1}^{b} \left[ \gamma - \sqrt{\frac{\lambda_i}{b} + \epsilon} \right]_+^2,$$
where $[\cdot]_+ := \max(\cdot, 0)$ is the standard hinge, $\{\lambda_i\}_{i=1}^{b}$ are the eigenvalues of $\hat{K}(x)$, $\gamma$ is a threshold for the minimum desired standard deviation, and $\epsilon$ is a small positive number preventing numerical instabilities. The proof of why $\lambda_i / b$ is understood as a variance in RKHS will be developed in Section 4.1.
It is noteworthy that computing the eigenvalues here is not a computational concern, because the double-centered kernel matrix is $b \times b$, where the batch size $b$ is usually not very large.
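A minimal sketch of this kernelized variance term (Equation (17)) is given below, assuming a double-centered kernel matrix such as the one produced by the `double_center` helper above; the default values of $\gamma$ and $\epsilon$ are illustrative.

```python
import torch

def kernel_variance_loss(K_hat, gamma=1.0, eps=1e-6):
    """Kernelized variance loss of Equation (17): hinge on sqrt(lambda_i / b + eps),
    where lambda_i are eigenvalues of the double-centered kernel matrix K_hat."""
    b = K_hat.shape[0]
    # Symmetrize before the eigendecomposition for numerical safety.
    eigvals = torch.linalg.eigvalsh(0.5 * (K_hat + K_hat.T))
    std = torch.sqrt(torch.clamp(eigvals, min=0.0) / b + eps)
    return torch.clamp(gamma - std, min=0.0).pow(2).mean()
```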

3.3. Kernelized Covariance Regularization

To prevent redundancy in representations, VICReg penalizes the off-diagonal elements of the covariance matrix (see Equation (5)). Building on Equation (15), the kernelized covariance loss can be defined as
$$\mathcal{L}_{\text{cov}}(x) = \| C_{\phi(x)} \|_{\text{HS}} = \frac{1}{b} \sqrt{ \| \hat{K}(x) \|_F^2 - \sum_{i=1}^{b} [\hat{K}(x)]_{i,i}^2 }.$$
Because of the direct relation between covariance and correlation, this regularization enforces decorrelation between features in RKHS.
The choice of using the Hilbert–Schmidt norm $\| C_{\phi(x)} \|_{\text{HS}}$ in Equation (18) rather than its square $\| C_{\phi(x)} \|_{\text{HS}}^2$ (as initially suggested by the expansion in Equation (15)) is a deliberate design choice aimed at improving optimization stability. Mathematically, if $\lambda_i$ are the singular values of the covariance operator, the squared norm $\| C_{\phi(x)} \|_{\text{HS}}^2 = \sum_i \lambda_i^2$ penalizes large correlations quadratically, which can lead to vanishing gradients for smaller correlation values during the late stages of training. By using the square-root form (the norm itself), the gradient magnitude remains more consistent:
$$\frac{\partial \| C_{\phi(x)} \|_{\text{HS}}}{\partial \theta} = \frac{1}{2 \, \| C_{\phi(x)} \|_{\text{HS}}} \, \frac{\partial \| C_{\phi(x)} \|_{\text{HS}}^2}{\partial \theta}.$$
Empirically, we observed that this formulation provides a more balanced optimization landscape, preventing the covariance term from being dominated by a few large off-diagonal correlations and ensuring that all dimensions of the RKHS embedding are decorrelated effectively. This is consistent with recent findings in SSL literature suggesting that normalization of loss terms can lead to smoother convergence and better numerical stability in high-dimensional feature spaces.
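A corresponding sketch of the kernelized covariance term (Equation (18)) is given below; the small constant added under the square root is a numerical-stability choice of this sketch, not part of the formulation.

```python
import torch

def kernel_covariance_loss(K_hat, eps=1e-8):
    """Kernelized covariance loss of Equation (18):
    (1/b) * sqrt(sum of squared off-diagonal entries of K_hat)."""
    b = K_hat.shape[0]
    off_diag_sq = K_hat.pow(2).sum() - torch.diagonal(K_hat).pow(2).sum()
    return torch.sqrt(torch.clamp(off_diag_sq, min=0.0) + eps) / b
```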

3.4. Kernelized Invariance Term

The invariance term of the loss function minimizes the mean squared error between corresponding samples $x$ and $x'$, i.e., different views of the same sample. Consider the following $(b \times b)$ kernel matrices:
$$K(x, x)[i, j] = k(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle_{\mathcal{H}}, \qquad K(x', x')[i, j] = k(x'_i, x'_j) = \langle \phi(x'_i), \phi(x'_j) \rangle_{\mathcal{H}}, \qquad K(x, x')[i, j] = k(x_i, x'_j) = \langle \phi(x_i), \phi(x'_j) \rangle_{\mathcal{H}}.$$
Given the kernel matrices $K(x, x)$ and $K(x', x')$ for the two augmented views and their cross-kernel matrix $K(x, x')$, the distance of the views in RKHS is defined as [16]:
$$\mathcal{L}_{\text{inv}}(x, x') = \frac{1}{b} \, \mathrm{tr}\Big( K(x, x) + K(x', x') - 2 K(x, x') \Big).$$
This loss term pulls the corresponding, i.e., augmented, instances toward each other in the RKHS and pushes non-corresponding instances away from each other. This enforces consistency across augmentations in RKHS.
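A sketch of this kernelized invariance term, taking the within-view and cross-view Gram matrices defined above as inputs, is given below; the argument names are illustrative.

```python
import torch

def kernel_invariance_loss(K_xx, K_xpxp, K_xxp):
    """Kernelized invariance loss: (1/b) * tr(K(x, x) + K(x', x') - 2 K(x, x'))."""
    b = K_xx.shape[0]
    return torch.trace(K_xx + K_xpxp - 2.0 * K_xxp) / b
```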

3.5. Overall Kernel VICReg Loss

Combining all three regularization terms, the final Kernel VICReg loss is given by:
$$\mathcal{L}_{\text{Kernel-VICReg}} = \alpha \, \mathcal{L}_{\text{inv}}(x, x') + \beta \big[ \mathcal{L}_{\text{var}}(x) + \mathcal{L}_{\text{var}}(x') \big] + \zeta \big[ \mathcal{L}_{\text{cov}}(x) + \mathcal{L}_{\text{cov}}(x') \big],$$
where $\alpha, \beta, \zeta$ are hyperparameters controlling the contributions of invariance, variance, and covariance. Note that the best values of the hyperparameters $\alpha, \beta, \zeta, \gamma, \epsilon$ differ between VICReg and Kernel VICReg, and they should be tuned per dataset, as is done in the original VICReg.
By reformulating VICReg in RKHS, our method enables self-supervised learning in high-dimensional implicit feature spaces without explicit feature extraction, making it a powerful framework for non-linear representation learning.
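Putting the pieces together, the following sketch assembles the overall loss. It assumes the `rbf_kernel`, `double_center`, and the three loss helpers from the previous sketches, and the default weights are illustrative rather than the tuned values reported in Table 4.

```python
import torch

def kernel_vicreg_loss(z, z_prime, alpha=25.0, beta=25.0, zeta=1.0,
                       gamma_rbf=0.1, gamma_var=1.0):
    """Overall Kernel VICReg loss (Equation (21)) built from the helpers sketched above."""
    K_xx = rbf_kernel(z, z, gamma_rbf)               # within-view Gram matrices
    K_xpxp = rbf_kernel(z_prime, z_prime, gamma_rbf)
    K_xxp = rbf_kernel(z, z_prime, gamma_rbf)        # cross-view Gram matrix

    K_hat_x = double_center(K_xx)                    # double-centered kernels (Eq. (13))
    K_hat_xp = double_center(K_xpxp)

    inv = kernel_invariance_loss(K_xx, K_xpxp, K_xxp)
    var = kernel_variance_loss(K_hat_x, gamma_var) + kernel_variance_loss(K_hat_xp, gamma_var)
    cov = kernel_covariance_loss(K_hat_x) + kernel_covariance_loss(K_hat_xp)
    return alpha * inv + beta * var + zeta * cov
```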

4. Discussions

4.1. Relation of Kernelized Variance Term with Kernel PCA

There is a close relation between the kernelized variance term in Kernel VICReg and kernel Principal Component Analysis (kernel PCA) [14]. In standard PCA, the eigenvalues of the covariance matrix $C \in \mathbb{R}^{p \times p}$ are obtained from the eigenvalue problem:
$$C u_i = \eta_i u_i,$$
where $\eta_i \in \mathbb{R}$ and $u_i \in \mathbb{R}^p$ are the $i$-th eigenvalue and eigenvector of the covariance matrix $C$, respectively.
In kernel PCA, the eigenvalue problem of the double-centered kernel matrix $\hat{K}(x) \in \mathbb{R}^{b \times b}$ is considered:
$$\hat{K}(x) v_i = \lambda_i v_i,$$
where $\lambda_i \in \mathbb{R}$ and $v_i \in \mathbb{R}^b$ are the $i$-th eigenvalue and eigenvector of the double-centered kernel matrix $\hat{K}(x)$, respectively.
Each eigenvector $v_i$ gives a principal direction in the feature space, $\phi(U) \in \mathbb{R}^{t \times b}$. According to the representer theorem, any solution in the RKHS lies in the span of the mapped training points [17]:
$$\phi(U) = \Phi A, \qquad \text{i.e.,} \qquad \phi(u_i) = \sum_{j=1}^{b} [\alpha_i]_j \, \phi(x_j) = \Phi \alpha_i,$$
where $\Phi \in \mathbb{R}^{t \times b}$ is defined in Equation (9) and $A := [\alpha_1, \ldots, \alpha_b] \in \mathbb{R}^{b \times b}$ is the matrix of coefficients of the linear combinations.
On the one hand, the variance of the centered batch projected onto the principal directions in the feature space is:
$$\mathrm{Var}(\phi(U)) = \frac{1}{b} \big\| \big( (\Phi H) A \big)^{\top} (\Phi H) \big\|_F^2 = \frac{1}{b} \mathrm{tr}\Big( \big( (\Phi H) A \big)^{\top} (\Phi H) (\Phi H)^{\top} \big( (\Phi H) A \big) \Big) \overset{(13)}{=} \frac{1}{b} \mathrm{tr}\big( A^{\top} \hat{K}(x)^2 A \big),$$
where the principal directions are the representer expansion of Equation (24) applied to the centered feature maps, i.e., $\phi(U) = (\Phi H) A$.
For one of the coefficient vectors, this equation becomes:
$$\mathrm{Var}(\phi(u_i)) = \frac{1}{b} \, \alpha_i^{\top} \hat{K}(x)^2 \alpha_i,$$
where the trace is dropped because the trace of a scalar equals the scalar itself.
Assume that the coefficient vector $\alpha_i$ is along the $i$-th eigenvector $v_i$ of the double-centered kernel matrix. Thus, the variance becomes:
$$\mathrm{Var}(\phi(u_i)) = \frac{1}{b} \, v_i^{\top} \hat{K}(x)^2 v_i.$$
Squaring the double-centered kernel matrix in Equation (23) gives:
$$\hat{K}(x)^2 v_i = \lambda_i^2 v_i.$$
Substituting Equation (27) into Equation (26) gives:
$$\mathrm{Var}(\phi(u_i)) = \frac{1}{b} \, \alpha_i^{\top} \lambda_i^2 \alpha_i = \frac{1}{b} \lambda_i^2 \, \alpha_i^{\top} \alpha_i = \frac{1}{b} \lambda_i^2 \, \| \alpha_i \|_2^2.$$
The coefficient vector is usually normalized such that $\lambda_i \| \alpha_i \|_2^2 = 1$ (so that $\| \phi(u_i) \|_{\mathcal{H}} = 1$):
$$\alpha_i = \frac{v_i}{\sqrt{\lambda_i}} \quad \Longrightarrow \quad \| \alpha_i \|_2^2 = \frac{1}{\lambda_i},$$
where the eigenvector $v_i$ is assumed to be normalized to unit length, i.e., $\| v_i \|_2 = 1$. According to Equation (29), Equation (28) becomes:
$$\mathrm{Var}(\phi(u_i)) = \frac{\lambda_i}{b}.$$
This proves why $\lambda_i / b$ is used as the variance in Equation (17).
On the other hand, left-multiplying Equation (23) by $v_i^{\top}$ gives:
$$v_i^{\top} \hat{K}(x) v_i = v_i^{\top} \lambda_i v_i = \lambda_i v_i^{\top} v_i = \lambda_i \| v_i \|_2^2 \overset{(a)}{=} \lambda_i,$$
where $(a)$ assumes that the eigenvector $v_i$ is normalized to unit length. Therefore, if $\| v_i \|_2 = 1$, then:
$$\lambda_i = v_i^{\top} \hat{K}(x) v_i.$$
Substituting Equation (31) into Equation (30) gives:
$$\mathrm{Var}(\phi(u_i)) = \frac{1}{b} \, v_i^{\top} \hat{K}(x) v_i.$$
This analysis shows that the kernelized variance regularization in the proposed Kernel VICReg can be viewed as kernel PCA. A similar analysis shows that the variance regularization term in the original VICReg can be interpreted through PCA.
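The correspondence can be checked numerically. The following sketch uses an explicit degree-2 polynomial feature map as a finite-dimensional stand-in for $\phi$ (an illustrative assumption) and verifies that the variance of the centered batch along a kernel-PCA direction equals $\lambda_i / b$, in line with Equation (30).

```python
import torch

torch.manual_seed(0)
b, d = 64, 5
X = torch.randn(b, d, dtype=torch.float64)

# Explicit degree-2 polynomial feature map so the feature space is finite-dimensional
# and the check can be done directly in feature space.
def phi(X):
    quad = torch.einsum("bi,bj->bij", X, X).reshape(len(X), -1)
    return torch.cat([X, quad], dim=1)

Phi = phi(X)                                           # (b, t)
H = torch.eye(b, dtype=torch.float64) - torch.ones(b, b, dtype=torch.float64) / b
K_hat = H @ (Phi @ Phi.T) @ H                          # double-centered Gram matrix (Eq. (13))

lam, V = torch.linalg.eigh(K_hat)                      # ascending eigenvalues
lam, V = lam.flip(0), V.flip(1)                        # sort descending

# Kernel-PCA direction in feature space: u_i = (Phi^T H) v_i / sqrt(lambda_i), unit norm.
Phi_c = H @ Phi                                        # centered feature maps (rows)
i = 0
u_i = Phi_c.T @ V[:, i] / torch.sqrt(lam[i])

# Variance of the centered batch projected onto u_i equals lambda_i / b (Eq. (30)).
proj = Phi_c @ u_i
print(torch.allclose(proj @ proj / b, lam[i] / b))     # True (up to numerical error)
```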

4.2. Connection to HSIC and Independence

The squared Hilbert–Schmidt norm of the RKHS covariance operator (used in our covariance loss) is closely related to the Hilbert–Schmidt Independence Criterion (HSIC), a well-established kernel-based dependence measure. In this view, our covariance loss can be interpreted as minimizing feature dependence in RKHS, encouraging the learning of diverse and disentangled features. This theoretical grounding strengthens the regularization effect of the Kernel VICReg loss beyond simple decorrelation.

4.3. Kernel Choice as Inductive Bias

Different kernels induce different geometric priors. For example, the RBF kernel emphasizes local smoothness, the Laplacian kernel allows sharper decision boundaries, and the rational quadratic interpolates between them. Our experiments reveal that no single kernel is optimal across all datasets; instead, performance depends on the match between the dataset structure and the kernel-induced geometry. This makes Kernel VICReg not only robust but also adaptable to task-specific data distributions. A compatible extension to reduce sensitivity to a single kernel choice is to use kernel mixtures, e.g., $k = \sum_m w_m k_m$ with $w_m \ge 0$ and $\sum_m w_m = 1$, or more broadly multiple kernel learning (MKL). This is orthogonal to our main contribution (lifting VICReg into RKHS), since the objective depends on Gram matrices and can directly operate on a mixture Gram matrix, as sketched below.
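A minimal sketch of such a mixture Gram matrix is given below; the normalization of the weights and the specific kernels in the usage example are illustrative choices.

```python
import torch

def mixture_gram(z1, z2, kernel_fns, weights):
    """Gram matrix of a convex kernel mixture k = sum_m w_m k_m (w_m >= 0, sum_m w_m = 1).
    `kernel_fns` is a list of callables mapping two batches to a (b, b) Gram matrix;
    `weights` are nonnegative mixture weights, normalized here onto the simplex
    (they could also be learned jointly, e.g., as a softmax over logits)."""
    w = torch.as_tensor(weights, dtype=z1.dtype, device=z1.device).clamp(min=0.0)
    w = w / w.sum()
    return sum(wi * fn(z1, z2) for wi, fn in zip(w, kernel_fns))

# Usage sketch: mix an RBF and a Laplacian-style kernel on a random batch of embeddings.
z = torch.randn(128, 64)
rbf = lambda a, b: torch.exp(-0.1 * torch.cdist(a, b).pow(2))
lap = lambda a, b: torch.exp(-0.1 * torch.cdist(a, b, p=1))
K_mix = mixture_gram(z, z, [rbf, lap], weights=[0.7, 0.3])
```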

4.4. Comparison of Approaches in Kernel VICReg and Graph-Based Embedding

There exist recent works on kernel-graph integration and spectral clustering (e.g., [18,19]) analyzing how kernel methods interact with graph structure learning and spectral objectives. These approaches focus primarily on unsupervised clustering or graph-based embedding construction. In contrast, our work does not construct or learn a graph structure; instead, we lift a non-contrastive self-supervised regularization objective into RKHS. The role of kernels here is not spectral clustering, but redefining variance and covariance operators in an implicit feature space. Our formulation in Kernel VICReg is therefore complementary rather than competitive with kernel–graph hybrids.

4.5. Comparison of Optimization Objective in Kernel VICReg and Variational Inference

Although our formulation pulls the VICReg objective to an RKHS via kernel covariance operators, the resulting loss remains a deterministic, differentiable functional of the network parameters. Unlike variational self-supervised methods that introduce latent variables and evidence lower bounds (ELBO), our objective does not involve probabilistic latent modeling or variational inference.
Concretely, the kernelized variance and covariance terms are computed through empirical covariance operators constructed from mini-batch embeddings. These operators depend smoothly on the encoder parameters, and gradients are obtained via automatic differentiation.
Therefore, the optimization problem is a standard stochastic minimization of the deterministic loss in Equation (21), carried out with stochastic gradient descent. No variational bound, alternating optimization, or EM-style procedure is introduced by the RKHS lifting.

4.6. Theoretical Properties of Kernel VICReg

4.6.1. Non-Collapse in RKHS

Proposition 1
(Non-Collapse in RKHS). Let $\hat{K}(x)$ denote the double-centered kernel matrix of a batch. If the kernelized variance regularization enforces:
$$\frac{\lambda_i}{b} \ge \gamma > 0, \quad \forall i \in \{1, \ldots, b\},$$
where $\{\lambda_i\}_{i=1}^{b}$ are the eigenvalues of $\hat{K}(x)$, then the covariance operator $C_{\phi(x)}$ in RKHS is strictly positive definite on the span of the batch, and representational collapse (i.e., a rank-one embedding) is prevented.
Proof. 
From Equation (16), we have $C_{\phi(x)} \propto (1/b) \hat{K}(x)$. If all eigenvalues satisfy $\lambda_i > 0$, then $\hat{K}$ is full rank on the batch span. Thus the covariance operator has a strictly positive spectrum, implying that no direction in RKHS has zero variance. A collapsed representation corresponds to $\mathrm{rank}(\hat{K}) = 1$, which contradicts the enforced lower bound. □
Remark 1.
Proposition 1 demonstrates that Kernel VICReg enforces spectral spread in RKHS, while Euclidean VICReg only enforces coordinate-wise variance. This is a theoretical distinction between VICReg and Kernel VICReg.

4.6.2. Nonlinear Variance Capture in RKHS

Theorem 1
(Nonlinear Variance Capture in RKHS). Let $\mathcal{M} \subset \mathbb{R}^p$ be a compact nonlinear manifold. Assume that $\mathcal{M}$ is not contained in any proper affine subspace of $\mathbb{R}^p$, but its nonlinear structure cannot be captured by second-order Euclidean statistics (i.e., PCA does not linearize $\mathcal{M}$).
Let $k$ be a universal kernel (e.g., Gaussian RBF or Laplacian) with feature map $\phi: \mathbb{R}^p \to \mathcal{H}$. Then:
1. The feature map $\phi$ is injective on $\mathcal{M}$.
2. The image $\phi(\mathcal{M})$ lies in a linear subspace of $\mathcal{H}$ whose covariance operator encodes the nonlinear structure of $\mathcal{M}$.
3. The eigenvalues of the centered kernel matrix correspond to nonlinear principal components of $\mathcal{M}$ (kernel PCA).
4. Therefore, enforcing lower bounds on the eigenvalues of $\hat{K}(x)$ preserves nonlinear modes of variation that are invisible to Euclidean covariance regularization.
Proof. 
See Appendix A for proof. □
Remark 2.
Theorem 1 connects Kernel VICReg to kernel PCA theory, as was also discussed in Section 4.1. This theorem does not claim that RKHS variance always strictly dominates Euclidean variance, but rather that for universal kernels, nonlinear structure becomes linearly representable in feature space, allowing spectral regularization to act on intrinsic manifold directions.

4.6.3. Spectral Stability in Small Batches

Theorem 2
(Spectral Stability of Centered Kernel Matrices). Let $\{x_i\}_{i=1}^{b}$ be i.i.d. samples drawn from a distribution $\mathcal{D}$. Let $k$ be a bounded kernel satisfying:
$$0 \le k(x, x) \le \kappa^2, \quad \forall x.$$
Let $K_b$ denote the $b \times b$ Gram matrix, and let $\hat{K}_b = H K_b H$ be its double-centered version, where $H = I_b - (1/b) \mathbf{1} \mathbf{1}^{\top}$. Let $\Sigma := \mathbb{E}[\hat{K}_b]$ denote the population centered Gram operator restricted to the sample span. Then, for any $\delta \in (0, 1)$, with probability at least $1 - \delta$, we have:
$$\big\| \hat{K}_b - \Sigma \big\|_2 \le c \, \kappa^2 \left( \sqrt{\frac{\log(2b/\delta)}{b}} + \frac{\log(2b/\delta)}{b} \right),$$
for some universal constant c > 0 .
Proof. 
See Appendix B for proof. □
The bound (35) shows that eigenvalue estimates in RKHS concentrate at rate $O(1/\sqrt{b})$, providing stability guarantees for small-batch regimes.
Corollary 1.
Under the above conditions, each eigenvalue $\lambda_i$ of $\hat{K}_b$ concentrates around its population counterpart at rate $O(1/\sqrt{b})$. This provides stability guarantees for small-batch regimes where $b$ is not very large.

4.7. Scalability and Large-Scale Approximations

A primary concern in kernel-based methods is the computational complexity associated with the Gram (kernel) matrix. In Kernel VICReg, the construction of the kernel matrix $K \in \mathbb{R}^{b \times b}$ and the eigenvalue decomposition required for the variance loss incur complexities of $O(b^2)$ and $O(b^3)$, respectively, where $b$ is the batch size. While modern hardware efficiently handles standard batch sizes (e.g., $b = 256$ to 2048), scaling to “cognitive computing” levels with massive datasets requires approximation strategies.

4.7.1. Scalability by Nyström Method

To address this, Kernel VICReg can be integrated with the Nyström method [20,21], which approximates the full kernel matrix using $m$ landmark points ($m \ll b$):
$$\tilde{K} := K_{b,m} K_{m,m}^{-1} K_{m,b},$$
where $K_{b,m}$ is the matrix of kernel evaluations between the batch samples and the landmarks. This reduces the complexity to $O(b m^2)$.
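A sketch of this idea is given below, assuming an RBF kernel and uniformly sampled landmarks (both illustrative choices); working with the Nyström factor, rather than materializing the full approximated Gram matrix, is what keeps the cost at $O(b m^2)$.

```python
import torch

def nystrom_factor(z, landmarks, kernel_fn, ridge=1e-6):
    """Nystrom factor L such that K_tilde = K_bm K_mm^{-1} K_mb = L L^T.
    Working with L (shape (b, m)) keeps the cost at O(b m^2): the nonzero eigenvalues
    of K_tilde equal those of the (m, m) matrix L^T L. The ridge added to K_mm is a
    numerical-stability choice of this sketch."""
    K_bm = kernel_fn(z, landmarks)                     # (b, m) kernel evaluations
    K_mm = kernel_fn(landmarks, landmarks)             # (m, m)
    m = K_mm.shape[0]
    evals, evecs = torch.linalg.eigh(K_mm + ridge * torch.eye(m, dtype=z.dtype, device=z.device))
    K_mm_inv_sqrt = evecs @ torch.diag(evals.clamp(min=ridge).rsqrt()) @ evecs.T
    return K_bm @ K_mm_inv_sqrt                        # L, shape (b, m)

# Usage sketch: m = 64 landmarks sampled uniformly from the batch (the selection strategy is a choice).
z = torch.randn(4096, 128)
rbf = lambda a, b: torch.exp(-0.1 * torch.cdist(a, b).pow(2))
landmarks = z[torch.randperm(z.shape[0])[:64]]
L = nystrom_factor(z, landmarks, rbf)
eigvals_K_tilde = torch.linalg.eigvalsh(L.T @ L)       # the m nonzero eigenvalues of K_tilde
```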

4.7.2. Scalability by Random Fourier Features

Alternatively, Random Fourier Features (RFF) [22] can be employed for shift-invariant kernels (e.g., the RBF kernel). RFF maps the embeddings $z$ to a low-dimensional randomized feature space $\Phi(z) \in \mathbb{R}^D$ such that the kernel is approximated by a linear inner product:
$$k(z_i, z_j) \approx \Phi(z_i)^{\top} \Phi(z_j), \qquad \Phi(z) = \sqrt{\frac{2}{D}} \big[ \cos(\omega_1^{\top} z + \beta_1), \ldots, \cos(\omega_D^{\top} z + \beta_D) \big]^{\top},$$
where $\{\omega_i\}_{i=1}^{D}$ are sampled from the kernel's spectral density. By utilizing RFF, the Kernel VICReg objective simplifies to a linear form in the $\Phi$-space, achieving $O(bD)$ complexity. This ensures linear scalability relative to the batch size, making the framework suitable for large-scale distributed applications.
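A minimal sketch of RFF for the RBF kernel is given below; the feature dimension $D$, the bandwidth, and the sanity check are illustrative.

```python
import torch

def rff_features(z, D=1024, gamma=0.1, seed=0):
    """Random Fourier Features for the RBF kernel k(z_i, z_j) = exp(-gamma * ||z_i - z_j||^2):
    Phi(z) = sqrt(2/D) * cos(z W + beta), with W ~ N(0, 2*gamma) and beta ~ U[0, 2*pi],
    so that Phi(z_i) . Phi(z_j) approximates k(z_i, z_j). Defaults are illustrative."""
    g = torch.Generator().manual_seed(seed)
    p = z.shape[1]
    W = torch.randn(p, D, generator=g) * (2.0 * gamma) ** 0.5
    beta = torch.rand(D, generator=g) * 2.0 * torch.pi
    return (2.0 / D) ** 0.5 * torch.cos(z @ W + beta)

# Sanity check: the linear inner product in Phi-space approximates the exact Gram matrix.
z = torch.randn(256, 8)
Phi = rff_features(z)                                  # (b, D) features, O(bD) per batch
K_approx = Phi @ Phi.T
K_exact = torch.exp(-0.1 * torch.cdist(z, z).pow(2))
print((K_approx - K_exact).abs().max())                # small approximation error
```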

4.8. Ethical Considerations and Potential Biases

While Kernel VICReg provides a robust framework for nonlinear representation learning, it is essential to consider the ethical implications regarding data bias. Kernel methods are inherently sensitive to the distribution of the training data. In domains such as medical imaging or cognitive computing, if a specific group is underrepresented, the kernel matrix K may fail to capture the local geometry of that sub-population, potentially leading to “feature exclusion”, where the RKHS mapping reinforces existing biases.
Furthermore, the choice of kernel (e.g., the RBF kernel width σ ) acts as a scale-dependent filter. If the data from minority groups exhibits different variance scales, a global kernel parameter might sub-optimally encode their features compared to the majority group. To mitigate these risks in sensitive applications, we suggest the use of adaptive kernels or group-fairness constraints within the variance-covariance terms, ensuring that the geometric embedding remains equitable across diverse demographic groups.

5. Experimental Results

We evaluate Kernel VICReg on a range of benchmark datasets to assess its ability to learn rich, non-linear representations in self-supervised settings. Our experiments span small-scale datasets (MNIST, CIFAR-10), mid-scale transfer learning (STL-10), and large-scale benchmarks (TinyImageNet, ImageNet100). For all experiments, we use a ResNet-18 backbone as the encoder, followed by a two-layer MLP projector. The model is trained using the Adam optimizer with an initial learning rate of $3 \times 10^{-4}$, a batch size of 256, and cosine learning rate scheduling. Each dataset is augmented following standard protocols: random cropping, horizontal flipping, and color jitter for natural image datasets.
To investigate the effect of different kernels in the Reproducing Kernel Hilbert Space (RKHS), we implement Kernel VICReg using four kernels: linear, radial basis function (RBF), Laplacian, and rational quadratic (RQ). The kernel matrices are computed batch-wise, with double-centering applied to ensure zero-mean embeddings in RKHS. We evaluate the learned representations using linear probing, where a logistic regression classifier is trained atop frozen embeddings, and transfer learning, where encoders pretrained on CIFAR-10 are evaluated on STL-10.

5.1. Comparison with Baselines

We compare Kernel VICReg against a diverse set of prominent self-supervised learning methods, including contrastive (SimCLR, MoCo), clustering-based (SwAV), and non-contrastive frameworks (BYOL, DINO, SimSiam, Barlow Twins, and VICReg). All baselines are implemented with the same ResNet-18 encoder to ensure fairness, and we reuse reported numbers or reproduce them when necessary using identical augmentation and optimization settings.
While the field of self-supervised learning (SSL) is rapidly evolving with newer non-contrastive architectures, the primary objective of this work is to provide a theoretical and methodological framework for lifting the VICReg objective into the Reproducing Kernel Hilbert Space (RKHS). Our evaluations focus on comparing Kernel VICReg against its direct Euclidean counterpart and established foundational SSL baselines (e.g., SimCLR, Barlow Twins, and VICReg) to isolate the impact of kernelization. By demonstrating consistent improvements over the original VICReg across multiple datasets, we establish the validity of the proposed kernelized variance-invariance-covariance constraints.
Table 1 summarizes the top-1 linear evaluation accuracy on ImageNet100 and TinyImageNet. While VICReg exhibits competitive performance on ImageNet100, it collapses on TinyImageNet due to its sensitivity to small datasets with high intra-class variance. In contrast, Kernel VICReg remains stable across all settings, with the Laplacian and RQ kernels achieving the best performance, demonstrating the robustness of RKHS-based regularization.
Table 2 presents results on MNIST and CIFAR-10. Kernel VICReg consistently outperforms its Euclidean counterpart, particularly on MNIST, where the Laplacian kernel reaches 98.50 % accuracy. On CIFAR-10, the RQ kernel achieves the best performance at 86.18 % , indicating that kernel choice adapts to data complexity.
Finally, Table 3 reports transfer learning results on STL-10 using encoders pretrained on CIFAR-10. Kernel VICReg transfers better than VICReg, highlighting its generalization capabilities in low-label regimes.
Note that while several variations and incremental improvements to the VICReg architecture have been proposed since its inception in 2022, this study focuses on the foundational task of extending the core VICReg objective into the RKHS. By comparing our method directly against the standard Euclidean VICReg in Table 2 and Table 3, we isolate the performance gains attributable to the kernelized covariance and variance constraints. This controlled comparison is essential for validating the theoretical derivation of Kernel VICReg.

5.2. Further Analysis and Insights

To better understand how kernelization affects the structure of learned representations, we visualize the embedding spaces on the MNIST dataset using UMAP for three models: original VICReg, Kernel VICReg with a linear kernel, and Kernel VICReg with a Laplacian kernel (see Figure 1). The UMAP plots reveal differences in cluster geometry across these methods. Representations from standard VICReg exhibit some class separation, but the clusters are elongated and lack compactness, suggesting anisotropic variance and potential feature collapse. In VICReg, the red cluster is split into two separate clusters; this is not the case for our method.
Kernel VICReg with a linear kernel improves upon this, producing tighter and more separated clusters, indicating that even without explicit nonlinearity in the kernel, RKHS-based decorrelation provides better structure. However, the most striking improvement appears with the Laplacian kernel: clusters become nearly circular and uniformly spaced, exhibiting strong isometry. This implies that the Laplacian-induced RKHS preserves pairwise relations and local structure more effectively, leading to embeddings with more consistent intra-class variance and improved inter-class margins.
These visualizations are qualitative; quantitative comparisons are provided by the linear-probe and transfer results in Table 1 and Table 3. As shown earlier in Table 1, Kernel VICReg maintains stable and competitive accuracy across both large-scale (ImageNet100) and small-scale (TinyImageNet) benchmarks. Notably, VICReg collapses on TinyImageNet, consistent with its known sensitivity to datasets exhibiting high intra-class variance or insufficient regularization. In contrast, kernelized versions (especially with Laplacian and rational quadratic kernels) perform robustly, demonstrating the benefits of nonlinear geometric alignment in RKHS.
We further analyze robustness and sensitivity to kernel hyperparameters. Figure 2 shows that polynomial kernels can be highly sensitive to $(d, c_0)$ under distribution shifts, while Figure 3 demonstrates that RBF performance is non-monotonic in the bandwidth parameter $\gamma$.
Furthermore, the results in Table 3 highlight the generalization strength of kernel-based methods in transfer learning. Embeddings trained on CIFAR-10 and evaluated on STL-10 show that the RQ and Laplacian kernels outperform VICReg and even the linear kernel variant. These findings support the hypothesis that kernel-induced representations better capture underlying data manifolds, resulting in improved performance on downstream tasks with distributional shifts.
Overall, Kernel VICReg offers a principled extension of VICReg that gracefully incorporates nonlinearity through RKHS-based loss formulations. The improved cluster geometry, resilience to collapse, and higher transfer accuracy together suggest that kernelized self-supervision is a promising direction for representation learning beyond Euclidean limitations.

5.3. Implementation Details

We implemented Kernel VICReg in PyTorch 2.6.0 with a modular design that allows kernel choice and parameter tuning through command-line flags. The training pipeline consists of three components: (i) a backbone encoder (either ResNet-18 or a simple CNN), (ii) a multi-layer perceptron projector, and (iii) the proposed kernelized VICReg loss. Two augmented views are generated per sample following standard SSL protocols, and their embeddings are compared via the kernel-based losses.
Kernels and Centering. We implemented five kernels (linear, polynomial, radial basis function (RBF), Laplacian, and rational quadratic (RQ)) with automatic double-centering to operate in RKHS. For scale-sensitive kernels (RBF, Laplacian, and RQ), the bandwidth parameter γ is adaptively estimated using the median heuristic unless explicitly specified.
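One common variant of the median heuristic is sketched below; the exact convention (e.g., squared versus unsquared pairwise distances) is an assumption of this sketch rather than a statement of the implemented code.

```python
import torch

def median_heuristic_gamma(z):
    """Median-heuristic bandwidth: gamma = 1 / median(||z_i - z_j||^2) over distinct pairs
    (one common convention; other variants use the unsquared distance)."""
    sq_dists = torch.cdist(z, z).pow(2)
    mask = ~torch.eye(z.shape[0], dtype=torch.bool, device=z.device)
    return 1.0 / sq_dists[mask].median().clamp(min=1e-12)

# Usage sketch: adapt the RBF bandwidth to the current batch of embeddings.
z = torch.randn(256, 128)
gamma = median_heuristic_gamma(z)
K = torch.exp(-gamma * torch.cdist(z, z).pow(2))
```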
Kernel VICReg Loss. The loss consists of three terms: (i) invariance, computed as the trace distance between within-view and cross-view Gram matrices; (ii) variance, computed from eigenvalues of the double-centered kernel matrix, penalizing directions with variance below a threshold; and (iii) covariance, computed as the squared Hilbert–Schmidt norm of the covariance operator, equivalent to the sum of squared off-diagonal entries in the kernel covariance. The implementation returns all three contributions separately, allowing monitoring of invariance, variance, and covariance during training.
Training Setup. We conducted experiments on MNIST, CIFAR-10, STL-10, TinyImageNet, and ImageNet100. Unless otherwise noted, the encoder was a ResNet-18 backbone (with adjusted stem for CIFAR) followed by a 3-layer MLP projector with hidden dimension 1024. Models were trained using the Adam optimizer with a learning rate of $10^{-3}$, a batch size of 512, and a cosine learning rate schedule. Augmentations followed standard SSL protocols: random crop, color jitter, blur, and horizontal flip for natural images; affine transforms for MNIST.
Hyperparameters. Table 4 reports the best-performing ( α , β , ζ ) found under our limited search for each dataset/kernel. We include these values to document the observed dataset dependence rather than to claim a single universally optimal setting. This dependence is expected because the kernel acts as an inductive bias that changes the geometry of the objective in RKHS.
Practical kernel-selection heuristics and reduced tuning. As a concise rule of thumb, Laplacian typically favors sharper/local structure (edge- and texture-dominated data), RBF favors smoother geometry and can be more forgiving under higher noise, and RQ is a practical middle ground when both local and global (multi-scale) structure matter. To reduce tuning burden in practice, one can fix α to a standard VICReg-scale value and perform small one-dimensional sweeps over β (to prevent spectral collapse relative to γ ) and ζ (to discourage redundancy), rather than jointly grid-searching all ( α , β , and ζ ) .
Evaluation. Representation quality was measured via linear probing: embeddings from the frozen encoder were extracted, and a linear classifier was trained with SGD for 100 epochs. In addition, we visualized embedding geometry using UMAP and tracked the top-eigenvalue dynamics of the centered kernel matrix across epochs to study variance behavior.

5.4. Computational Complexity and Empirical Overhead

While Kernel VICReg introduces a nonlinear mapping via the RKHS, it is important to quantify the overhead relative to the standard Euclidean VICReg. Let $b$ denote the batch size and $p$ the dimensionality of the embeddings. Standard VICReg computes a covariance matrix in $O(b p^2 + p^3)$, whereas Kernel VICReg computes the Gram matrix and its eigenvalue decomposition in $O(b^2 p + b^3)$. Figure 4 and Figure 5 further quantify how kernel choice scales in latency and memory as embedding dimension and batch size increase.
The results indicate that for standard SSL batch sizes ($b \le 2048$), the overhead is marginal. As discussed in Section 4.7, for larger “cognitive computing” scales where $b$ increases significantly, the $O(b^3)$ bottleneck can be effectively mitigated using Nyström approximations or Random Fourier Features, which reduce the complexity back to a linear or quasi-linear relationship with batch size.

6. Conclusions

We introduced Kernel VICReg, a principled extension of VICReg that pulls self-supervised learning objectives from Euclidean space to Reproducing Kernel Hilbert Space (RKHS). By reformulating invariance, variance, and covariance terms in kernel space using double-centered kernel matrices and Hilbert–Schmidt norms, our method captures complex nonlinear structures without requiring explicit feature mappings.
Our empirical results across diverse datasets demonstrate the robustness and effectiveness of this kernelized formulation. Kernel VICReg outperforms its Euclidean counterpart, particularly in challenging regimes such as TinyImageNet, where standard VICReg collapses. Moreover, kernel-induced representations exhibit superior generalization in transfer learning tasks, as evidenced by improvements on STL-10. Visualization through UMAP further reveals that kernelization promotes more compact, isometric cluster structures, especially under the Laplacian kernel.
These findings suggest that kernel methods offer a natural and powerful means to enhance self-supervised learning. While our work focuses on VICReg, the framework readily extends to other SSL objectives such as Barlow Twins and SimCLR, opening promising avenues for future research in kernelized SSL. This study contributes to bridging classical kernel theory and modern representation learning by showing that the integration of RKHS structure meaningfully improves both stability and expressiveness in self-supervised models.

Author Contributions

Conceptualization, M.H.S., B.G. and P.F.; methodology, M.H.S. and B.G.; software, M.H.S.; validation, M.H.S. and B.G.; formal analysis, M.H.S. and B.G.; investigation, M.H.S. and B.G.; writing—original draft preparation, M.H.S. and B.G.; writing—review and editing, M.H.S., B.G., S.M. and P.F.; visualization, M.H.S.; supervision, P.F.; project administration, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grants Program.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available in publicly accessible repositories: MNIST (http://yann.lecun.com/exdb/mnist/ (accessed on 1 August 2025)), CIFAR-10 (https://www.cs.toronto.edu/~kriz/cifar.html (accessed on 1 August 2025)), STL-10 (https://cs.stanford.edu/~acoates/stl10/ (accessed on 1 August 2025)), TinyImageNet (https://www.kaggle.com/c/tiny-imagenet (accessed on 1 August 2025)), and ImageNet (https://www.image-net.org/ (accessed on 1 August 2025)). ImageNet100 is a subset of ImageNet constructed for evaluation in this study; no new data were created.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof for Theorem 1

Step 1: Injectivity of Universal Kernels.
A kernel $k$ on a compact domain is universal if its RKHS is dense in $C(\mathcal{M})$ with respect to the supremum norm. For universal kernels such as the Gaussian RBF and Laplacian kernels, the associated feature map $\phi$ is injective; that is:
$$x \neq y \;\Longrightarrow\; \phi(x) \neq \phi(y).$$
Hence, the embedding ϕ : M H is one-to-one and preserves all information about the manifold.
Step 2: Linearization of Nonlinear Structure.
Although M may be nonlinear in R p , its image ϕ ( M ) lies in a (possibly infinite-dimensional) Hilbert space H , where linear combinations are permitted. Kernel PCA theory [15] shows that principal components in RKHS correspond to eigenfunctions of the covariance operator:
$$C_{\phi(x)} = \frac{1}{b} \sum_{i=1}^{b} \big( \phi(x_i) - \bar{\phi} \big) \big( \phi(x_i) - \bar{\phi} \big)^{\top}.$$
The eigenvalue problem in $\mathcal{H}$ reduces to the finite-dimensional eigenvalue problem of the centered kernel matrix:
$$\hat{K}(x) v_i = \lambda_i v_i.$$
Thus, the eigenvalues $\{\lambda_i\}$ represent variances along nonlinear principal directions in feature space.
Step 3: Failure of Euclidean Covariance.
In Euclidean space, covariance captures only second-order linear variation:
$$C_{\text{Euc}} = \frac{1}{b} Z^{\top} H Z.$$
If M exhibits curvature or nonlinear embedding (e.g., Swiss roll or concentric circles), Euclidean PCA cannot flatten the manifold; variance is spread across multiple coordinates without revealing intrinsic nonlinear directions. Thus, Euclidean variance regularization may fail to preserve intrinsic manifold modes.
Step 4: RKHS Variance Captures Nonlinear Modes.
In contrast, kernel PCA diagonalizes the covariance operator in $\mathcal{H}$. Since universal kernels generate feature spaces dense in $C(\mathcal{M})$, nonlinear structure becomes linearly separable in $\mathcal{H}$. Therefore, the eigenvalues $\{\lambda_i\}$ of $\hat{K}(x)$ quantify nonlinear principal variances of $\mathcal{M}$. Enforcing $\lambda_i / b \ge \gamma^2 > 0$ ensures preservation of nonlinear modes of variation.
Conclusion.
Because RKHS covariance eigenvalues correspond to nonlinear principal components, kernelized variance regularization preserves nonlinear structure that Euclidean covariance cannot capture.

Appendix B. Proof for Theorem 2

Step 1: Decomposition of the Gram Matrix.
Define the feature map $\phi(x)$ in the RKHS $\mathcal{H}$. The (uncentered) Gram matrix satisfies:
$$K_b[i, j] = \langle \phi(x_i), \phi(x_j) \rangle.$$
Let $\Phi = [\phi(x_1), \ldots, \phi(x_b)]$. Then, we have $K_b = \Phi^{\top} \Phi$. The centered Gram matrix is:
$$\hat{K}_b = H \Phi^{\top} \Phi H = (\Phi H)^{\top} (\Phi H).$$
Thus, K ^ b is the finite-sample covariance operator expressed in kernel coordinates.
Step 2: Operator Representation.
Define centered feature vectors:
$$\tilde{\phi}(x_i) = \phi(x_i) - \mu, \qquad \mu = \mathbb{E}[\phi(x)].$$
The population covariance operator in RKHS is $C(x) := \mathbb{E}\big[ \tilde{\phi}(x) \tilde{\phi}(x)^{\top} \big]$. The empirical covariance operator is $C_b(x) = \frac{1}{b} \sum_{i=1}^{b} \tilde{\phi}(x_i) \tilde{\phi}(x_i)^{\top}$, as also stated in Equation (10). Moreover, according to the definition in the theorem, we have $\Sigma := \mathbb{E}[\hat{K}_b]$. Thus:
$$\hat{K}_b = b \, C_b, \qquad \Sigma = b \, C.$$
Thus, bounding $\| \hat{K}_b - \Sigma \|_2$ reduces to bounding the operator deviation $\| C_b - C \|_2$.
Step 3: Boundedness.
According to Equation (34), the kernel is bounded:
$$\| \phi(x) \|_{\mathcal{H}}^2 = k(x, x) \le \kappa^2,$$
therefore,
$$\| \tilde{\phi}(x) \|_{\mathcal{H}} \le 2 \kappa.$$
Hence, each rank-one operator,
$$X_i := \tilde{\phi}(x_i) \tilde{\phi}(x_i)^{\top},$$
satisfies
$$\| X_i \|_2 = \| \tilde{\phi}(x_i) \|^2 \le 4 \kappa^2.$$
Step 4: Apply Matrix Bernstein Inequality.
The operators
$$Y_i = X_i - \mathbb{E}[X_i]$$
are independent, mean-zero, self-adjoint operators. The matrix Bernstein inequality [24] states that for such operators:
$$\Pr\left( \Big\| \frac{1}{b} \sum_{i=1}^{b} Y_i \Big\|_2 \ge t \right) \le 2b \exp\left( \frac{- b t^2 / 2}{\sigma^2 + R t / 3} \right),$$
where
$$R = \max_i \| Y_i \|_2 \le 8 \kappa^2,$$
and the variance proxy satisfies
$$\sigma^2 \le \frac{1}{b} \sum_{i=1}^{b} \mathbb{E}\big[ \| Y_i \|_2^2 \big] \le 4 \kappa^4.$$
Solving the inequality for failure probability $\delta$ gives:
$$\| C_b - C \|_2 \le c \, \kappa^2 \left( \sqrt{\frac{\log(2b/\delta)}{b}} + \frac{\log(2b/\delta)}{b} \right).$$
Step 5: Transfer to Kernel Matrix.
According to Step 2 of the proof, bounding $\| \hat{K}_b - \Sigma \|_2$ reduces to bounding the operator deviation $\| C_b - C \|_2$. Therefore, we obtain the same deviation bound for $\| \hat{K}_b - \Sigma \|_2$.

References

  1. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning; PMLR: Cambridge, MA, USA, 2020; pp. 1597–1607. [Google Scholar]
  2. Bardes, A.; Ponce, J.; LeCun, Y. VICReg: Variance-invariance-covariance regularization for self-supervised learning. In Proceedings of the International Conference on Learning Representations, Virtual, 25–29 April 2022. [Google Scholar]
  3. Grill, J.B.; Strub, F.; Altché, F.; Tallec, C.; Richemond, P.; Buchatskaya, E.; Doersch, C.; Avila Pires, B.; Guo, Z.; Gheshlaghi Azar, M.; et al. Bootstrap your own latent-a new approach to self-supervised learning. Adv. Neural Inf. Process. Syst. 2020, 33, 21271–21284. [Google Scholar]
  4. Zbontar, J.; Jing, L.; Misra, I.; LeCun, Y.; Deny, S. Barlow Twins: Self-supervised learning via redundancy reduction. In Proceedings of the International Conference on Machine Learning; PMLR: Cambridge, MA, USA, 2021; pp. 12310–12320. [Google Scholar]
  5. Li, Y.; Pogodin, R.; Sutherland, D.J.; Gretton, A. Self-supervised learning with kernel dependence maximization. Adv. Neural Inf. Process. Syst. 2021, 34, 15543–15556. [Google Scholar]
  6. Wu, Y.; Greenspan, M. Pseudo-Keypoint RKHS Learning for Self-supervised 6DoF Pose Estimation. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2025; pp. 37–56. [Google Scholar]
  7. Ni, X.; Xiong, F.; Zheng, Y.; Wang, L. Graph Contrastive Learning with Kernel Dependence Maximization for Social Recommendation. In Proceedings of the ACM on Web Conference 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 481–492. [Google Scholar]
  8. Sepanj, M.H.; Ghojogh, B.; Fieguth, P. Self-supervised learning using nonlinear dependence. IEEE Access 2025, 13, 190582–190589. [Google Scholar] [CrossRef]
  9. Sepanj, H.; Fieguth, P. Aligning Feature Distributions in VICReg Using Maximum Mean Discrepancy for Enhanced Manifold Awareness in Self-Supervised Representation Learning. J. Comput. Vis. Imaging Syst. 2024, 10, 13–18. [Google Scholar]
  10. Schölkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2002. [Google Scholar]
  11. Gretton, A.; Bousquet, O.; Smola, A.; Schölkopf, B. Measuring statistical dependence with Hilbert-Schmidt norms. In Proceedings of the International Conference on Algorithmic Learning Theory; Springer: Berlin/Heidelberg, Germany, 2005; pp. 63–77. [Google Scholar]
  12. Gretton, A.; Borgwardt, K.; Rasch, M.; Schölkopf, B.; Smola, A. A kernel method for the two-sample-problem. Adv. Neural Inf. Process. Syst. 2006, 19, 513–520. [Google Scholar]
  13. Simon, J.B.; Knutins, M.; Ziyin, L.; Geisz, D.; Fetterman, A.J.; Albrecht, J. On the stepwise nature of self-supervised learning. In Proceedings of the International Conference on Machine Learning; PMLR: Cambridge, MA, USA, 2023; pp. 31852–31876. [Google Scholar]
  14. Schölkopf, B.; Smola, A.; Müller, K.R. Kernel principal component analysis. In Proceedings of the International Conference on Artificial Neural Networks; Springer: Berlin/Heidelberg, Germany, 1997; pp. 583–588. [Google Scholar]
  15. Schölkopf, B.; Smola, A.; Müller, K.R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 1998, 10, 1299–1319. [Google Scholar] [CrossRef]
  16. Schölkopf, B. The kernel trick for distances. Adv. Neural Inf. Process. Syst. 2000, 13. [Google Scholar]
  17. Mika, S.; Ratsch, G.; Weston, J.; Scholkopf, B.; Mullers, K.R. Fisher discriminant analysis with kernels. In Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No. 98th8468); IEEE: Piscataway, NJ, USA, 1999; pp. 41–48. [Google Scholar]
  18. Berahmand, K.; Saberi-Movahed, F.; Sheikhpour, R.; Li, Y.; Jalili, M. A comprehensive survey on spectral clustering with graph structure learning. arXiv 2025, arXiv:2501.13597. [Google Scholar] [CrossRef]
  19. Zhu, Y.; Xu, Y.; Liu, Q.; Wu, S. An empirical study of graph contrastive learning. arXiv 2021, arXiv:2109.01116. [Google Scholar] [CrossRef]
  20. Nyström, E.J. Über die praktische Auflösung von Integralgleichungen mit Anwendungen auf Randwertaufgaben. Acta Math. 1930, 54, 185–204. [Google Scholar] [CrossRef]
  21. Williams, C.; Seeger, M. Using the Nyström method to speed up kernel machines. Adv. Neural Inf. Process. Syst. 2000, 13. [Google Scholar]
  22. Rahimi, A.; Recht, B. Random Features for Large-Scale Kernel Machines. Adv. Neural Inf. Process. Syst. 2007, 20. [Google Scholar]
  23. Liang, Z.; Luo, Y.; Beese, M.; Drexlin, D.J. Multiple Positive Views in Self-Supervised Learning. 2024. Available online: https://openreview.net/forum?id=WGP2pHtLtn (accessed on 2 March 2026).
  24. Tropp, J.A. User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 2012, 12, 389–434. [Google Scholar] [CrossRef]
Figure 1. UMAP projections (axes: UMAP-1 and UMAP-2) of MNIST embeddings from VICReg (left), Kernel VICReg with linear kernel (middle), and Kernel VICReg with Laplacian kernel (right). Colors denote digit classes (label indices 0–9). The Laplacian kernel yields rounder, more isometric clusters, indicating improved class separability.
Figure 2. Robustness of polynomial-kernel Kernel-VICReg across hyperparameter settings and distribution shifts on MNIST. Each grouped bar corresponds to one ( d , c 0 ) configuration, where d is polynomial degree and c 0 is the additive constant in k ( x , y ) = ( γ x y + c 0 ) d . Bars report linear-probe accuracy on clean data and under rotation and contrast shift. The spread across groups indicates strong hyperparameter sensitivity, with certain settings preserving clean/shifted performance.
Figure 3. Robustness sensitivity of RBF-kernel Kernel-VICReg to the bandwidth parameter γ under clean and shifted MNIST evaluation. Curves report linear-probe accuracy on clean data and under rotation and contrast shift. Performance is non-monotonic with respect to γ : moderate values yield better overall robustness, while overly large γ leads to pronounced degradation, demonstrating the need for careful kernel hyperparameter tuning.
Figure 4. Kernel latency scaling across embedding dimensions. Each curve corresponds to a kernel type (Linear, RBF, Polynomial) at a fixed batch size, and line style differentiates batch size while color differentiates kernel family. Latency increases with both embedding dimension and batch size, with RBF consistently incurring the highest compute cost and linear kernel remaining the fastest in most regimes. This figure quantifies the computational overhead of kernel choice under high-dimensional settings.
Figure 5. Kernel memory scaling across embedding dimensions. Color encodes kernel type (Linear, RBF, Polynomial), and line style encodes batch size. Memory usage increases with embedding dimension and batch size for all kernels, while RBF shows the highest memory footprint at larger dimensions, indicating its greater resource demand in high-dimensional regimes.
Table 1. Linear evaluation on ImageNet100 and TinyImageNet with ResNet-18 backbone. The performances of baseline methods on TinyImageNet are adapted from [23].
Model                               ImageNet100   TinyImageNet
SimCLR                              78.64         37.83
SwAV                                74.36         35.39
MoCo                                79.62         41.23
BYOL                                80.88         36.31
DINO                                75.41         35.77
SimSiam                             78.80         27.96
VICReg                              79.77         Collapse
Barlow Twins                        80.63         -
Kernel VICReg—Linear (ours)         77.34         37.48
Kernel VICReg—RBF (ours)            78.14         38.21
Kernel VICReg—Laplacian (ours)      79.92         40.12
Kernel VICReg—RQ (ours)             79.80         40.38
Table 2. Linear evaluation on MNIST and CIFAR-10 with ResNet-18 backbone.
Model                          MNIST   CIFAR-10
VICReg                         97.15   83.41
Kernel VICReg—Linear           98.33   86.08
Kernel VICReg—RBF              92.68   81.13
Kernel VICReg—Laplacian        98.50   84.56
Kernel VICReg—RQ               97.46   86.18
Table 3. Transfer learning accuracy on STL-10. Embeddings are trained on CIFAR-10 (ResNet-18).
Model                          STL-10 Accuracy
VICReg                         69.82
Kernel VICReg—Linear           71.38
Kernel VICReg—RBF              67.21
Kernel VICReg—Laplacian        71.09
Kernel VICReg—RQ               72.34
Table 4. Best-performing ( α , β , ζ ) coefficients for Kernel VICReg across datasets.
Kernel       MNIST (α, β, ζ)   CIFAR-10 (α, β, ζ)   STL-10 (α, β, ζ)   TinyImageNet (α, β, ζ)
Linear       0.5, 2, 3         0.5, 1, 2            0.5, 1, 2          0.1, 2, 0.1
RBF          0.5, 2, 2.5       0.5, 1, 3            0.5, 1, 2          0.1, 2, 0.2
Laplacian    0.5, 2, 3         0.2, 1, 2            0.5, 1, 2          0.1, 2, 0.1
RQ           0.5, 2, 3         1.9, 1, 8            1.6, 1, 7          0.2, 2, 0.3
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
