- Article
Novel Loss Functions for Improved Data Visualization in t-SNE
- Sara Nassar,
- Rachid Hedjam and
- Samir Brahim Belhaouari
A popular method for projecting high-dimensional data onto a lower-dimensional space while preserving the integrity of its structure is t-distributed Stochastic Neighbor Embedding (t-SNE). This technique minimizes the Kullback–Leibler (KL) divergence to align the similarities between points in the original and reduced spaces. While t-SNE is highly effective, it prioritizes local neighborhood preservation, which can result in limited separation between distant clusters and an inadequate representation of global relationships. To address these limitations, this work introduces two complementary approaches: (1) the Max-Flipped Divergence, which modifies the original KL divergence by incorporating a contrastive term that enhances the ranking of point similarities through maximum-similarity constraints; and (2) the KL-Wasserstein Loss, which combines the KL divergence with the classic Wasserstein distance, allowing the embedding to benefit from the smooth, geometry-aware transport properties of Wasserstein metrics. Experimental results show that these methods lead to improved cluster separation and better structural clarity in the low-dimensional space compared to standard t-SNE.
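For context, the baseline objective that both proposed losses modify is the KL divergence between the pairwise similarity matrix P in the original space and its low-dimensional counterpart Q. A minimal NumPy sketch of that standard objective follows; the function name and interface are illustrative, not taken from the paper:

```python
import numpy as np

def tsne_kl_loss(P, Q):
    """Standard t-SNE objective: sum_ij p_ij * log(p_ij / q_ij).

    P and Q are joint similarity matrices over point pairs, each
    summing to 1. Entries with p_ij == 0 contribute nothing to the
    sum and are skipped to avoid log(0).
    """
    mask = P > 0
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))
```

The loss is zero when Q reproduces P exactly and grows as points that are similar in the original space are placed far apart in the embedding, which is why minimizing it preserves local neighborhoods well but constrains global cluster placement only weakly.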
18 February 2026





