Permutation Learning with Only N Parameters: From SoftSort to Self-Organizing Gaussians
1 Visual Computing Group @ HTW Berlin, 2 Fraunhofer HHI Berlin
Abstract
Permutation learning is essential for organizing high-dimensional data in optimization and machine learning. Current methods such as Gumbel-Sinkhorn operate on the full permutation matrix and therefore require N^2 parameters for N objects. Low-rank approximations reduce this to 2NM parameters (with M << N), but still form a computational bottleneck for very large datasets. SoftSort, a continuous relaxation of the argsort operator, enables differentiable 1D sorting but struggles with multidimensional data and complex permutations. We introduce a novel method for learning permutations using only N parameters, dramatically reducing storage costs. Our method extends SoftSort by iteratively shuffling the N indices of the elements to be sorted and applying a few SoftSort optimization steps per iteration. This significantly improves sorting quality, especially for multidimensional data and complex sorting criteria, and outperforms pure SoftSort. Our method offers superior memory efficiency and scalability while maintaining high-quality permutation learning. Its drastically reduced memory requirements make it ideal for large-scale optimization tasks such as "Self-Organizing Gaussians", where efficient and scalable permutation learning is critical.
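To make the SoftSort building block concrete, here is a minimal NumPy sketch of the SoftSort operator (Prillo & Eisenschlos, 2020) that the abstract refers to. It is an illustrative forward pass only, not the authors' implementation: SoftSort relaxes the hard permutation matrix of argsort into a row-stochastic matrix via a row-wise softmax over negative pairwise distances, controlled by a temperature tau.

```python
import numpy as np

def softsort(s, tau=1.0):
    """Continuous relaxation of the argsort operator (SoftSort).

    Returns an N x N row-stochastic matrix; row i is a softmax over
    how close each input element is to the i-th largest element.
    As tau -> 0 it approaches the hard (descending) sort permutation.
    """
    s = np.asarray(s, dtype=float)
    sorted_s = np.sort(s)[::-1]                       # descending order
    logits = -np.abs(sorted_s[:, None] - s[None, :]) / tau
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

s = np.array([2.0, -1.0, 0.5])
P = softsort(s, tau=0.1)
print(np.round(P, 3))
```

With a small tau, taking the row-wise argmax of P recovers the hard permutation (here: elements 0, 2, 1 in descending order). Because the softmax is differentiable in s, gradients can flow through P to the N underlying parameters, which is what enables the optimization described above.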
BibTeX
If you use our work in your research, please cite our publication:
@inproceedings{Barthel2025Permutation,
author = {Barthel, Kai Uwe and Barthel, Florian Tim and Eisert, Peter},
title = {Permutation Learning with Only N Parameters: From SoftSort to Self-Organizing Gaussians},
booktitle = {Proceedings of the 33rd European Signal Processing Conference (EUSIPCO)},
year = {2025},
pages = {1892--1896},
address = {Palermo, Italy},
month = {September},
publisher = {IEEE},
isbn = {978-9-46-459362-4},
url = {https://eusipco2025.org/wp-content/uploads/pdfs/0001892.pdf}
}