Optimal Transport

Wasserstein proximal gradient algorithm

Transporting an initial distribution to a target distribution is a central problem in optimization theory and machine learning, grounded in the theory of optimal transport. In machine learning, Wasserstein gradient flows provide continuous-time dynamics that integrate naturally into many frameworks. We study a minimization problem over probability measures with finite second moments, using a discrete-time algorithm whose objective consists of a potential energy term and a convex nonsmooth term serving as a regularizer. We extend previous results by providing convergence analyses for both univariate and multivariate Gaussian distributions, and we introduce a mixed Gaussian optimal transport problem. The accompanying repository provides the code and supplementary materials for exploring our findings.
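As an illustration of the kind of discrete-time scheme described above, the sketch below runs a proximal gradient iteration restricted to univariate Gaussians N(m, s²). It relies on the fact that W₂²(N(m, s²), N(m*, s*²)) = (m − m*)² + (s − s*)², so the univariate Gaussian family with the Wasserstein metric is isometric to a Euclidean half-plane in (m, s). The smooth potential (half the squared W₂ distance to a target), the ℓ₁ regularizer on the mean, the step size, and all parameter values are illustrative assumptions, not the algorithm or objective from the report.

```python
import math


def soft_threshold(x, t):
    # Proximal operator of t * |x| (soft-thresholding).
    return math.copysign(max(abs(x) - t, 0.0), x)


def prox_grad_gaussian(m0, s0, m_star, s_star, lam=0.1, step=0.5, iters=200):
    """Proximal gradient over univariate Gaussians N(m, s^2).

    Hypothetical objective (an assumption for illustration):
        F(m, s) = 1/2 * W2^2(N(m, s^2), N(m_star, s_star^2)) + lam * |m|
                = 1/2 * ((m - m_star)^2 + (s - s_star)^2) + lam * |m|,
    split into a smooth potential-energy term (gradient step) and a
    convex nonsmooth regularizer (proximal step).
    """
    m, s = m0, s0
    for _ in range(iters):
        # Gradient step on the smooth potential energy term.
        m -= step * (m - m_star)
        s -= step * (s - s_star)
        # Proximal step on the nonsmooth regularizer lam * |m|.
        m = soft_threshold(m, step * lam)
        # Keep the standard deviation in the valid region s > 0.
        s = max(s, 1e-12)
    return m, s
```

With `m_star=2, s_star=1`, the iteration contracts geometrically: `s` converges to the target `s_star`, while the ℓ₁ penalty biases the mean slightly toward zero (the fixed point solves `m = m_star - lam`, giving `m = 1.9` here). The same forward-backward splitting pattern extends to the multivariate Gaussian (Bures–Wasserstein) case.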

You can find a detailed report, results, and code here.