mathstodon.xyz is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon instance for maths people. We have LaTeX rendering in the web interface!

Server stats: 2.8K active users

#wasserstein

JMLR: 'Wasserstein Convergence Guarantees for a General Class of Score-Based Generative Models', by Xuefeng Gao, Hoang M. Nguyen, Lingjiong Zhu.
http://jmlr.org/papers/v26/24-0902.html
#generative #wasserstein #models

JMLR: 'Sliced-Wasserstein Distances and Flows on Cartan-Hadamard Manifolds', by Clément Bonet, Lucas Drumetz, Nicolas Courty.
http://jmlr.org/papers/v26/24-0359.html
#manifolds #manifold #wasserstein

JMLR: 'Correction to "Wasserstein distance estimates for the distributions of numerical approximations to ergodic stochastic differential equations"', by Daniel Paulin, Peter A. Whalley.
http://jmlr.org/papers/v25/24-0895.html
#ergodic #wasserstein #approximations

JMLR: 'Entropic Gromov-Wasserstein Distances: Stability and Algorithms', by Gabriel Rioux, Ziv Goldfeld, Kengo Kato.
http://jmlr.org/papers/v25/24-0039.html
#regularization #wasserstein #variational

JMLR: 'Wasserstein Proximal Coordinate Gradient Algorithms', by Rentian Yao, Xiaohui Chen, Yun Yang.
http://jmlr.org/papers/v25/23-0889.html
#wasserstein #optimization #gradient

JMLR: 'Characterization of translation invariant MMD on R^d and connections with Wasserstein distances', by Thibault Modeste, Clément Dombry.
http://jmlr.org/papers/v25/22-1338.html
#wasserstein #measures #mmds

JMLR: 'Adjusted Wasserstein Distributionally Robust Estimator in Statistical Learning', by Yiling Xie, Xiaoming Huo.
http://jmlr.org/papers/v25/23-0379.html
#wasserstein #estimators #robust

JMLR: 'Nonasymptotic analysis of Stochastic Gradient Hamiltonian Monte Carlo under local conditions for nonconvex optimization', by O. Deniz Akyildiz, Sotirios Sabanis.
http://jmlr.org/papers/v25/21-1423.html
#wasserstein #nonasymptotic #stochastic

JMLR: 'Tangential Wasserstein Projections', by Florian Gunsilius, Meng Hsuan Hsieh, Myung Jin Lee.
http://jmlr.org/papers/v25/23-0708.html
#wasserstein #projections #causal

Viktor Stein: My first preprint is online: https://arxiv.org/abs/2402.04613 :)

We define and analyse Maximum Mean Discrepancy (MMD) regularized \(f\)-divergences \( D_{f, \nu} \) and their #Wasserstein gradient flows.

We define the \(\lambda\)-regularized \(f\)-divergence for \(\lambda > 0\) as
\[ D_{f, \nu}^\lambda(\mu) := \min_{\sigma \in M_+(\mathbb{R}^d)} D_{f, \nu}(\sigma) + \frac{1}{2 \lambda} d_K(\mu, \sigma)^2, \]
(yes, the min is attained!) where \( d_K \) is the kernel metric
\[ d_K(\mu, \nu) := \| m_{\mu - \nu} \|_{\mathcal H_K}, \]
where \( (\mathcal H_K, \| \cdot \|_{\mathcal H_K}) \) is the reproducing kernel Hilbert space for the kernel \( K \colon \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R} \) and
\[ m \colon M(\mathbb{R}^d) \to \mathcal H_K, \qquad \mu \mapsto \int_{\mathbb{R}^d} K(x, \cdot) \, \mathrm{d}\mu(x) \]
is the kernel mean embedding (KME) of finite signed measures into the RKHS.
One can imagine the KME as the generalization of the #KernelTrick from points in \( \mathbb{R}^d \) to measures on \( \mathbb{R}^d \).

We then show that for any \( \nu \in M_+(\mathbb{R}^d) \) there exists a proper, convex, lower semicontinuous functional \( G_{f, \nu} \colon \mathcal H_K \to (-\infty, \infty] \) such that
\[ D_{f, \nu}^{\lambda} = G_{f, \nu}^{\lambda} \circ m, \]
where \( F^{\lambda} \) denotes the usual Hilbert-space #MoreauEnvelope of \( F \).

We can now use standard convex analysis in Hilbert spaces to calculate the (\(\frac{1}{\lambda}\)-Lipschitz-continuous) gradient of \( D_{f, \nu}^{\lambda} \) and find the limits for \( \lambda \to \{0, \infty\} \) (pointwise and in the sense of Mosco), showing that \( D_{f, \nu}^{\lambda} \) interpolates between \( D_{f, \nu} \) and \( d_K(\cdot, \nu)^2 \).

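For context, the gradient the post alludes to comes from the standard Moreau-envelope identity in a Hilbert space \( \mathcal H \) (textbook convex analysis, not a claim specific to the preprint): for \( F^{\lambda}(x) := \min_{y \in \mathcal H} F(y) + \frac{1}{2\lambda} \| x - y \|^2 \),
\[ \nabla F^{\lambda}(x) = \frac{1}{\lambda} \bigl( x - \operatorname{prox}_{\lambda F}(x) \bigr), \qquad \operatorname{prox}_{\lambda F}(x) := \operatorname*{arg\,min}_{y \in \mathcal H} \, F(y) + \frac{1}{2\lambda} \| x - y \|^2, \]
and \( \nabla F^{\lambda} \) is \( \frac{1}{\lambda} \)-Lipschitz, which is where the Lipschitz constant quoted above comes from.
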
JMLR: 'Fair Data Representation for Machine Learning at the Pareto Frontier', by Shizhou Xu, Thomas Strohmer.
http://jmlr.org/papers/v24/22-0005.html
#wasserstein #supervised #fairness

JMLR: 'A PDE approach for regret bounds under partial monitoring', by Erhan Bayraktar, Ibrahim Ekren, Xin Zhang.
http://jmlr.org/papers/v24/22-1001.html
#wasserstein #forecaster #regret

Fabrizio Musacchio: Eliminating the middleman: you can compute the #Wasserstein distance even more directly in #WassersteinGANs (#WGANs), eliminating the need for a discriminator.
🌎 https://www.fabriziomusacchio.com/blog/2023-07-30-wgan_with_direct_wasserstein_distance/
#MachineLearning

Fabrizio Musacchio: The #Wasserstein #metric (#EMD) can be used to train #GenerativeAdversarialNetworks (#GANs) more effectively. This tutorial compares a default GAN with a #WassersteinGAN (#WGAN) trained on the #MNIST dataset.
🌎 https://www.fabriziomusacchio.com/blog/2023-07-29-wgan/
#MachineLearning

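The difference the tutorial above explores can be sketched at the loss level (a minimal sketch of the two objectives; the function names and toy values are mine, and the Lipschitz constraint a WGAN critic needs, via weight clipping or a gradient penalty, is not shown):

```python
import numpy as np

def gan_discriminator_loss(d_real, d_fake):
    """Standard GAN discriminator loss: binary cross-entropy.
    d_real, d_fake are sigmoid outputs in (0, 1)."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def wgan_critic_loss(c_real, c_fake):
    """WGAN critic loss: the critic maximizes E[c(real)] - E[c(fake)]
    (raw scores, no sigmoid, no log); we return the negated value so
    it can be minimized like any other loss."""
    return -(np.mean(c_real) - np.mean(c_fake))

# Toy scores: a confident discriminator vs. a critic separating the batches.
print(gan_discriminator_loss(np.array([0.9, 0.8]), np.array([0.1, 0.2])))
print(wgan_critic_loss(np.array([1.0, 1.0]), np.array([0.0, 0.0])))
```

The unbounded critic score is what lets the WGAN loss approximate a Wasserstein-1 distance instead of a Jensen-Shannon divergence, which is the training-stability point the tutorial makes.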
Fabrizio Musacchio: Apart from the #Wasserstein distance (#EMD), other #metrics also play an important role in #MachineLearning tasks such as #clustering, #classification, and #InformationRetrieval. In this tutorial, you can find a discussion of five commonly used metrics: EMD, #KullbackLeiblerDivergence (KL divergence), #JensenShannonDivergence (JS divergence), #TotalVariationDistance (TV distance), and #BhattacharyyaDistance.
🌎 https://www.fabriziomusacchio.com/blog/2023-07-28-probability_density_metrics/

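The five metrics that tutorial discusses are easy to try on two discrete distributions; a minimal NumPy sketch (function name and the example distributions are mine, not from the post):

```python
import numpy as np

def prob_metrics(p, q, support=None):
    """Five common metrics between discrete distributions p and q on a
    shared support (atom locations default to 0, 1, ..., n-1)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    x = np.arange(len(p), dtype=float) if support is None else np.asarray(support, float)
    eps = 1e-12  # guards log(0) and 0/0 in the divergences
    kl = np.sum(p * np.log((p + eps) / (q + eps)))        # KL(p || q)
    m = 0.5 * (p + q)
    js = 0.5 * np.sum(p * np.log((p + eps) / (m + eps))) \
       + 0.5 * np.sum(q * np.log((q + eps) / (m + eps)))  # Jensen-Shannon
    tv = 0.5 * np.sum(np.abs(p - q))                      # total variation
    bc = np.sum(np.sqrt(p * q))                           # Bhattacharyya coefficient
    bhatt = -np.log(bc + eps)                             # Bhattacharyya distance
    # 1-D EMD: area between the CDFs, weighted by the gaps of the support.
    cdf_gap = np.abs(np.cumsum(p) - np.cumsum(q))[:-1]
    emd = np.sum(cdf_gap * np.diff(x))
    return {"kl": kl, "js": js, "tv": tv, "bhattacharyya": bhatt, "emd": emd}

p = np.array([0.1, 0.4, 0.5])
q = np.array([0.3, 0.4, 0.3])
print(prob_metrics(p, q))
```

Note that only the EMD uses the geometry of the support; the other four only compare probabilities bin by bin, which is the key distinction the tutorial draws.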
Fabrizio Musacchio: The #Wasserstein distance (#EMD), the sliced Wasserstein distance (#SWD), and the #L2norm are common #metrics used to quantify the 'distance' between two distributions. This tutorial compares these three metrics and discusses their advantages and disadvantages.
🌎 https://www.fabriziomusacchio.com/blog/2023-07-26-wasserstein_vs_l2_norm/
#OptimalTransport #MachineLearning

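The sliced Wasserstein distance mentioned above reduces a d-dimensional comparison to many cheap 1-D ones; a minimal Monte-Carlo sketch (my own toy implementation and data, not the tutorial's code):

```python
import numpy as np

def wasserstein_1d(u, v):
    """W1 between two equal-size empirical samples: sort, then average gaps."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

def sliced_wasserstein(X, Y, n_proj=200, rng=None):
    """Monte-Carlo sliced W1: average the 1-D W1 of the samples projected
    onto n_proj random unit directions."""
    rng = np.random.default_rng(rng)
    thetas = rng.normal(size=(n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)  # unit vectors
    return np.mean([wasserstein_1d(X @ t, Y @ t) for t in thetas])

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(3.0, 1.0, size=(500, 2))  # same shape, mean shifted to (3, 3)

swd = sliced_wasserstein(X, Y, rng=0)
l2 = np.linalg.norm(X.mean(axis=0) - Y.mean(axis=0))  # crude L2 gap of means
print(swd, l2)
```

Each projection costs only a sort, which is why SWD scales so much better than the full multi-dimensional EMD while still reflecting the geometry of the shift.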
Fabrizio Musacchio: This tutorial takes a different approach to explaining the #Wasserstein distance (#EMD): it approximates the EMD with cumulative distribution functions (#CDF), providing a more intuitive understanding of the metric.
🌎 https://www.fabriziomusacchio.com/blog/2023-07-24-wasserstein_distance_cdf_approximation/
#OptimalTransport

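The CDF view is easy to verify numerically: in 1-D the EMD equals the area between the two CDFs. A sketch (mine, not the tutorial's code), checked against `scipy.stats.wasserstein_distance`:

```python
import numpy as np
from scipy.stats import wasserstein_distance  # reference implementation

def emd_from_cdfs(p, q, x):
    """1-D EMD as the area between the CDFs: sum of |F_p - F_q| * dx."""
    gap = np.abs(np.cumsum(p) - np.cumsum(q))
    return np.sum(gap[:-1] * np.diff(x))

x = np.linspace(-4, 4, 400)
# Two discretized unit Gaussians on a shared grid, normalized to sum to 1.
p = np.exp(-0.5 * (x + 1.0) ** 2); p /= p.sum()
q = np.exp(-0.5 * (x - 1.0) ** 2); q /= q.sum()

emd = emd_from_cdfs(p, q, x)
ref = wasserstein_distance(x, x, p, q)
print(emd, ref)  # both close to 2.0, the shift between the two means
```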
Fabrizio Musacchio: Calculating the #Wasserstein distance (#EMD) 📈 can be computationally costly when using #LinearProgramming. The #Sinkhorn algorithm provides a computationally efficient method for approximating the EMD, making it a practical choice for many applications, especially for large datasets 💫. Here is another tutorial showing how to solve the #OptimalTransport problem using the Sinkhorn algorithm in #Python 🐍
🌎 https://www.fabriziomusacchio.com/blog/2023-07-23-wasserstein_distance_sinkhorn/

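The Sinkhorn iteration itself fits in a few lines; a bare-bones sketch on toy 1-D histograms of my own choosing (not the tutorial's code, and without the log-domain stabilization a production version would use):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=5000):
    """Entropy-regularized optimal transport via Sinkhorn iterations.
    a, b: histograms summing to 1; C: cost matrix; eps: regularization.
    Returns the transport plan P and the transport cost <P, C>."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):       # alternately match the two marginals
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return P, np.sum(P * C)

n = 50
x = np.linspace(0, 1, n)
a = np.exp(-((x - 0.2) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.01); b /= b.sum()
C = np.abs(x[:, None] - x[None, :])   # |x - y| ground cost, so <P, C> ≈ W1

P, cost = sinkhorn(a, b, C)
print(cost)  # close to 0.5, the shift between the two bumps
```

Each iteration is just two matrix-vector products, which is the efficiency argument the post makes against solving the full linear program.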
Fabrizio Musacchio: The #Wasserstein distance 📐, aka Earth Mover's Distance (#EMD), provides a robust and insightful approach for comparing #ProbabilityDistributions 📊. I've composed a #Python tutorial 🐍 that explains the #OptimalTransport problem required to calculate the EMD. It also shows how to solve the OT problem and calculate the EMD using the Python Optimal Transport (POT) library. Feel free to use and share it 🤗
🌎 https://www.fabriziomusacchio.com/blog/2023-07-23-wasserstein_distance/

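The tutorial uses the POT library; as a library-free illustration of the same OT linear program, one can hand it to SciPy's `linprog` (a sketch with toy data of my own, not the tutorial's code):

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import wasserstein_distance

def emd_lp(a, b, C):
    """Exact EMD by solving the OT linear program:
    minimize <P, C> subject to P @ 1 = a, P.T @ 1 = b, P >= 0,
    with P flattened row-major into the LP variable vector."""
    n, m = C.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # row i of P sums to a[i]
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # column j of P sums to b[j]
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

x = np.array([0.0, 1.0, 2.0])
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
C = np.abs(x[:, None] - x[None, :])
print(emd_lp(a, b, C))  # agrees with the 1-D closed form
```

The LP has n*m variables, which is exactly the scaling problem that motivates the Sinkhorn approximation in the companion tutorial.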
New Submissions to TMLR: 'Convergence of SGD for Training Neural Networks with Sliced Wasserstein Losses'
https://openreview.net/forum?id=aqqfB3p9ZA
#sgd #wasserstein #generative