My first preprint is online: https://arxiv.org/abs/2402.04613 :)
We define and analyse Maximum Mean Discrepancy (MMD) regularized $f$-divergences and their #Wasserstein gradient flows.
We define the $\lambda$-regularized $f$-divergence for $\lambda > 0$ as
$$D_{f,\nu}^{\lambda}(\mu) := \min_{\sigma \in \mathcal{M}_+(\mathbb{R}^d)} D_f(\sigma \mid \nu) + \frac{1}{2\lambda}\, d_K(\mu, \sigma)^2$$
(yes, the min is attained!) where $d_K$ is the kernel metric
$$d_K(\mu, \sigma) := \| m_\mu - m_\sigma \|_{\mathcal{H}_K},$$
where $\mathcal{H}_K$ is the Reproducing Kernel Hilbert Space for the kernel $K$ and
$$m \colon \mathcal{M}(\mathbb{R}^d) \to \mathcal{H}_K, \quad \mu \mapsto \int_{\mathbb{R}^d} K(x, \cdot)\, \mathrm{d}\mu(x)$$
is the kernel mean embedding (KME) of finite signed measures in the RKHS.
One can imagine the KME to be the generalization of the #KernelTrick from points in $\mathbb{R}^d$ to measures on $\mathbb{R}^d$.
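To make the kernel metric concrete, here is a minimal NumPy sketch (my own illustration, not code from the paper) that evaluates $d_K(\mu, \nu)^2$ for two empirical measures with uniform weights, assuming a Gaussian kernel:

import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gram matrix with entries K(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2))
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd_squared(X, Y, sigma=1.0):
    # d_K(mu, nu)^2 = ||m_mu - m_nu||^2 expanded into three kernel averages,
    # where mu, nu are the uniform empirical measures on the rows of X and Y.
    return (gaussian_kernel(X, X, sigma).mean()
            - 2 * gaussian_kernel(X, Y, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))   # samples representing mu
Y = rng.normal(0.5, 1.0, size=(200, 2))   # samples representing nu
print(mmd_squared(X, Y))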
We then show that for any $f$ there exists a proper, convex, lower semicontinuous functional $G_{f,\nu} \colon \mathcal{H}_K \to (-\infty, \infty]$ such that $D_{f,\nu}^{\lambda} = G_{f,\nu}^{\lambda} \circ m$, where $G_{f,\nu}^{\lambda}$ denotes the normal Hilbert space #MoreauEnvelope of $G_{f,\nu}$.
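For reference, the (standard, not paper-specific) Moreau envelope of a proper, convex, lower semicontinuous functional $G$ on a Hilbert space $\mathcal{H}$ with parameter $\lambda > 0$ is
$$G^{\lambda}(h) := \min_{g \in \mathcal{H}} \, G(g) + \frac{1}{2\lambda} \| h - g \|_{\mathcal{H}}^2,$$
so evaluating it at $h = m_\mu$ recovers exactly the minimization defining $D_{f,\nu}^{\lambda}$.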
We can now use standard Convex Analysis in Hilbert spaces to calculate the ($1/\lambda$-Lipschitz-continuous) gradient of $D_{f,\nu}^{\lambda}$ and find the limits for $\lambda \to 0$ and $\lambda \to \infty$ (pointwise and in the sense of Mosco), showing that $D_{f,\nu}^{\lambda}$ interpolates between $D_f(\cdot, \nu)$ and $d_K(\cdot, \nu)^2$.
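A toy illustration of the resulting smoothing (again my own example, with the absolute value standing in for the more general functionals of the paper): the Moreau envelope of $|\cdot|$ on $\mathbb{R}$ is the Huber function, and its gradient is $\mathrm{clip}(x/\lambda, -1, 1)$, which is exactly $1/\lambda$-Lipschitz.

import numpy as np

def moreau_env_abs(x, lam):
    # Moreau envelope of |.| with parameter lam (the Huber function):
    # x^2 / (2 lam) where |x| <= lam, and |x| - lam/2 otherwise.
    return np.where(np.abs(x) <= lam, x**2 / (2 * lam), np.abs(x) - lam / 2)

def grad_moreau_env_abs(x, lam):
    # Gradient (x - prox_{lam|.|}(x)) / lam = clip(x / lam, -1, 1),
    # which is 1/lam-Lipschitz, matching the general Hilbert-space statement.
    return np.clip(x / lam, -1.0, 1.0)

x = np.linspace(-3, 3, 7)
print(moreau_env_abs(x, lam=0.5))
print(grad_moreau_env_abs(x, lam=0.5))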