Viktor Stein

My first preprint is online: arxiv.org/abs/2402.04613 :)

We define and analyse Maximum Mean Discrepancy (MMD) regularized f-divergences \(D_{f,\nu}\) and their gradient flows.

We define the λ-regularized f-divergence for \(\lambda > 0\) as
\[
D_{f,\nu}^{\lambda}(\mu) := \min_{\sigma \in \mathcal{M}_+(\mathbb{R}^d)} D_{f,\nu}(\sigma) + \frac{1}{2\lambda}\, d_K(\mu, \sigma)^2,
\]
(yes, the min is attained!) where \(d_K\) is the kernel metric
\[
d_K(\mu, \nu) := \| m_{\mu - \nu} \|_{\mathcal{H}_K},
\]
where \((\mathcal{H}_K, \langle \cdot, \cdot \rangle_{\mathcal{H}_K})\) is the Reproducing Kernel Hilbert Space for the kernel \(K : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}\) and
\[
m : \mathcal{M}(\mathbb{R}^d) \to \mathcal{H}_K, \qquad \mu \mapsto \int_{\mathbb{R}^d} K(x, \cdot)\, \mathrm{d}\mu(x)
\]
is the kernel mean embedding (KME) of finite signed measures in the RKHS.
One can imagine the KME as the generalization of the feature map \(x \mapsto K(x, \cdot)\) from points in \(\mathbb{R}^d\) to measures on \(\mathbb{R}^d\).
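
As a rough numerical illustration (not from the paper), here is a minimal Python sketch of the empirical KME and of the kernel metric \(d_K\) between two sample-based measures. The Gaussian kernel, the sample sizes, and the helper names are assumptions made purely for this example.

import numpy as np

# Minimal sketch: empirical kernel mean embedding and the kernel metric d_K
# (i.e. the MMD) for a Gaussian kernel and empirical measures.

def gaussian_kernel(X, Y, sigma=1.0):
    # K(x, y) = exp(-||x - y||^2 / (2 sigma^2)), evaluated on all pairs
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq_dists / (2 * sigma**2))

def kme(X, sigma=1.0):
    # m(mu) = (1/n) sum_i K(x_i, .) for the empirical measure mu = (1/n) sum_i delta_{x_i},
    # returned as a function of the evaluation points (reproducing property)
    return lambda Z: gaussian_kernel(Z, X, sigma).mean(axis=1)

def kernel_metric(X, Y, sigma=1.0):
    # d_K(mu, nu) = || m_mu - m_nu ||_{H_K}, expanded with the kernel trick:
    # ||m_mu - m_nu||^2 = E[K(x,x')] - 2 E[K(x,y)] + E[K(y,y')]
    Kxx = gaussian_kernel(X, X, sigma).mean()
    Kxy = gaussian_kernel(X, Y, sigma).mean()
    Kyy = gaussian_kernel(Y, Y, sigma).mean()
    return np.sqrt(max(Kxx - 2 * Kxy + Kyy, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))      # samples of mu
Y = rng.normal(0.5, 1.0, size=(200, 2))      # samples of nu
print(kme(X)(np.zeros((1, 2))))              # embedding of mu, evaluated at the origin
print(kernel_metric(X, Y))                   # d_K between the two empirical measures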

We then show that for any \(\nu \in \mathcal{M}_+(\mathbb{R}^d)\) there exists a proper, convex, lower semicontinuous functional \(G_{f,\nu} : \mathcal{H}_K \to (-\infty, \infty]\) such that \(D_{f,\nu}^{\lambda} = G_{f,\nu}^{\lambda} \circ m\), where \(F^{\lambda}\) denotes the Moreau envelope of \(F\) in the Hilbert space \(\mathcal{H}_K\).
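
For intuition, the λ-regularization above has exactly the structure of a Moreau envelope, \(G^{\lambda}(h) = \min_g G(g) + \frac{1}{2\lambda}\|h - g\|^2\). A toy one-dimensional Python sketch of this construction, with \(G = |\cdot|\) as an assumed stand-in for \(G_{f,\nu}\) (not the functional from the paper), might look like:

import numpy as np

def moreau_envelope(G, x, lam, grid):
    # brute-force minimisation of y -> G(y) + (x - y)^2 / (2 lam) over a grid
    vals = G(grid) + (x - grid) ** 2 / (2 * lam)
    return vals.min()

G = np.abs                              # toy proper convex lsc function
grid = np.linspace(-5, 5, 20001)
for lam in [0.01, 0.1, 1.0]:
    # for G = |.| the envelope is the Huber function:
    # x^2/(2 lam) if |x| <= lam, else |x| - lam/2
    print(lam, moreau_envelope(G, 2.0, lam, grid))

As λ shrinks, the printed values approach \(G(2) = 2\), the usual convergence of Moreau envelopes to the original function.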

We can now use standard Convex Analysis in Hilbert spaces to calculate the (\(\tfrac{1}{\lambda}\)-Lipschitz-continuous) gradient of \(D_{f,\nu}^{\lambda}\) and find the limits for \(\lambda \in \{0, \infty\}\) (pointwise and in the sense of Mosco), showing that \(D_{f,\nu}^{\lambda}\) interpolates between \(D_{f,\nu}\) and \(d_K(\cdot, \nu)^2\).
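
The gradient statement can likewise be illustrated in the toy one-dimensional case via the standard convex-analysis identity \(\nabla G^{\lambda}(x) = \frac{1}{\lambda}\,(x - \mathrm{prox}_{\lambda G}(x))\). Again \(G = |\cdot|\) (whose prox is soft-thresholding) is only an assumed stand-in, not the functional from the paper.

import numpy as np

def prox_abs(x, lam):
    # proximal mapping of lam * |.| : soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def grad_envelope(x, lam):
    # gradient of the Moreau envelope of |.| via the prox formula
    return (x - prox_abs(x, lam)) / lam

xs = np.linspace(-3, 3, 601)
for lam in [0.1, 1.0]:
    g = grad_envelope(xs, lam)
    # the maximal finite-difference slope approaches 1/lam,
    # illustrating the 1/lambda-Lipschitz continuity of the gradient
    print(lam, np.max(np.abs(np.diff(g) / np.diff(xs))))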