While browsing the state of the art I came across two interesting things. The first is the software package lolremez (github.com/samhocevar/lolremez), which implements polynomial (and rational, $f(x) / g(x)$) function approximation using the so-called Remez algorithm. After building it, it seems to work rather well. The author also maintains a wiki on the GitHub project with a bunch of tips for using the API to do special things, like rounding the coefficients to easily representable numbers or enforcing odd-function symmetry in the approximation.


Lolremez outputs the approximated polynomial in Horner (en.wikipedia.org/wiki/Horner%2) evaluation form. Horner's method is suboptimal for my purposes because it does not minimize the number of non-scalar multiplications, which are the bottleneck in FHE. Instead, there's a nice technique from the computational matrix algebra world called Paterson-Stockmeyer (en.wikipedia.org/wiki/Polynomi) that reduces the number of non-scalar multiplications from $O(n)$ to $O(\sqrt{n})$. This is what most FHE people use by default, as far as I can tell.
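The baby-step/giant-step idea behind Paterson-Stockmeyer is short enough to sketch in plain Python (with ordinary floats here; in FHE, each product involving `x` would be a ciphertext-ciphertext multiplication, which is exactly what the scheme wants to avoid):

```python
import math

def paterson_stockmeyer(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) with O(sqrt(n)) non-scalar multiplications.

    Baby steps: precompute x^1, ..., x^k for k ~ sqrt(n).
    Giant steps: run Horner's rule in x^k over blocks of k coefficients;
    each block itself only needs scalar-times-power products.
    """
    n = len(coeffs)
    k = max(1, math.isqrt(n))
    # baby steps: powers[i] == x**i for i = 0..k
    powers = [1, x]
    for _ in range(k - 1):
        powers.append(powers[-1] * x)
    xk = powers[k]
    # giant steps: Horner in x^k, highest block first
    result = 0
    for start in reversed(range(0, n, k)):
        block = sum(c * powers[i] for i, c in enumerate(coeffs[start:start + k]))
        result = result * xk + block
    return result

# matches direct evaluation of 5 - x + 2x^3 + 7x^4 at x = 3
print(paterson_stockmeyer([5, -1, 0, 2, 7], 3))  # → 623
```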


So I wrote a little Python CLI to convert lolremez output to coefficient form. It uses `python-fire` and `sympy` to do the hard work.
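The core of such a conversion is mostly a one-liner with sympy; here's a minimal sketch (the function name and interface are my own illustration, not the actual CLI, which additionally wraps the function with python-fire for the command line):

```python
import sympy

def horner_to_coeffs(expr_str, var="x"):
    """Expand a Horner-form polynomial string into a flat coefficient list.

    Returns coefficients ordered from lowest to highest degree,
    the ordering Paterson-Stockmeyer-style evaluators typically want.
    """
    x = sympy.Symbol(var)
    poly = sympy.Poly(sympy.expand(sympy.sympify(expr_str)), x)
    return poly.all_coeffs()[::-1]  # sympy lists the highest degree first

# Horner form of 2x^3 + 3x^2 + 4x + 5
print(horner_to_coeffs("x*(x*(2*x+3)+4)+5"))  # → [5, 4, 3, 2]
```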

The second interesting thing is how FHE researchers have adapted Remez to FHE specifically. This paper by Joon-Woo Lee, Eunsang Lee, Yongwoo Lee, Young-Sik Kim, and Jong-Seon No (eprint.iacr.org/2020/552) develops a "multi-interval" Remez method that jointly optimizes the approximation over multiple disjoint intervals. This is useful for approximating something like the sign function, which has a discontinuity at zero (though you can also handle that in lolremez using odd-function symmetry tricks). The authors use the multi-interval method with $[-1, -\varepsilon] \cup [\varepsilon, 1]$ for small $\varepsilon > 0$ to approximate sign. There is an implementation of this algorithm in the lattigo library (github.com/tuneinsight/lattigo), which is also the project that tipped me off about Paterson-Stockmeyer. Kudos to Jean-Philippe Bossuat, its primary author, for making so much useful knowledge public and easy to find.
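To see why excluding $(-\varepsilon, \varepsilon)$ helps, here's a toy stand-in for the interval trick. This is not the paper's algorithm (Remez minimizes the maximum error; this minimizes the squared error on sample points), and every name below is my own invention, but the key move is the same: by never sampling inside the gap, the fit doesn't have to chase the discontinuity at zero.

```python
def fit_sign_on_union(eps=0.05, degree=5, samples=200):
    """Least-squares fit of an odd polynomial to sign(x) on [-1,-eps] U [eps,1].

    Only odd basis terms x, x^3, x^5, ... are used, so oddness is built in
    and it suffices to sample the right-hand interval, where sign(x) = 1.
    Returns the coefficients of x, x^3, x^5, ...
    """
    xs = [eps + (1 - eps) * i / (samples - 1) for i in range(samples)]
    m = degree // 2 + 1
    rows = [[x ** (2 * j + 1) for j in range(m)] for x in xs]
    # normal equations A^T A c = A^T b, with b = 1 on the sampled points
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    atb = [sum(r[i] for r in rows) for i in range(m)]
    # tiny Gaussian elimination (the Gram matrix is positive definite)
    for i in range(m):
        for j in range(i + 1, m):
            f = ata[j][i] / ata[i][i]
            ata[j] = [a - f * b for a, b in zip(ata[j], ata[i])]
            atb[j] -= f * atb[i]
    coeffs = [0.0] * m
    for i in reversed(range(m)):
        s = sum(ata[i][j] * coeffs[j] for j in range(i + 1, m))
        coeffs[i] = (atb[i] - s) / ata[i][i]
    return coeffs

def eval_odd(coeffs, x):
    """Evaluate c[0]*x + c[1]*x^3 + c[2]*x^5 + ..."""
    return sum(c * x ** (2 * j + 1) for j, c in enumerate(coeffs))
```

Even at degree 5 the fit lands near $+1$ on $[\varepsilon, 1]$ and, by oddness, near $-1$ on $[-1, -\varepsilon]$; shrinking $\varepsilon$ or raising the degree is where the real multi-interval Remez machinery earns its keep.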

IACR Cryptology ePrint Archive: High-Precision Bootstrapping of RNS-CKKS Homomorphic Encryption Using Optimal Minimax Polynomial Approximation and Inverse Sine Function