Finally completed the formalization of my Maclaurin-type inequality paper at github.com/teorth/symmetric_pr . Specifically, if $s_k$ denotes the $k$th symmetric mean of $n$ real numbers, the inequality
$$|s_l|^{1/l} \leq C \max\left( (l/k)^{1/2} |s_k|^{1/k},\ (l/(k+1))^{1/2} |s_{k+1}|^{1/(k+1)} \right)$$
is established for $1 \leq k < l \leq n$. In fact, as a byproduct of the formalization I now get the explicit value $C = 160 e^7$ for the constant $C$.
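
As a rough sketch of what the statement looks like in Lean (the names `symmMean` and `maclaurinType` below are illustrative stand-ins of my own, not the repository's actual declarations):

```lean
import Mathlib

open Finset BigOperators

-- Illustrative definition: the k-th symmetric mean of x₁, …, xₙ, i.e. the
-- k-th elementary symmetric polynomial normalized by a binomial coefficient.
noncomputable def symmMean (n k : ℕ) (x : Fin n → ℝ) : ℝ :=
  (∑ A in Finset.univ.powersetCard k, ∏ i in A, x i) / (n.choose k : ℝ)

-- The inequality with the explicit constant C = 160e⁷ (statement only here;
-- the repository of course supplies an actual proof rather than a `sorry`).
theorem maclaurinType (n k l : ℕ) (hk : 1 ≤ k) (hkl : k < l) (hln : l ≤ n)
    (x : Fin n → ℝ) :
    |symmMean n l x| ^ ((1 : ℝ) / l) ≤ 160 * Real.exp 7 *
      max ((l / k : ℝ) ^ ((1 : ℝ) / 2) * |symmMean n k x| ^ ((1 : ℝ) / k))
        ((l / (k + 1) : ℝ) ^ ((1 : ℝ) / 2) *
          |symmMean n (k + 1) x| ^ ((1 : ℝ) / (k + 1))) := by
  sorry
```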

Was surprisingly satisfied to get the blue "No goals" message in the Lean infoview.

In the end it took perhaps a hundred hours of effort, spread out over a month. It was about 20x slower to formalize this paper than it was to write it in LaTeX. Initially this was due to my own inexperience with Lean, though near the end I was hitting the limitations of existing Lean tools, as well as the speed of the Lean compiler (though this was due in part to some inefficiencies in my coding). However, Lean is a living language still in development, and several helpful people provided me with upgraded Lean tactics during this project (such as an improved `positivity` tactic that could handle finite sums and products, a `rify` tactic to easily convert natural number inequalities to real number ones, and a `rw_ineq` tactic to implement the rewriting of inequalities that I had previously noted was missing). These definitely helped expedite the formalization process, particularly in the last week or so. 1/3
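
As a toy illustration of the first two (my own examples, not code from the project):

```lean
import Mathlib

open BigOperators

-- The upgraded `positivity` can handle finite sums (and products) directly:
example (n : ℕ) (f : Fin n → ℝ) : 0 ≤ ∑ i, (f i) ^ 2 := by
  positivity

-- `rify` converts a natural number hypothesis into a real number one,
-- after which real-valued tactics such as `linarith` can finish:
example (a b : ℕ) (h : 3 * a + 1 ≤ b) : (3 * a : ℝ) ≤ b := by
  rify at h
  linarith
```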

Happily, apart from the one bug previously noted, no other errors (beyond very small typos) were detected in the paper. I found some mild simplifications to some steps in the proof as a consequence of formalization, but nothing particularly notable. (But this was a rather elementary paper, and I was not really expecting to glean deep mathematical insights from the formalization process.)

The Sapir-Whorf hypothesis en.wikipedia.org/wiki/Linguist is controversial for natural languages, but for the specific case of proof formalization languages I think the effect is real. The Lean syntax and library of lemmas and tools certainly pushed me to plan ahead and abstract away many small lemmas to a far greater extent than I would ordinarily do; long "stream of consciousness" type arguments, which I tend to favor personally, are possible in Lean, but often not the most efficient way to proceed. I have also started noticing myself mentally converting results I read in regular mathematical literature to a sort of "Lean pseudocode", though this is probably just a version of the "Tetris effect" en.wikipedia.org/wiki/Tetris_e and will likely dissipate now that my project has concluded.

A combination of automated tools, AI tools, and interactions with human Lean experts proved invaluable. GPT greatly sped up the initial learning of syntax; tools such as `exact?`, `apply?`, and Moogle moogle-morphlabs.vercel.app/ helped locate key lemmas; and Copilot had an often uncanny ability to autocomplete one or more lines of code at a time. But there were certainly a lot of "folklore" tips and tricks for which I had to go to the Lean Zulip to get answers. 2/3
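
As a toy illustration (not code from the project), `exact?` searches Mathlib for a lemma that closes the current goal and reports the term to use:

```lean
import Mathlib

-- Placing `exact?` here searches the library and suggests the closing
-- lemma; in this toy goal it finds `mul_pos`:
example (a b : ℝ) (ha : 0 < a) (hb : 0 < b) : 0 < a * b := by
  exact mul_pos ha hb  -- the term suggested by `exact?`
```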


@tao Dear Professor Tao, Thank you! Sorry for writing here, but I would like to clarify the steps
"We can thus take expectations and conclude that..."
and "...and hence by (27) we have...", just above Equation (44) in the logarithmically averaged Chowla paper.

Should we sum over $k_1, \ldots, k_P$ in the representation
$$\frac{1}{k_1 p_1} F_{p_1}(x,y) + \ldots + \frac{1}{k_P p_P} F_{p_P}(x,y)
= \frac{1}{PH} \sum_{y \in \mathbb{Z}/PH\mathbb{Z}} \left( \frac{1}{k_1 p_1} F_{p_1}(x,y) + \ldots + \frac{1}{k_P p_P} F_{p_P}(x,y) \right)
+ \frac{1}{k_1 p_1} E_{p_1}(x,y) + \ldots + \frac{1}{k_P p_P} E_{p_P}(x,y),$$
where $E_{p_1}(x,y), \ldots, E_{p_P}(x,y)$ are the errors?

Is $E_{p_1}(x,y) + \ldots + E_{p_P}(x,y) = O(\epsilon^2 H \log H)$ sufficient?

@consensus This part of the argument does not require the fact that $F$ has a representation $F = \sum_p F_p$. Abstractly, it is asserting that if one has a bound of the form $F_1(X) = G_1(X) + O(\varepsilon^2 Y)$ with probability $1 - o(1)$ for some functions $F_1, G_1 = O(Y)$ and some constant $Y$, then one automatically has $\mathbb{E} F_1(X) = \mathbb{E} G_1(X) + O(\varepsilon^2 Y)$, by averaging the errors as you say (but the primes $p$ play no role here, and I have no idea what you mean by $k_p$).
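
In more detail, here is a sketch of that averaging step (writing $A$ for the event that the pointwise bound holds): split
$$\mathbb{E} F_1(X) - \mathbb{E} G_1(X) = \mathbb{E}\big[(F_1(X) - G_1(X))\, 1_A\big] + \mathbb{E}\big[(F_1(X) - G_1(X))\, 1_{A^c}\big].$$
The first term is $O(\varepsilon^2 Y)$ since the bound holds on $A$, while the second is $O(Y) \cdot \mathbb{P}(A^c) = O(Y) \cdot o(1)$, which is eventually also $O(\varepsilon^2 Y)$.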


@tao Thanks! Shouldn't we factor the logarithmic weight $\frac{1}{n}$ as $\frac{1}{k_p p}$ to connect $\frac{1}{k_p p} F_p(x,y)$ to the original logarithmically averaged two-point correlation?

@consensus No. The connection to the original two-point correlation was already worked out in Section 2, where it was calculated that (2.4) implies (2.17). To complete the proof, it suffices to show that (2.17) leads to a contradiction; no further discussion of the original two-point correlation is needed.