
I've been wrestling with Cohen's forcing again, and I have questions. Here's what got me started on this topic again, and much of the following is inspired by it: Scott Aaronson: The Complete Idiot’s Guide to the [...] Continuum Hypothesis: Part 1 https://scottaaronson.com/blog/?p=4974

Due to a curious theorem by König, we can't have a model where the reals have size aleph_omega, but we could still make even larger models! https://en.wikipedia.org/wiki/K%C3%B6nig%27s_theorem_(set_theory)

While I could use some help to really get König's theorem, that's actually not where my greatest urge lies. What unsettles me most is that I haven't found any account of the reals Cohen's method adds to these smaller models, or of what's missing from them!

The problem seems to be that forcing is inherently nonconstructive. Let's amend that! First, some background. Cantor defined aleph null to be the smallest infinite cardinal number, and aleph one to be the next one. None in between.

So the continuum hypothesis isn't about adding stuff between aleph_0 and aleph_1, it's about pushing the reals further back! Another notable thing is that representing reals (say, between 0 and 1) as infinite bitstrings simultaneously represents subsets of the natural numbers.
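To make that correspondence concrete, here's a small sketch of my own (using a finite stand-in for an infinite subset, so it actually runs): a subset S of the naturals corresponds to the bitstring whose n-th bit says whether n is in S, which in turn is the binary expansion of a real in [0, 1].

```python
def subset_to_real(subset, precision=30):
    """The real in [0,1] coded by a set of naturals: bit n of its
    binary expansion is 1 exactly when n is in the set."""
    return sum(2.0 ** -(n + 1) for n in subset if n < precision)

def real_bits(x, count):
    """Read back the first `count` binary digits of x in [0, 1)."""
    bits = []
    for _ in range(count):
        x *= 2
        bit = int(x)
        bits.append(bit)
        x -= bit
    return bits

evens = {0, 2, 4, 6, 8, 10}     # a finite stand-in for the even numbers
x = subset_to_real(evens)       # 0.101010... in binary
print(real_bits(x, 6))          # -> [1, 0, 1, 0, 1, 0]
```

Going subset -> real -> bitstring recovers the membership pattern, which is the whole point: talking about reals in [0, 1] and talking about subsets of the naturals is the same thing.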

The next interesting big signpost is that these nonconstructive arguments are basically relative results! Consider this: if ZFC is consistent, it has a countable model (by the Löwenheim–Skolem theorem)! And that model includes the infinite iteration of the powerset operation over a first infinite set!

So we will have to settle on a model of ZFC to be able to name the desired missing reals! And there are many! There's a topos where everything is countable. There's one where everything is computable.

Wait, what's that? Well, a computable real is one for which there's an algorithm that outputs its digits in order. And a well-known theorem by Turing says there's no computer program that can tell us, in full generality, whether any given computer program will halt.

https://en.wikipedia.org/wiki/Halting_problem
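For a concrete example of a computable real, here's a sketch of my own: an algorithm that emits the decimal digits of sqrt(2), in order, by a digit-by-digit integer search (no floating point, so no rounding worries).

```python
def sqrt2_digits(n):
    """First n decimal digits of sqrt(2) after the point, by exact
    integer arithmetic: keep the largest integer `approx` with
    approx^2 <= 2 * scale^2, refining one digit at a time."""
    digits = []
    scale = 1
    approx = 1                  # integer part of sqrt(2)
    for _ in range(n):
        scale *= 10
        approx *= 10
        while (approx + 1) ** 2 <= 2 * scale * scale:
            approx += 1
        digits.append(approx % 10)
    return digits

print(sqrt2_digits(8))          # -> [4, 1, 4, 2, 1, 3, 5, 6]
```

Any real you can pin down like this, digit after digit by a terminating procedure per digit, is computable. The trouble starts with reals where no such procedure exists.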

But since we're mathematicians, we might propose an oracle which could tell us just that!

We could then incorporate that oracle into a machine to obtain a computer which is more powerful than any conventional computer. And this phenomenon repeats: since that machine could not decide whether programs written for it halt, we could propose a next-level oracle, ...

https://en.wikipedia.org/wiki/Turing_jump
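Here's a toy sketch of my own of the diagonal trick behind that repetition (the oracle is just a hypothetical callable, not an actual halting solver, which is exactly the point):

```python
def diagonal(oracle):
    """A program rigged to defeat its own oracle: it asks the oracle
    whether this very call halts, then does the opposite."""
    if oracle(diagonal, oracle):    # oracle claims this call halts...
        while True:                 # ...so loop forever instead
            pass
    return "halted"                 # oracle claimed it loops, so halt

# Whatever answer a claimed same-level halting oracle gives about
# `diagonal`, the program does the opposite, refuting it:
says_it_loops = lambda prog, arg: False
print(diagonal(says_it_loops))      # -> halted, contradicting the oracle
```

So an oracle for ordinary programs can never correctly answer questions about oracle-equipped programs, and we need a next-level oracle: that's the jump.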

But there's an unsettling aspect I haven't mentioned yet.

The representation of any of these new real numbers would still be a mere countable sequence of binary digits. Even the uncomputable ones, like Chaitin's constant! They do seem easy to grasp after all!

https://en.wikipedia.org/wiki/Chaitin%27s_constant

But are they? We'd always know that the next digit will be either zero or one. But we wouldn't know which one! That's a problem, because it means we cannot decide which of two such numbers is bigger or smaller, ...
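To see why comparison breaks down, here's a sketch of my own: a comparator that reads two reals as binary digit streams. It can only ever announce an answer once the streams differ; as long as the digits keep agreeing, it stays undecided, and no finite inspection budget can rule out that the two reals are equal.

```python
from itertools import chain, cycle, repeat

def compare(bits_a, bits_b, budget):
    """Compare two reals in [0,1] given as binary digit streams.
    Returns -1 or 1 once the streams differ, or None if the first
    `budget` digits all agree -- after finitely many digits, we
    still can't tell whether the two reals are equal."""
    for _ in range(budget):
        a, b = next(bits_a), next(bits_b)
        if a != b:
            return -1 if a < b else 1
    return None

# x = 0.010101... forever; y agrees for 100 digits, then switches to 1s.
x = lambda: cycle([0, 1])
y = lambda: chain([0, 1] * 50, repeat(1))
print(compare(x(), y(), budget=60))     # -> None: still undecided
print(compare(x(), y(), budget=120))    # -> -1: they differ at digit 101
```

With computable digit streams we at least get an answer whenever the numbers do differ. For the uncomputable reals above, we can't even produce the next digit on demand.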

Addendum: As a preview of my planned possibly more extensive blog post about this, here is a small timeline of set theory up to Cohen's result. In case you get into troubles explaining to your relatives what keeps you up at night:

One of today's most fruitful proof methods for impossibility results is diagonalization. It goes back to Georg Cantor, who managed to prove things about unending objects, like the sequence of counting numbers.
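A finite toy version of the trick, sketched by me (real diagonalization works on infinite sequences and infinite lists of them): flip the n-th bit of the n-th row, and the result differs from every row in the table.

```python
def diagonal_real(table):
    """Cantor's diagonal trick on a finite table of bit sequences:
    flipping the n-th bit of the n-th row yields a sequence that
    disagrees with every row somewhere."""
    return [1 - table[n][n] for n in range(len(table))]

rows = [[0, 0, 0, 0],
        [1, 1, 1, 1],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
d = diagonal_real(rows)
print(d)                                # -> [1, 0, 1, 1]
assert all(d != row for row in rows)    # differs from every listed row
```

Applied to an infinite list of infinite bitstrings, this produces a real missing from the list, which is how Cantor showed the reals are uncountable.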

His explorations culminated in his 1878 continuum hypothesis. The first and smallest infinity is just the right size to contain the counting numbers; the hypothesis asks whether the real numbers give the next infinity after that, or whether there are sets of intermediate size in between.

Around 1870, Cantor and Richard Dedekind attempted to capture the notion of collections of things: what we'd now call naive set theory. Their approach didn't avoid self-referential sets, which allows Bertrand Russell's paradox.

This was rectified by Abraham Fraenkel in a 1921 letter to Zermelo. A year later he, and independently Thoralf Skolem, further put Zermelo's idea of definiteness on more solid ground. This was the birth of Zermelo-Fraenkel set theory (ZF).

Decades later, in 1940, Kurt Gödel proved that assuming the continuum hypothesis does not lead to a contradiction in ZFC set theory! He did this by constructing a model of set theory where the continuum hypothesis holds.

In the intro to his (hopefully upcoming) post on forcing, Scott explains: we take the smallest possible model of the reals (Gödel's), which has size aleph one, and then extend it step by step. This can eventually lead to ever larger models.