Due to a curious theorem by König, we can't have a model of the reals of size aleph_omega, but we could still make even larger models! https://en.wikipedia.org/wiki/K%C3%B6nig%27s_theorem_(set_theory)
While I could use some help to really get König's theorem, that's actually not where my greatest urge lies. What unsettles me most is that I have not found any account of the reals Cohen's method adds to those smaller models of the reals, or of what's missing from them!
The problem seems to be that forcing is inherently nonconstructive. Let's amend that! First, some background. Cantor defined aleph null to be the smallest infinite cardinal number, and aleph one to be the next one. None in between.
So the continuum hypothesis isn't about adding stuff between aleph_0 and aleph_1, it's about pushing the reals further back! Another notable thing is that representing reals (say, between 0 and 1) as infinite bitstrings simultaneously represents subsets of the natural numbers.
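That correspondence can be made concrete with a tiny sketch (the function names here are mine, purely for illustration):

```python
# A real in [0, 1], written in binary, corresponds to a subset of the
# natural numbers: bit n after the point is 1 exactly when n is in the set.

def subset_to_bits(s, n):
    """First n binary digits of the real encoding the set s."""
    return [1 if k in s else 0 for k in range(n)]

def bits_to_subset(bits):
    """Recover the (finite part of the) subset from a digit list."""
    return {k for k, b in enumerate(bits) if b == 1}

evens = {0, 2, 4, 6}
bits = subset_to_bits(evens, 8)   # [1, 0, 1, 0, 1, 0, 1, 0]
assert bits_to_subset(bits) == evens
```

So questions about "how many reals are there" are simultaneously questions about "how many subsets of the naturals are there".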
The next interesting big sign post is that these nonconstructive arguments are basically relative results! Consider this: there is a countable model for ZFC! That includes the infinite iteration of the powerset operation over a first infinite set!
So we will have to settle on a model of ZFC to be able to name the desired missing reals! And there are many! There's a topos where everything is countable. There's one where everything is computable.
There's even one where everything is rational, and one where everything is finite! It seems reasonable to me that the range of things we can talk about should be limited by what's computable. So our smallest model of the reals should include computable reals, but no oracle computable ones!
Wait, what's that? Well, a computable real is one where there's an algorithm which outputs its digits in order. And a well-known theorem by Turing says there's no computer program that could tell us in full generality whether any given computer program will halt.
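Here's a minimal sketch of that definition. I use rationals because their digit algorithms are trivial to write down, but any real with such an algorithm (square roots, pi, e, ...) counts as computable:

```python
from fractions import Fraction
from itertools import islice

def binary_digits(q):
    """Generator yielding the binary digits (after the point) of a
    rational q in [0, 1). Doubling shifts the binary point right by one;
    the integer part that falls off is the next digit."""
    x = Fraction(q)
    while True:
        x *= 2
        if x >= 1:
            yield 1
            x -= 1
        else:
            yield 0

# 1/3 = 0.010101... in binary
print(list(islice(binary_digits(Fraction(1, 3)), 6)))  # [0, 1, 0, 1, 0, 1]
```

A real is computable precisely when some such generator exists for it; the point of the oracle discussion below is that for some reals, no ordinary program can play this role.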
But since we're mathematicians, we might propose an oracle which could tell us just that!
We could then incorporate that thing into a machine to obtain a computer which is more powerful than any conventional computer. And this phenomenon repeats: since that machine could not decide whether programs for it would halt, we could propose a next-level oracle, ...
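The diagonal argument behind Turing's theorem, and the reason it repeats at every oracle level, can be sketched like this (everything here is illustrative; `halts` is the hypothetical decider, not a real function):

```python
# Assume, for contradiction, a total function halts(prog, arg) that
# returns True iff prog(arg) would halt. From it we build a program
# that defeats it by doing the opposite of whatever it predicts.

def make_paradox(halts):
    def paradox(prog):
        if halts(prog, prog):   # ask the supposed decider about self-application
            while True:         # ...then do the opposite: loop forever
                pass
        return "halted"         # ...or halt, if it predicted looping
    return paradox

# Feeding paradox to itself: if halts(paradox, paradox) is True, paradox
# loops; if False, it halts. Contradiction either way, so no such halts
# can exist. Run the same argument on oracle machines and it defeats any
# fixed-level oracle too -- hence the endless hierarchy of stronger oracles.
```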
But there's an unsettling aspect I haven't mentioned yet.
The representation of any of these new real numbers would still be a mere countably many binary digits. Even the uncomputable ones like Chaitin's constant! They do seem easy to grasp after all!
But are they? We'd always know that the next digit would be either zero or one. But we wouldn't know which one! That's a problem, because it means we cannot decide which of two such numbers is bigger or smaller, ...
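The comparison problem can be made concrete. Given two reals as digit streams, a comparison loop returns at the first differing digit, but on two equal inputs it would run forever, so in practice all we ever get is "don't know yet" (a sketch with made-up names):

```python
from itertools import count, cycle

def compare(x, y, max_digits=None):
    """Compare two reals in [0, 1) given as binary digit generators.
    Returns -1 or 1 at the first differing digit. With equal inputs the
    loop never terminates, so an optional cutoff returns None instead."""
    for n in count():
        if max_digits is not None and n >= max_digits:
            return None  # inconclusive: all inspected digits agreed
        a, b = next(x), next(y)
        if a != b:
            return -1 if a < b else 1

# 0.0101... (= 1/3) vs 0.1000... (= 1/2): the very first digit decides.
assert compare(cycle([0, 1]), cycle([1, 0])) == -1
# Two copies of 1/3: after 100 digits we still only know "don't know yet".
assert compare(cycle([0, 1]), cycle([0, 1]), max_digits=100) is None
```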
Addendum: As a preview of my planned, possibly more extensive blog post about this, here is a small timeline of set theory up to Cohen's result. In case you get into trouble explaining to your relatives what keeps you up at night:
One of today's most fruitful proof methods for impossibility results is diagonalization. It goes back to Georg Cantor. He managed to prove things about unending objects, like the sequence of counting numbers.
Using a different method, he had noted that the power set operation will always lead to bigger objects, even when applied to infinite sets! Later, his diagonalization method allowed him to conclude that there are infinities beyond the cardinality of the counting numbers. At first, his contemporaries, colleagues, philosophers and theologians did not thank him for it. One reason might have been that Cantor expressed exhilaration to have found tangible properties of god himself!
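The diagonal trick itself fits in a few lines (an illustrative sketch, not Cantor's own notation):

```python
# Cantor's diagonal argument as code: given any enumeration of infinite
# bit sequences (enumeration(n) is the n-th sequence, itself a function
# from positions to bits), build a sequence that differs from the n-th
# one at position n -- so it can appear nowhere in the enumeration.

def diagonal(enumeration):
    return lambda n: 1 - enumeration(n)(n)  # flip the n-th bit of row n

# Example: the (rather boring) enumeration where row n is constantly n mod 2.
rows = lambda n: (lambda k: n % 2)
d = diagonal(rows)
# d differs from every row at the diagonal position:
assert all(d(n) != rows(n)(n) for n in range(100))
```

Since bit sequences encode reals, no enumeration (no countable list) can capture all of them: the reals are uncountable.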
His explorations culminated in his 1878 continuum hypothesis. The first and smallest infinity is just the right size to contain the counting numbers; the hypothesis asks whether the real numbers give the next infinity after that, or if there are sets of intermediate size in between.
Around 1870, Cantor and Richard Dedekind attempted to capture the notion of collections of things: what we'd call naive set theory. Their approach didn't avoid self-referential sets, which allows Bertrand Russell's paradox.
This was rectified by Abraham Fraenkel in a 1921 letter to Zermelo. A year later he, and independently Thoralf Skolem, further put Zermelo's idea of definiteness on more solid grounds. This was the birth of Zermelo-Fraenkel set theory (ZF).
Decades later, in 1940, Kurt Gödel proved that assuming the continuum hypothesis does not lead to a contradiction in ZFC set theory! He did this by constructing a model of set theory (the constructible universe) where the continuum hypothesis holds.