A friend is leaving Portland, and is letting me get some mileage out of his book collection until he arranges for them to join a library. He’ll likely outlive me so he imagines this will be his karma. I’ve already been enjoying earlier portions of his collection, including the extensive set of titles about or by Ludwig Wittgenstein, a corpus we’ve both studied.
I spent much of yesterday extracting two bookcases that had been squeezed between the wall and Carol’s bed, to give them more breathing room in what used to be the back office. Now I have the 2nd floor office to claim if H&R Block suggests filling out such a schedule: a home office may not double as sleeping quarters, a tough rule for folks in those tiny apartments.
Lots of these books are about maths, such as Philosophical Introduction to Set Theory by Stephen Pollard, the 2015 Dover edition of the original 1990 University of Notre Dame Press printing. Back in the day, I'd debate math ideas with other math heads, such as Dr. Wayne Bishop of California State University, at the Math Forum, as we called it, which started at Swarthmore and then moved to Drexel. The National Council of Teachers of Mathematics eventually eradicated that archive, however.
The book starts out drawing attention to the proliferation of mutually unintelligible subspecialties within maths, and the dangers this poses to a unificationist agenda. Pollard argues that only Set Theory has what it takes to keep Humpty Dumpty recognizably an egg (a whole), whereas Category Theory, an upstart grand unifier, hasn't yet proved itself, being the relative new kid on the block. But let's remember these words were penned sometime in the late 1980s.
A core claim in Cantorian set theory is that you cannot pair the natural numbers with the reals: too many reals fall through the cracks, as it were, per the "diagonal argument" (pages 8-9).
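The diagonal move is easy to sketch in code. Below is a finite-stage illustration of my own (not Pollard's): model each listed real as a digit string, then build a string guaranteed to differ from the nth entry in its nth digit. The listing here is just ten arbitrary strings for demo purposes.

```python
def diagonal_escapee(listed, width):
    """Build a digit string differing from the nth entry in its nth digit."""
    digits = []
    for n in range(width):
        d = int(listed[n][n])             # nth digit of nth listed "real"
        digits.append(str((d + 1) % 10))  # change it (9 wraps around to 0)
    return "".join(digits)

# Ten stand-ins for "listed reals" (digits after the decimal point)
listing = ["1415926535", "7182818284", "5772156649",
           "4142135623", "2360679774", "3025850929",
           "6931471805", "5830079226", "1731451766",
           "2626463419"]

escapee = diagonal_escapee(listing, 10)
# The escapee differs from every listed string in at least one position,
# so it cannot appear anywhere in the listing.
assert all(escapee[n] != listing[n][n] for n in range(10))
assert escapee not in listing
```

However long you make the listing, the same trick manufactures a digit string the listing missed, which is the whole point.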
The targeted reals are allowed to have infinitely many digits after the decimal point, whereas that avenue is barred on the left side: natural numbers with infinitely many digits are not actually natural numbers, by definition.
Suppose one assigns a natural number to each real number produced. Does the pairing process ever fail in that case? One rule not to break is uniqueness: one does not get to assign the same natural number to more than one real, if the operation is to count as perpetual pairing.
Once a natural number is spoken for, it retires. So what’s the problem?
Is the pool of eligible brides thereby diminished, such that I'll run out of Ns for my grooms in R? If I need more digits (given however many Rs you give me), I'll add them. I'll never repeat. Pairing works, no?
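Here's a sketch of that matchmaking process with the uniqueness rule enforced (Matchmaker and next_bride are my own hypothetical names, not anything from the book). The intuition the sketch supports: the process never stalls on any stream of reals you actually feed it. Cantor's counter is that no single stream delivers all of R in the first place.

```python
class Matchmaker:
    """Pair each incoming real (as a string) with a fresh natural number."""

    def __init__(self):
        self.pairings = {}    # natural -> real
        self.next_bride = 0   # naturals retire once spoken for

    def pair(self, real):
        n = self.next_bride
        assert n not in self.pairings  # uniqueness rule: no repeats, ever
        self.pairings[n] = real
        self.next_bride += 1
        return n

mm = Matchmaker()
for r in ["0.5", "0.333...", "3.14159..."]:
    mm.pair(r)

# Three reals in, three distinct naturals spoken for, none reused.
assert list(mm.pairings) == [0, 1, 2]
```

Every real that arrives gets a groom; the diagonal argument's claim is about the reals that never arrive.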
The reason you can’t pair N and R simply by removing the decimal point is precisely that members of N are not allowed to have infinitely many digits. We wouldn’t be able to sequence them all if they did, even if subsets could still be ordered.
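One way to see the snag concretely: stripping the decimal point only yields a finished natural number when the expansion terminates. Long division tells you which fractions are which, since a repeated remainder means the digits cycle forever. A minimal sketch, using a helper of my own devising:

```python
def expansion_repeats(num, den):
    """True if num/den has a non-terminating (repeating) decimal expansion.

    Standard long division: track remainders; a repeat means the
    digit sequence cycles forever, a zero means it terminates.
    """
    seen = set()
    r = num % den
    while r != 0:
        if r in seen:
            return True   # remainder cycle => digits go on forever
        seen.add(r)
        r = (r * 10) % den
    return False          # remainder hit zero => terminating decimal

assert expansion_repeats(1, 7) is True    # 0.142857142857...
assert expansion_repeats(1, 8) is False   # 0.125 terminates
```

For 1/8 you can strip the point and get the natural number 125; for 1/7 there is no last digit to wait for, so no natural number ever materializes.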
Even if N were to include infinite-digit members, like the reals do, we could still use them for counting sheep, as the finite-digit Ns would still be in the N set as well.
But then allowing infinite-digit Ns into contemporary maths would turn it into a wasteland (teenage or otherwise), set-theoretically speaking, so let's just keep things the way they are, shall we?
The computer science mindset has made long digit sequences into strings, meaning we don't suffer headaches or vertigo in thinking how divergent (vs convergent) a number like ...3804951234... (random forever in both directions) must be. Digits are digits, with or without a decimal point. What impossibility are we talking about?
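That string-based attitude is easy to act out in Python: a generator hands you random digits lazily, forever, and nothing about them need converge to anything. This is my own toy illustration; the two-way-infinite ...3804951234... would just be two such generators, one per direction.

```python
import random
from itertools import islice

def random_digits(seed=0):
    """Lazily yield random digits forever -- a string, not a converging value."""
    rng = random.Random(seed)
    while True:
        yield str(rng.randrange(10))

# We only ever look at finite windows of the endless string.
window = "".join(islice(random_digits(), 20))
assert len(window) == 20 and window.isdigit()
```

Laziness is the trick: the "infinite string" is a promise of more digits on demand, never a completed astronomical total.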
Random infinite strings of digits needn't "stand for" anything, e.g. we're not forced into picturing astronomically huge collections corresponding to the output of these noisy, chaotic digit generators. Whereas these same random digits to the right of a decimal point connote ever more microscopic fine-tuning and precision (convergence), which seems a lot more "believable" (the mental picture is more obvious).