Generally, what Wikipedia has on all of these buzzwords above isn’t too shabby either. For building intuition it’s always great to have alternative viewpoints available.
And that's what I did for the rest, too. Suggestions, anyone?
I have Adams' book on his e-invariant lying around here somewhere, but haven't read it yet:
But I've read some on the nLab here:
I have no good book for de Rham cohomology, but if you have basic knowledge of homology (like, after reading just a little of Hatcher’s book), and know what differential forms are, say from watching parts 3, 4, and 5 of Keenan Crane’s lectures on differential geometry here:
then you might be able to cope with what Wikipedia has on de Rham cohomology.
It is a textbook, which means it’s deep enough to be worth putting some work into. It also covers quite a bit more than simplicial homology. Just so you don’t feel overwhelmed: it’s okay to read only part of it, and it’s worth giving it multiple passes!
I also enjoyed John Milnor’s classic “Morse Theory”, for it gave a different perspective on homology. You can download it for free here:
Also a textbook, but it states the basic intuitions clearly.
Oh, right: references! Always add some further reading:
I liked Allen Hatcher’s nice, free textbook “Algebraic Topology”, which explains how to compute homology for simplicial complexes: basically simplices of various dimensions glued together. You can find it here:
That all this stuff is so interconnected and generalizable hints at something deep about space and mathematics. HoTT / homotopy type theory shows that homotopy relates to computation and the formation of mathematical structures in general in a deep way.
One can compute the cohomology of a cohomology, which yields number carpets. And this is a nice starting point for the Adams spectral sequence, a method to compute homotopy groups with cohomology. Homotopy is that other, older method to study holes in manifolds: it gives groups which describe the ways in which one can wrap loops or spheres around holes of various dimensions. Homology was invented because homotopy groups are so incredibly difficult to compute. They still are; we’re not there yet!
Extra: Homology is so cool! It comes in many forms and is related to other cool stuff. There’s de Rham cohomology, based on differential forms; Hodge theory, for differential equations; Morse theory, which computes homology by looking at movies of slices of manifolds; and many more variants. Homology is basically a generalization of the Euler characteristic, and like it, relates to the Betti numbers. And it has relations to all sorts of other stuff.
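As a tiny illustrative sketch of that last point (my own example, not from the text): for the hollow triangle, which is topologically a circle, the Euler characteristic computed by counting simplices agrees with the alternating sum of the Betti numbers.

```python
# Hollow triangle (topologically a circle): 3 vertices, 3 edges, 0 faces.
V, E, F = 3, 3, 0
chi = V - E + F        # Euler characteristic

# Betti numbers of a circle: one connected component, one 1-dimensional hole.
b0, b1 = 1, 1

# The Euler characteristic is the alternating sum of the Betti numbers:
assert chi == b0 - b1
print(chi)  # 0
```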
Well, actually, it’s a bit annoying that \(\delta\) returns lots of (n-1)-simplices for any n-simplex: it is a multifunction. If we instead map the other way around, from the boundary to the inside, we simply get a function! And that’s called cohomology, and it’s what the cool kids do all the time!
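In matrix terms, “mapping the other way around” is just transposing. A minimal sketch (assuming numpy and my own orientation convention for a single filled triangle): the coboundary operators are the transposes of the boundary operators, and squaring to zero survives transposition.

```python
import numpy as np

# Boundary matrices of a filled triangle (orientation convention is mine):
# d1 sends edges to vertices, d2 sends the face to its edges.
d1 = np.array([[-1, 0, -1], [1, -1, 0], [0, 1, 1]])
d2 = np.array([[1], [1], [-1]])

# The coboundary operators map the other way: vertices -> edges -> face.
delta0 = d1.T
delta1 = d2.T

# Since (d1 d2)^T = d2^T d1^T, the coboundary also squares to zero:
print((delta1 @ delta0 == 0).all())  # True
```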
The fun thing with homology is that you can interpret \(\delta\) as a linear operator sending all points in the interior of a simplex to its boundary. You can then write down our fundamental insight as the equation \(\delta \delta x = 0\) and solve it! So there’s a direct way to get from our topological description to doing the same with linear spaces. Homology generalizes like crazy!
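Here is a minimal sketch of that in numpy (the simplex labels and orientation signs are my own convention, not from the text): the filled triangle’s boundary operators as matrices, with \(\delta \delta x = 0\) checked as plain matrix algebra.

```python
import numpy as np

# A single filled triangle: vertices v0, v1, v2; oriented edges
# e0 = [v0,v1], e1 = [v1,v2], e2 = [v0,v2]; one face f = [v0,v1,v2].

# Boundary of each edge, as a matrix (columns: edges, rows: vertices).
d1 = np.array([
    [-1,  0, -1],   # v0
    [ 1, -1,  0],   # v1
    [ 0,  1,  1],   # v2
])

# Boundary of the face: its three edges, with signs from the orientation.
d2 = np.array([
    [ 1],   # e0
    [ 1],   # e1
    [-1],   # e2
])

# The fundamental insight as matrix algebra: the boundary of a boundary vanishes.
print((d1 @ d2 == 0).all())  # True

# Bonus: the Betti numbers fall out of the ranks of these matrices.
n_vertices, n_edges = 3, 3
b0 = n_vertices - np.linalg.matrix_rank(d1)                           # 1 component
b1 = (n_edges - np.linalg.matrix_rank(d1)) - np.linalg.matrix_rank(d2)  # no 1-d holes
print(b0, b1)  # 1 0
```

The rank computation is exactly “solving \(\delta \delta x = 0\)”: cycles are the kernel of one boundary matrix, boundaries the image of the next, and the Betti numbers are the dimensions left over.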
The fundamental insight here is that “the boundary of a boundary is empty”. For example, the boundary of an empty triangle (or any loop) is the empty set. That’s because the boundary of a line (a 1-simplex) is a disconnected pair of points (0-simplices), and when we glue simplices along matching boundaries of the same dimension, those shared boundaries cancel. By forming a loop we end up with no boundary at all.
In homology, one defines a boundary operator \(\delta\) which associates to each n-dimensional shape a number of (n-1)-dimensional shapes. If I grab a filled-in triangle (a 2-simplex), then \(\delta\) will give me the complex consisting of its three edges (1-simplices), glued together into an empty triangle.
The following ultra-short intro to (co)homology is a reaction to this recent article which appeared in Quanta Magazine here:
Thanks to John Wehrle for sharing, and to Andreas Geisler for making me think about it:
“It has no generally accepted definition.” – also from the Wikipedia page on mathematics.
Which, I think, is very beautiful.
In contrast to these, experimental sciences are driven by models, with the intent to refute them. Yes, there are also experimental branches of the humanities.
Mathematics can attain irrefutable truth, a level of certainty ridiculous for other sciences. However, it is always only relative to assumptions (axioms). But that's okay, because absolute truth seems to be a mirage anyway. Apparently, the totality of truth cannot be defined mathematically. Hence neither can mathematics itself.
Theoretical sciences and hypothesis building are branches of applied mathematics. So it's the other way around!
Much of the humanities is information collecting, similar to astronomy, but with some modelling.
Philosophy is literature: a form of art while being written, and belonging to information archaeology thereafter. There, modelling is mostly used to make catalogs.
> What is mathematics?
Good question! I have no idea!
I found this nice quote in Wikipedia's article on Mathematics:
“Illustrious scholars have debated this matter until they were blue in the face, and yet no consensus has been reached about whether mathematics is a natural science, a branch of the humanities, or an art form.” – from Tobies, Renate & Helmut Neunzert (2012). Iris Runge: A Life at the Crossroads of Mathematics, Science, and Industry, p. 9.
I have only polemical answers:
So, mathematics is just very weird regarding deduction, and simply unlike any of the other sciences.
However, there's another point where I would disagree with the author: it seems rather obvious that intuitionistic methods are also useful for proof theory; in fact, it seems downright wrong to insist on the law of excluded middle there.
To which he prompted:
On top of that, we get proof assistants and other tools to advance mathematics itself. There are new foundations like category theory or model theory, which simply weren't conceivable without modern methods. Without category theory, for example, developing algebraic geometry (the study of polynomials, a rather classical-seeming subject) to the level we know today seems completely hopeless.
The axiomatic core is indispensable, and not just, as the article says, to eradicate mistakes (which were very common in earlier times), to make unlikely results accessible at all, and, basically, to free modelling from presumptions, which is what makes it so universally applicable. It is also highly practical as the foundation of modern programming languages and for the verification of complex information-processing systems.
> How about we call [the core of modern mathematics] "Impractical Math"?
Au contraire, it is not at all impractical, and it really is indispensable! Unfortunately, the article itself describes core mathematics as impractical, but I don't think that is the case!
I like to learn and popularize higher maths. I'm not a pro, but I can stomach corrections!