I just read that nps.gov erased Harriet Tubman from a page about the Underground Railroad. I want not only to #archive copies of websites, but to use #cryptography to prove that a given snapshot really was the content fetchable from the live page at a date in the past. #provenance Why aren't there FAQs about how to do this already?

I've just seen sciop.net, which appears to use both #BitTorrent and #RDF to advantage, and #ArchiveTeam appears to be doing distributed fetching at scale with their ArchiveTeam Warrior thingy.

But why should anyone trust that I don't tamper with the pages I fetch and archive? Or what if I just want free backups, so I pass off my photo archive as part of some #memoryhole'd DEI policies? And where do #ipfs, #dat, and #hypercore fit in?
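For what it's worth, the usual answer to the tamper question is to separate two claims: (1) "these exact bytes existed by date D", which a third-party witness can attest (e.g. an RFC 3161 timestamp authority, or Bitcoin via OpenTimestamps, signing a hash you submit), and (2) "these bytes were really what the URL served", which needs independent fetchers agreeing, not cryptography from a single archiver. Here's a minimal sketch of the first half; all names are illustrative, and the body is a placeholder, not a real fetch:

```python
# Sketch: bind URL + claimed fetch time + captured bytes into one digest,
# then submit ONLY the digest to an independent timestamping witness
# (RFC 3161 TSA, OpenTimestamps, etc.). The witness's signed timestamp
# proves the record existed by that date; it does NOT prove the archiver
# fetched honestly -- that needs multiple independent fetchers.
import hashlib

def content_digest(url: str, fetched_at: str, body: bytes) -> str:
    """Digest binding the URL and claimed fetch time to the exact bytes."""
    record = url.encode() + b"\n" + fetched_at.encode() + b"\n" + body
    return hashlib.sha256(record).hexdigest()

digest = content_digest(
    "https://www.nps.gov/subjects/undergroundrailroad/",  # illustrative URL
    "2025-02-20T00:00:00Z",                               # claimed fetch time
    b"<html>...captured page bytes...</html>",            # placeholder body
)
print(digest)  # this hex string is what gets timestamped, not the full page
```

Note the limitation: the digest binds the archiver's *claim* about the URL and time; only the external witness makes the date trustworthy, and only corroborating fetches by others make the content trustworthy. That's also the natural defense against the free-backup abuse: other fetchers won't reproduce a digest for bytes the site never served.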