I saw this post and I was curious what was out there.
https://neuromatch.social/@jonny/113444325077647843
I'd like to put my lab servers to work archiving US federal data that's likely to get pulled - climate and biomed data seem most likely. The most obvious strategy to me seems like setting up mirror torrents on academictorrents. Anyone compiling a list of at-risk data yet?
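A minimal sketch of the torrent side of that, assuming the dataset sits in a local directory and using the `torf` package; the directory name is made up and the announce URL is the one Academic Torrents publishes, so double-check both before seeding:

```python
# pip install torf
from torf import Torrent

# Directory (or single file) holding the dataset you want to mirror.
torrent = Torrent(
    path="noaa_climate_data",
    trackers=["https://academictorrents.com/announce.php"],
    comment="Mirror of at-risk US federal climate data",
)
torrent.generate()                          # hash the pieces (slow for big datasets)
torrent.write("noaa_climate_data.torrent")  # upload this .torrent to academictorrents.com, then keep seeding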
I don’t self-host it; I just use archive.org. That makes it available to others too.
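For reference, a minimal sketch of what that looks like with the official `internetarchive` package (run `ia configure` once to store your keys); the item identifier, file names, and metadata here are made up for illustration:

```python
# pip install internetarchive
from internetarchive import upload

upload(
    "example-federal-climate-dataset-2024",   # item identifier, must be globally unique
    files=["dataset.csv", "README.txt"],      # local files to push into the item
    metadata={
        "title": "Example federal climate dataset mirror",
        "mediatype": "data",
    },
)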
It’s a single point of failure though.
In that they’re a single organization, yes, but I’m a single person with far fewer resources. Non-availability is a much higher risk for anything I host personally.
There was the attack on the Internet Archive recently. Are there any good options out there to help mirror some of the data or otherwise provide redundancy?
Yes. This isn’t something you want your own machines to be doing if something else is already doing it.
Your argument is that a single backup is sufficient? I disagree, and I think most people in the selfhosted and datahoarder communities would too.
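If you want a second copy without hosting anything original, one option is just pulling the archive.org item down to your own storage. A minimal sketch with the same `internetarchive` package; the identifier and destination path are hypothetical:

```python
# pip install internetarchive
from internetarchive import download

# Mirror an existing archive.org item locally so IA isn't the only copy.
download(
    "example-federal-climate-dataset-2024",
    destdir="/srv/mirrors",   # local mirror directory
    verbose=True,
)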