I saw this post and I was curious what was out there.
https://neuromatch.social/@jonny/113444325077647843
I'd like to put my lab servers to work archiving US federal data that's likely to get pulled; climate and biomed data seem most likely. The most obvious strategy to me is setting up mirror torrents on academictorrents. Anyone compiling a list of at-risk data yet?
Yes. This isn’t something you want your own machines to be doing if something else is already doing it.
Your argument is that a single backup is sufficient? I disagree, and I think most of the selfhosted and datahoarder communities would too.
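For anyone weighing the mirror-torrent idea above: a .torrent file is just SHA-1 hashes of fixed-size pieces of the data plus bencoded metadata, so any mirror that hashes identically produces an identical infohash and swarms merge automatically. A minimal stdlib-only sketch of building one (the tracker URL is illustrative; academictorrents has its own upload workflow, and in practice you'd use a tool like mktorrent or transmission-create):

```python
import hashlib

def bencode(obj):
    # Bencode per the BitTorrent spec: ints, byte strings, lists,
    # and dicts with keys in sorted order.
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, str):
        obj = obj.encode()
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        items = sorted(
            (k.encode() if isinstance(k, str) else k, v)
            for k, v in obj.items()
        )
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(f"cannot bencode {type(obj)}")

def make_torrent(data: bytes, name: str, tracker: str,
                 piece_len: int = 262144) -> bytes:
    # Concatenated SHA-1 digests, one 20-byte hash per piece.
    pieces = b"".join(
        hashlib.sha1(data[i:i + piece_len]).digest()
        for i in range(0, len(data), piece_len)
    )
    meta = {
        "announce": tracker,  # illustrative tracker URL, not a real endpoint
        "info": {
            "name": name,
            "length": len(data),
            "piece length": piece_len,
            "pieces": pieces,
        },
    }
    return bencode(meta)

torrent = make_torrent(b"hello world" * 1000, "sample.bin",
                       "http://academictorrents.com/announce")
```

Because the infohash is the SHA-1 of the bencoded `info` dict, two labs mirroring the same files with the same piece length end up seeding the same torrent rather than fragmenting into separate single-seed copies, which is exactly the redundancy argument being made here.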