• pachrist@lemmy.world · 11 months ago

    Just some advice to anyone who finds themselves in this specific situation, since I found myself in almost the exact same situation:

    If you really, really want to keep the data, and you can afford to spend the money (big if), move it to AWS. I had to move almost 4.5PB of data out of Google Drive around Christmas of last year. I spun up 60 EC2 instances, set up rclone on each one, and created a Google account for each instance. Google caps downloads per account at 10TB per day, but the EC2 instances I used were rate limited to 60MBps (roughly 5TB per day each), so I never hit the cap. I gave each EC2 instance a segment of the data, split by file size; there's a rough sketch of that step below. After transferring to AWS, verifying the data synced properly, and building a database to find files, I dropped it all to Glacier Deep Archive. I averaged just over 3.62GB/s for 14 days straight to move everything. Using a similar method, this poor guy's data could be moved in a few hours, but it costs: a couple thousand dollars at least.
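    For anyone curious what the sharding step looks like, here's a minimal sketch in Python. It lists every file in a Google Drive remote with rclone, greedily bins them into roughly equal-sized shards, and writes one file list per instance for rclone's `--files-from` flag. The remote name (`gdrive:`), the S3 bucket, and the 60-instance count are placeholders, not details from my actual setup; adapt to your own remotes.

    ```python
    #!/usr/bin/env python3
    """Split a Google Drive remote into roughly equal-sized shards,
    one per EC2 instance, for use with rclone's --files-from flag.

    Assumptions: rclone is installed and a remote named "gdrive:" is
    configured; 60 instances, as in the comment above."""
    import heapq
    import json
    import subprocess

    NUM_INSTANCES = 60          # one shard per EC2 instance (placeholder)
    SOURCE_REMOTE = "gdrive:"   # hypothetical rclone remote for Google Drive

    # Recursively list every file with its size. `rclone lsjson -R` emits a
    # JSON array of objects with "Path", "Size", and "IsDir" fields.
    listing = json.loads(
        subprocess.check_output(["rclone", "lsjson", "-R", SOURCE_REMOTE])
    )
    files = [(f["Size"], f["Path"]) for f in listing if not f.get("IsDir")]

    # Greedy bin packing: drop the next-largest file into the currently
    # lightest shard so all shards finish at roughly the same time.
    files.sort(reverse=True)
    shards = [(0, i, []) for i in range(NUM_INSTANCES)]  # (bytes, shard_id, paths)
    heapq.heapify(shards)
    for size, path in files:
        total, shard_id, paths = heapq.heappop(shards)
        paths.append(path)
        heapq.heappush(shards, (total + size, shard_id, paths))

    # Write one list per instance. Each instance then runs something like:
    #   rclone copy gdrive: s3:my-archive-bucket --files-from shard_07.txt \
    #       --transfers 16 --checkers 32
    # and verifies with `rclone check` using the same --files-from list.
    for total, shard_id, paths in sorted(shards, key=lambda s: s[1]):
        with open(f"shard_{shard_id:02d}.txt", "w") as fh:
            fh.write("\n".join(paths) + "\n")
        print(f"shard {shard_id:02d}: {len(paths)} files, {total / 1e12:.2f} TB")
    ```

    Once everything lands in S3 and checks out, you can push it to Glacier Deep Archive either with a bucket lifecycle rule or by copying with `--storage-class DEEP_ARCHIVE`; either way works, the lifecycle rule is just less babysitting.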

    Bad practice is bad practice, but you can get away with it for a while, just not forever. If you're in this situation, whether you made it yourself or you're cleaning up someone else's mess, you're going to have to spend money to fix it. If you're not in this situation, be kind, but thank god you don't have to deal with it.