Had this thought the other day and tbh it’s horrifying to think about the implications of one, or God forbid all, of them going down.
Stack Overflow too, but that only applies to nerds haha

  • grue@lemmy.world · +63/−1 · 19 days ago

    One of those is not a non-profit foundation, and that’s a Problem.

      • sqw@lemmy.sdf.org · +1 · 18 days ago

        I was thinking about how much human effort has gone into making instructional videos on how to do things, and how all of that content exists almost solely in the hands of Alphabet Corp.

  • Tedesche@lemmy.world · +52/−1 · 19 days ago

    I think it’s a bit ironic that Wikipedia hasn’t succumbed to the modern era of misinformation the way other information sources have, particularly given all the warnings about it in the past. Not saying those warnings weren’t warranted, just that the way things have played out runs counter to those expectations.

    • Mwa@lemm.ee · +4 · 18 days ago

      There are people who watch the most popular articles, so it’s not really misinformation.

  • Beacon@fedia.io · +46/−1 · 19 days ago

    Wikipedia essentially can’t be destroyed without a global catastrophe that would mean we have way worse problems. Wikipedia is downloadable. Meaning the ENTIRE Wikipedia. And so there are many copies of it stored all around the planet.

    If you have an extra 150 GB of space available, you can download a personal copy for yourself:

    https://www.howtogeek.com/260023/how-to-download-wikipedia-for-offline-at-your-fingertips-reading/
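
    If anyone wants to script it, here’s a minimal sketch in Python (assuming the requests library is installed; the dump URL below is the usual “latest pages-articles” dump on dumps.wikimedia.org, and a Kiwix .zim file from the guide above works the same way, just with a different URL):

        import requests

        # Illustrative URL: the latest English Wikipedia article dump (wikitext only, bzip2-compressed).
        # Swap in a Kiwix .zim URL instead if you want the reader-friendly offline format.
        DUMP_URL = "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"
        OUT_FILE = "enwiki-latest-pages-articles.xml.bz2"

        # Stream the download so the multi-GB file never has to fit in memory.
        with requests.get(DUMP_URL, stream=True, timeout=60) as resp:
            resp.raise_for_status()
            with open(OUT_FILE, "wb") as f:
                for chunk in resp.iter_content(chunk_size=1024 * 1024):
                    f.write(chunk)

        print("Saved", OUT_FILE)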

  • The Snark Urge@lemmy.world · +24/−1 · 19 days ago

    Alexandria was important in its time, but in terms of the volume and quality of information we keep on Wikipedia alone, it is a mosquito in the Taj Mahal.

  • sit@lemmy.dbzer0.com · +18 · 18 days ago

    You can’t rely on YouTube videos staying up over time.

    Better download whatever you might want to look up again.
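
    As a rough sketch (assuming the third-party yt-dlp package; the URL is a placeholder, not a real video):

        # pip install yt-dlp
        from yt_dlp import YoutubeDL

        # Placeholder URL: swap in whatever you actually want to keep a copy of.
        urls = ["https://www.youtube.com/watch?v=EXAMPLE"]

        options = {
            "format": "best",                         # best pre-merged file, so no ffmpeg merge step
            "outtmpl": "%(title)s [%(id)s].%(ext)s",  # readable filename that keeps the video id
        }

        with YoutubeDL(options) as ydl:
            ydl.download(urls)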

      • Ellen_musk_ox@lemm.ee · +4/−1 · 18 days ago

        I think we also overestimate the value of what would have been at Alexandria.

        Considering everything would have been hand-copied/transcribed back then, and how expensive that would have been, the selection bias would be massive.

        I doubt it could compare to Wikipedia.

    • TriflingToad@sh.itjust.worksOP · +8 · 19 days ago

      Wikibooks is cool, I had no idea that existed. I’m sure next time I get curious at 3am I’ll end up there reading about the history of ‘vectors’ or some other random stuff lol

  • TriflingToad@sh.itjust.worksOP · +13 · 19 days ago

    There was a video I saw (I think it was Hank or John Green) where they talked about the implications of Twitter being deleted at the start of the Elon takeover. They pulled out a joke book they’d bought of “1000 twitter posts” and said it would be the only recorded proof they (personally) had of what Twitter was.

    It’s terrifying to think just how much information is sitting in the hands of companies that don’t care, or on old hard drives about to give out for lack of funding. I wish there were a way to back up a random part of that information automatically, like an “I’ll give you a terabyte of backup, make the most of it” system that automatically chooses whatever isn’t already backed up somewhere (something like the sketch at the end of this comment).

    Also add Reddit to the list. The amount of times I’ve searched a question, waded through 2024 website crap, then gone back to the search, added “site:reddit” in DuckDuckGo, and got an answer instantly.
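
    For the “terabyte of backup” idea above, I picture something like this greedy selection (everything here is hypothetical: the catalog, the replica counts, all of it):

        import random

        # Hypothetical catalog from some coordination service that tracks how many
        # volunteers already hold each item; the entries are made up for illustration.
        catalog = [
            {"id": "video-123", "size_gb": 2.0, "replicas": 1},
            {"id": "thread-456", "size_gb": 0.1, "replicas": 0},
            {"id": "wiki-dump-789", "size_gb": 150.0, "replicas": 40},
        ]

        def pick_backups(items, budget_gb):
            # Greedy: least-replicated items first, shuffled so volunteers
            # don't all grab exactly the same things.
            random.shuffle(items)
            chosen = []
            for item in sorted(items, key=lambda x: x["replicas"]):
                if item["size_gb"] <= budget_gb:
                    chosen.append(item["id"])
                    budget_gb -= item["size_gb"]
            return chosen

        print(pick_backups(catalog, budget_gb=1000))  # "I'll give you a terabyte, make the most of it"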

    • 9point6@lemmy.world · +7 · edited · 18 days ago

      The problem with YouTube is the sheer amount of storage required. Just going by the 10 Exabyte figure mentioned elsewhere in the thread, there are about 25,000 fediverse servers across all services in total IIRC, so even if you evenly split that 10EB across all of them, they would still need 400TB each just to cover what we have today.

      Famously YouTube needs a petabyte of fresh storage every day, so each of those servers would need to be able to accept an additional 40GB a day.

      Realistically though, any kind of decentralised archive wouldn’t start with 25,000 servers, so the per-server requirements would be significantly higher than that.
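
      The back-of-envelope arithmetic behind those per-server numbers (using the 10 EB and 1 PB/day figures quoted above, in decimal units):

          total_eb = 10        # claimed size of YouTube today, in exabytes
          daily_pb = 1         # claimed fresh uploads per day, in petabytes
          servers = 25_000     # rough count of existing fediverse servers

          backlog_tb_per_server = total_eb * 1_000_000 / servers  # 1 EB = 1,000,000 TB -> 400.0 TB each
          daily_gb_per_server = daily_pb * 1_000_000 / servers    # 1 PB = 1,000,000 GB -> 40.0 GB each per day

          print(backlog_tb_per_server, daily_gb_per_server)       # 400.0 40.0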

      • coronach@lemmy.sdf.org · +3 · 18 days ago

        I know it’s totally subjective, but I wonder how much “non-trash” YouTube is uploaded each day?

      • SubArcticTundra@lemmy.ml · +1 · 18 days ago

        Hmm, good point. If this were to be anywhere near realistic, there would need to be a way to triage videos by whether they’re actually worth archiving.

  • Possibly linux@lemmy.zip · +8 · 18 days ago

    I wish that the Internet Archive would focus on allowing the public to store data. Distribute the network over the world.

    • csm10495@sh.itjust.works · +4 · 18 days ago

      In theory this could be true. In practice, data would be ripe for poisoning. It’s like the idea of turning every router into a last mile CDN with a 20TB hard drive.

      Then you have to think about security and making sure the data can’t be changed from what was originally given. Idk. I’m sure something is possible, but without a real ‘oomph’ nothing big happens.

        • csm10495@sh.itjust.works · +1 · 17 days ago

          Hashed by whom? Who has the source of truth for the hashes? How would you prevent it from being poisoned? … or are you saying a non-distributed (centralized) hash store?

          If centralized: you have a similar problem to IA today. If not centralized: How would you prevent poisoning? If enough distributed nodes say different things, the truth can be lost.

          • Possibly linux@lemmy.zip · +1 · 17 days ago

            This is a topic that is pretty well tested. Basically, the data is validated when it is received.

            For instance, in IPFS data is tracked by its hash: you request something by a CID, which is just a hash.

            There are other distributed networks, and they all have their own ways of protecting against attacks. Usually an attack requires a huge amount of resources.
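
            As a simplified illustration of the idea (real IPFS CIDs wrap the hash in multihash/multibase encoding, so this is just the core principle, not the actual CID format):

                import hashlib

                def content_id(data: bytes) -> str:
                    # Content addressing in a nutshell: the identifier is derived from the bytes themselves.
                    return hashlib.sha256(data).hexdigest()

                original = b"some archived page"
                cid = content_id(original)

                # A copy fetched from any node can be checked against the identifier you asked for,
                # so a poisoned copy is rejected no matter who served it.
                assert content_id(b"some archived page") == cid
                assert content_id(b"some archived page, but edited") != cid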

            • csm10495@sh.itjust.works · +1 · 17 days ago

              Even in IPFS, I don’t understand discoverability. It sort of sounds like it still needs a centralized list mapping metadata to content IDs, etc.

  • antonim@lemmy.dbzer0.com · +4 · edited · 18 days ago

    If we’re going to stick to ancient Greek references, one of these is closer to the modern-day Augean stables.