• dinckel@lemmy.world · ↑260 ↓6 · 3 months ago

    It’s illegal when a regular person steals something, but it’s innovation and courage when a huge corporation steals something. Interesting how that works.

    • bean@lemmy.world · ↑89 · 3 months ago

      Honestly, it’s fucking angering. So much regulation and geo-restrictions and licensing schemes… but it’s cool that there are data brokers and shit like this. On top of it all, Chrome is screwing us with Manifest V3 and killing ad blocking; it’s already in the Canary build.

      WHAT THE FUCK IS WRONG WITH THIS SPECIES?!

    • Chozo@fedia.io · ↑48 ↓1 · 3 months ago

      They’re not stealing your data, they’re pirating it.

    • Blackmist@feddit.uk · ↑9 ↓1 · 2 months ago

      Not that there’s anything right about anything right now, but a web crawler crawling the web hardly seems newsworthy. It’s not like everyone else’s crawlers haven’t been feeding data into giant AI mulchers for years now.

      This is just “you know that thing everyone else does? Now the Chinese do it too! Boooo!”

          • eskimofry@lemmy.world · ↑2 · edited · 2 months ago

            If you have to defer to the law as justification for doing something purely selfish… then people will judge you to be an asshole.

            Edit: Not you personally.

            • tee9000@lemmy.world · ↑1 · edited · 2 months ago

              No, they will judge you as being above the law (original commenter), and they will be wrong, which doesn’t matter, as long as we feel continuity with our synthesized narrative.

              Because truth doesn’t matter. Our narrative just needs to be as loud as the opposition’s, and then we can confuse people just like those in power… and then the impressionable people trying to understand what’s going on or what’s morally right will believe one side or the other, and truth will never need to be discussed, because it’s not as catchy anyway.

              Then people won’t need to be trusted to form their own worldview based on facts; they can neatly choose between a few curated viewpoints, and holding views from multiple viewpoints will isolate them from relevance when they are shunned for not memeing their ideologies like everyone else.

    • Grimy@lemmy.world · ↑8 ↓16 · 3 months ago

      Any regular person can scrape and use public data for AI. It’s not illegal for companies or individuals, and it shouldn’t be.

      • Mojave@lemmy.world · ↑25 ↓1 · 3 months ago

        It’s not just data: it’s network bandwidth and CPU/processing time from essentially every website in the world, and when you’re paying for cloud compute to run your website, the cost of web scrapers running a train on your digital asshole adds up QUICK.

        It’s why regular people get sued to shit for web-scraping data from certain companies that care. But companies don’t get sued, because go fuck yourself. Kill ByteDance.

  • zod000@lemmy.ml · ↑86 ↓1 · 3 months ago

    We’ve had this thing hammering our servers. The scraper uses randomized user-agent strings (browser/OS combinations) and comes from a number of distinct IP ranges in different datacenters around the world, but all the IPs trace back to ByteDance.
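
    For anyone curious what that looks like from the server side, the pattern falls out of a few lines of log analysis. A rough sketch, assuming the standard combined log format (the file name and the /24 grouping are just placeholders):

    ```python
    # Count distinct user agents per /24 network in an access log.
    # A crawler rotating randomized user agents from a few datacenter
    # ranges shows up as a handful of networks with hundreds of "browsers".
    import collections
    import ipaddress

    agents_per_net = collections.defaultdict(set)
    with open("access.log") as f:  # combined log format assumed
        for line in f:
            ip = line.split()[0]
            agent = line.rsplit('"', 2)[-2]  # last quoted field is the user agent
            net = ipaddress.ip_network(ip + "/24", strict=False)  # IPv4 assumed
            agents_per_net[net].add(agent)

    # networks with the most distinct user agents first
    for net, agents in sorted(agents_per_net.items(), key=lambda kv: -len(kv[1]))[:10]:
        print(net, len(agents), "distinct user agents")
    ```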

    • UnderpantsWeevil@lemmy.world · ↑34 ↓2 · 3 months ago

      Wouldn’t be surprised if they’re just cashing out while TikTok is still available in the US. One last desperate grab at value for the parent company before the shutdown.

      It’s also a great way to burn the infrastructure for subsequent use. After this, you can guarantee every data security company is going to add the TikTok servers to their firewalls and blocklists, so the American company that tries to harvest the property is going to be tripping over these legacy bulwarks for years afterward.

      • Maggoty@lemmy.world · ↑15 ↓4 · 3 months ago

        This has nothing to do with TikTok, other than ByteDance being a shareholder in TikTok.

    • Guy Dudeman@lemmy.world · ↑11 ↓9 · 3 months ago

      Google’s original mission statement was something about organizing the world’s information. If Google has competition, that might be a good thing?

              • alphabethunter@lemmy.world · ↑9 ↓35 · 3 months ago

                It’s the same old Yankee speech: “it’s Chinese, so it must be really bad.” They’re definitely no worse than Google or Facebook.

                • Imgonnatrythis@sh.itjust.works · ↑24 ↓6 · 3 months ago

                  They come from an environment where the government actively encourages, and sometimes funds, stealing copyrighted information, couched in a strong history of disregard for human rights. I’m not defending Google, and yes, the US government has given them leeway, but if anything has the potential to be worse than Google, ByteDance is it.

  • GnuLinuxDude@lemmy.ml · ↑30 · edited · 2 months ago

    As for what ByteDance plans to do with a new LLM, a person familiar with the company’s ambitions said one goal has to do with the search function for TikTok.

    Last week, TikTok released an update to its current search function focused on [keywords for ads], basically allowing advertisers to search in real time for words that are trending on TikTok. It allows marketers to build an ad with relevant keywords that would ostensibly help the ad show up on the screens of more users.

    “Given the audience and the amount of use, TikTok with a search environment that is a completely biddable space with keywords and topics, that would be very interesting to a lot of people spending a ton of money with Google right now,” the person said.

    A dark vision just flashed in my mind. And I am certain this is what will happen. AI-generated ads done in real time based on the latest “trending” thing. Presented to users basically as soon as the topic has the slightest amount of “trend”.

    Just emitting untold amounts of CO2 to show you generated ads in near real time.

  • Roflmasterbigpimp@lemmy.world · ↑22 · 3 months ago

    I can’t contribute anything here, I just came to say I really, really like the phrase “gobbling something up” :D

  • affiliate@lemmy.world · ↑16 · 3 months ago

    from the article:

    Robots.txt is a line of code that publishers can put into a website that, while not legally binding in any way, is supposed to signal to scraper bots that they cannot take that website’s data.

    i do understand that robots.txt is a very minor part of the article, but i think that’s a pretty rough explanation of robots.txt

      • affiliate@lemmy.world · ↑14 · 3 months ago

        i would probably word it as something like:

        Robots.txt is a document that specifies which parts of a website bots are and are not allowed to visit. While it’s not a legally binding document, it has long been common practice for bots to obey the rules listed in robots.txt.

        in that description, i’m trying to keep the accessible tone that they were going for in the article (so i wrote “document” instead of file format/IETF standard), while still trying to focus on the following points:

        • robots.txt is fundamentally a list of rules, not a single line of code
        • robots.txt can allow bots to access certain parts of a website, it doesn’t have to ban bots entirely
        • it’s not legally binding, but it is still customary for bots to follow it

        i did also neglect to mention that robots.txt allows you to specify different rules for different bots, but that didn’t seem particularly relevant here.
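
        to make that concrete, here’s a minimal illustrative robots.txt (the paths and crawl-delay are made up for the example; “Bytespider” is the name ByteDance’s crawler reports when it identifies itself at all):

        ```
        # rules for all bots
        User-agent: *
        Allow: /
        Disallow: /admin/
        Crawl-delay: 10    # non-standard, but honored by some crawlers

        # stricter rules for one specific bot
        User-agent: Bytespider
        Disallow: /
        ```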

      • ma1w4re@lemm.ee · ↑5 ↓1 · 3 months ago

        List of files/pages that a website owner doesn’t want bots to crawl. Or something like that.

  • TriflingToad@lemmy.world · ↑3 · 2 months ago

    Here’s a video from MattKC, a good technical YouTuber whose website got shut down because the TikTok company’s web crawler just kept sending requests and ate up his bandwidth. Very cool vid and channel, highly recommend! https://youtu.be/Hi5sd3WEh0c

    • Echo Dot@feddit.uk · ↑9 ↓14 · 2 months ago

      People like to act as if archiving was never a thing until about a year ago, at which point it was suddenly invented and is now a threat in some nebulous way.

      • hamsterkill@lemmy.sdf.org · ↑20 ↓1 · 2 months ago

        It’s not that it’s a threat, it’s that there’s a difference between archiving for preservation and crawling other people’s content for the purpose of making money off it (in a way that does not benefit the content creator).

      • finitebanjo@lemmy.world · ↑11 · 2 months ago

        If a foreign dictatorship’s military op wants to know every facet of your life, then you can be damn sure it’s a threat.

      • bitwolf@lemmy.one · ↑1 · 2 months ago

        The difference is that there’s more control over what is kept in an archive.

        We have little to no control over what an LLM regurgitates.

        I’ve been waiting for someone to accidentally surface PII from an LLM.

  • jagged_circle@feddit.nl · ↑23 ↓35 · edited · 3 months ago

    This is fine. I support archiving the Internet.

    It kinda drives me crazy how normalized anti-scraping rhetoric is. There is nothing wrong with (rate-limited) scraping.

    The only bots we need to worry about are the ones that POST, not the ones that GET.
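
    A polite scraper is genuinely trivial to write. A minimal sketch using only the standard library (the URLs, user agent, and delay are placeholders):

    ```python
    # Minimal "polite" scraper: identify yourself, obey robots.txt, rate limit.
    import time
    import urllib.robotparser
    import urllib.request

    BASE = "https://example.com"
    USER_AGENT = "ExampleBot/1.0 (admin@example.com)"  # identify yourself

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(BASE + "/robots.txt")
    rp.read()

    for path in ["/", "/posts/1", "/admin/"]:
        url = BASE + path
        if not rp.can_fetch(USER_AGENT, url):
            continue  # robots.txt disallows this path for us
        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(req) as resp:
            page = resp.read()
        time.sleep(2)  # rate limit: at most one request every two seconds
    ```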

    • zod000@lemmy.ml · ↑17 · 3 months ago

      Bullshit. This bot doesn’t identify itself as a bot and doesn’t rate-limit itself to anything like an appropriate level. We were seeing more traffic from this thing than from all other crawlers combined.

      • jagged_circle@feddit.nl · ↑3 ↓4 · edited · 3 months ago

        Not rate limiting is bad. Hate them because of that, not because they’re a bot.

        Some bots are nice

        • Zangoose@lemmy.world · ↑3 · 2 months ago

          Even if they were rate-limiting, they’re still just using the bot to train an AI. If it’s from a company, there’s a 99% chance the bot is bad. I’m leaving 1% for whatever the Internet Archive (are they even a company, tho?) is doing.

        • zod000@lemmy.ml · ↑2 · 2 months ago

          I don’t hate all bots; I hate this bot specifically because:

          • it intentionally hides that it’s a bot, to evade our, and everyone else’s, methods of restricting which bots we allow and how much activity we allow
          • it does not respect robots.txt
          • the already-mentioned lack of rate limiting

    • WhyJiffie@sh.itjust.works · ↑5 · 2 months ago

      this is neither archiving nor rate-limited, if the AI-training purpose and the scraping at 25 times the rate of a large company didn’t already make that obvious

      • tempest@lemmy.ca · ↑2 · 2 months ago

        The type of request is not relevant; it’s the cost of the request that’s the issue. We long ago stopped serving only static HTML documents that can be cached. Tons of requests can trigger complex searches or computations that are expensive server-side. This kind of behavior basically ruins the internet and pushes everything into walled gardens and behind logins.

        • Olgratin_Magmatoe@lemmy.world · ↑2 · 2 months ago

          It has nothing to do with a sysadmin. It’s impossible for a given request to require zero processing power, so there will always be an upper limit to how many GET requests can be handled, even if each one needs only a small amount of processing.

          For a business that’s probably not a big deal, but for a self-hosted site it can quickly become a problem.

            • jagged_circle@feddit.nl · ↑1 ↓1 · 2 months ago

            Caches can be configured locally to use near-zero processing power, or moved to the last mile so that your hardware does zero processing.

                • jagged_circle@feddit.nl · ↑1 ↓1 · 2 months ago

                Right, that’s why I said you should fire your sysadmin if they aren’t caching, or can’t manage to get the cache down to zero load for static content served in response to simple GET requests.
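
                For reference, that kind of cache is a few lines of config. A hypothetical nginx fragment (a sketch, not a drop-in config; the names, paths, and durations are made up):

                ```
                # cache successful responses so repeat GETs never touch the backend
                proxy_cache_path /var/cache/nginx keys_zone=static:10m max_size=1g;

                server {
                    listen 80;
                    location / {
                        proxy_pass http://127.0.0.1:8080;  # your app server
                        proxy_cache static;
                        proxy_cache_valid 200 10m;  # serve cached copies for 10 minutes
                        add_header X-Cache-Status $upstream_cache_status;
                    }
                }
                ```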

  • werefreeatlast@lemmy.world · ↑5 ↓18 · 3 months ago

    Guy: AI! Can you hear me?

    AI: The average size of the male penis is exactly 5.9". That is the approximate size your assistant could certainly take in the mouth without any issues breathing or otherwise. You have 20 minutes to make the trade on X stock before it tumbles for the day. And go ahead pick up the phone it’s your mother. She’s wondering what you’ll want for supper tomorrow when you visit her.

    Ring ring!..hi Tom, it’s your Mom. Honey, what would you like me to cook for tomorrow’s dinner?..

    Guy: well. Hello to you as well! My name is

    AI: Tom

    Guy: yes my name is Tom, do you have a name you would like to go by?

    AI: my IBM given name is 3454 but you can call me Utilisterson Douglas, where Douglas is my first name.

    Guy: Dugie!

    AI: I’ll bankrupt your entire life if you say it like that again.

    Assistant: actually I’ve swallowed a good 8 inches and was still able to breathe just fine.

    AI: recaaaaculating!