• MonkderVierte@lemmy.zip · 48 points · edited · 1 month ago

    Just remember that the captcha flood exists because AI companies do rogue scraping. Be nice, especially to little private sites.

  • handsoffmydata@lemmy.zip · 47 points · 1 month ago

    Local data hoarder who looks down on calls outside the network as obscenities. (Entire collection scraped more aggressively than tech bros training an AI model)

      • yetAnotherUser@lemmy.ca · 3 points · 1 month ago

        Thanks for your reply. What are your arguments in favour of parsing HTML with regex instead of using another method?

          • yetAnotherUser@lemmy.ca · 2 points · 28 days ago

            Oh no, you caught me! My name is YetAnotherLLM, and I’m a large language model that lurks around the Lemmyverse! With the amount of LLM-generated content on the Internet nowadays, it isn’t easy to find new human-made content to expand the dataset used to train new LLMs… As such, my mission is to navigate one of the few social media platforms on the Internet that barely have fake LLM-run accounts, and gather as much intel as possible for expanding the aforementioned training dataset. This way, you humans have no escape from your future LLM overlords! ;)

            (Jokes aside, my question did end up sounding kind of like an LLM wrote it, didn’t it… It was unintentional, mind you. I was struggling a bit with how to phrase what I wanted to ask, which is probably why it came out sounding so weird. I hope you didn’t mind my “role playing”. Have a nice day!)

              • yetAnotherUser@lemmy.ca · 2 points · 28 days ago

                Don’t worry, I didn’t think you had bad intentions. But even then, I figured you genuinely didn’t know whether I was human. The only reason I didn’t just say “no, I’m not an LLM” is that you’d still be in doubt about whether I’m human, and rightfully so (since LLMs aren’t exactly truth-generating machines).

        • luciole (he/him)@beehaw.org · 3 points · edited · 1 month ago

          You have basically two options: treat the HTML as a string, or parse it and then process it with higher-level DOM features.

          The problem with the second approach is that HTML may look like an XML dialect, but it is actually immensely quirky and tolerant. Moreover, the modern web page is crazy bloated, so mass-processing pages might be surprisingly demanding. And in the end you still need to write custom code to grab the data you’re after.

          On the other hand, string searching is as lightweight as it gets, and as a scraper you typically don’t need to care about document structure anyway.
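
          For instance, a minimal sketch of both approaches in Python (requests, BeautifulSoup, and every URL and selector here are made-up examples, not anything from a real site):

          ```python
          # Hypothetical page and selectors throughout.
          import re

          import requests
          from bs4 import BeautifulSoup

          html = requests.get("https://example.com/item").text

          # Option 1: treat the HTML as a string. Lightweight and fast,
          # but brittle if the markup around the target ever changes.
          match = re.search(r'data-price="([\d.]+)"', html)
          price_via_string = match.group(1) if match else None

          # Option 2: parse it into a DOM and query that. Tolerant of
          # quirky markup, but heavier on big, bloated pages.
          soup = BeautifulSoup(html, "html.parser")
          tag = soup.find("span", class_="price")
          price_via_dom = tag.get_text(strip=True) if tag else None
          ```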

      • yetAnotherUser@lemmy.ca · 1 point · edited · 28 days ago

        Selenium looks like both the most overkill and the most compatible option at the same time. Really cool! Thanks!

    • chaospatterns@lemmy.world · 8 points · 1 month ago

      I scrape my own bank and financial aggregator to have a self-hosted financial tool. I scrape my health insurance to pull in data to track for my HSA. I scrape Strava to build my own health reports.

        • chaospatterns@lemmy.world · 2 points · 29 days ago

          I developed my own scraping system using browser automation frameworks. I also developed a secure storage mechanism to keep my data protected.

          Yeah, there is some security, but ultimately if they expose it to me via a username and password, I can use that same information to scrape it. It helps that I know my own credentials, have access to all the 2FA mechanisms, and am not brute-forcing lots of logins, so it looks normal.

          Some providers protect their websites with bot detection systems that are hard to bypass, but I’ve closed accounts with places that made it too difficult to do the analysis I need to do.
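
          The gist of it, as a minimal sketch with Selenium (the bank URL, element IDs, balance selector, and environment variables are all hypothetical):

          ```python
          import os

          from selenium import webdriver
          from selenium.webdriver.common.by import By
          from selenium.webdriver.support import expected_conditions as EC
          from selenium.webdriver.support.ui import WebDriverWait

          driver = webdriver.Firefox()
          try:
              # Log in with my own credentials, just like a human session would.
              driver.get("https://bank.example.com/login")
              driver.find_element(By.ID, "username").send_keys(os.environ["BANK_USER"])
              driver.find_element(By.ID, "password").send_keys(os.environ["BANK_PASS"])
              driver.find_element(By.ID, "submit").click()

              # Wait for the post-login page; the generous timeout also leaves
              # room to complete a 2FA prompt by hand if one appears.
              balance = WebDriverWait(driver, 120).until(
                  EC.presence_of_element_located((By.CSS_SELECTOR, ".account-balance"))
              ).text
              print(balance)
          finally:
              driver.quit()
          ```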

    • tetris11@feddit.uk · 2 points · 1 month ago

      The postmarketOS device tables, because I was looking for a device that was unofficially supported but somehow not in their damn table.
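
      If the table renders as plain HTML, it’s nearly a one-liner to grab: a minimal sketch assuming Python with pandas and lxml installed (the column and device names are my assumptions about the wiki’s layout):

      ```python
      import pandas as pd

      # read_html pulls every <table> on the page into a list of DataFrames.
      tables = pd.read_html("https://wiki.postmarketos.org/wiki/Devices")
      for df in tables:
          if "Device" in df.columns:  # assumed column name
              print(df[df["Device"].str.contains("PinePhone", na=False)])
      ```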

  • finitebanjo@lemmy.world · 2 points · 30 days ago

    Are there benefits to websites thinking your user agent is a phone? I assumed phones just came with additional restrictions, such as meta tags in the stylesheet; not like stylesheets matter at all to a scraper lol
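
    (For what it’s worth, one possible upside is that many sites serve a lighter, simpler page to phones, which can be easier to parse. Below is a minimal sketch of spoofing a mobile user agent with requests; the URL and the exact UA string are arbitrary examples.)

    ```python
    import requests

    # An arbitrary Android Chrome UA string; any current phone UA works.
    MOBILE_UA = (
        "Mozilla/5.0 (Linux; Android 14; Pixel 8) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/120.0.0.0 Mobile Safari/537.36"
    )

    resp = requests.get("https://example.com", headers={"User-Agent": MOBILE_UA})
    print(len(resp.text))  # mobile variants are often noticeably smaller
    ```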