• palordrolap@kbin.social

    Put something in robots.txt that isn’t supposed to be hit and is hard for non-robots to hit. Log and ban all IPs that hit it.

    Imperfect, but can’t think of a better solution.
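
    A minimal sketch of the idea, assuming a Flask front end and an in-memory ban list (the /trap/ path and everything else here is invented for illustration, not a definitive implementation):

        # Hypothetical honeypot: robots.txt disallows /trap/, so any IP
        # that requests it anyway gets logged and banned. An in-memory
        # set keeps it short; a real setup would persist bans or hand
        # them to a firewall.
        from flask import Flask, abort, request

        app = Flask(__name__)
        banned = set()

        @app.before_request
        def reject_banned():
            if request.remote_addr in banned:
                abort(403)

        @app.route("/robots.txt")
        def robots():
            # Well-behaved crawlers are told to stay out of the trap.
            return ("User-agent: *\nDisallow: /trap/\n", 200,
                    {"Content-Type": "text/plain"})

        @app.route("/trap/<path:rest>")
        def trap(rest):
            # Only a client ignoring robots.txt should ever land here.
            banned.add(request.remote_addr)
            abort(403)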

    • Lvxferre@mander.xyz

      Good old honeytrap. I’m not sure, but I think that it’s doable.

      Have a honeytrap page somewhere in your website. Make sure that legit users won’t access it. Disallow crawling the honeytrap page through robots.txt.

      Then if some crawler still accesses it, you could record+ban it as you said… or you could be even nastier and let it keep going. Fill the honeytrap page with poison: nonsensical text that looks like something a human would write.
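
      As a rough sketch of the poison, a first-order Markov chain over some seed text spits out locally fluent, globally meaningless prose (function name and parameters invented for illustration):

          # Crude gibberish generator: adjacent word pairs look
          # plausible, the whole is nonsense - cheap to make in bulk.
          import random

          def poison(seed_text: str, length: int = 300) -> str:
              words = seed_text.split()
              nxt = {}
              for a, b in zip(words, words[1:]):
                  nxt.setdefault(a, []).append(b)
              word = random.choice(words)
              out = [word]
              for _ in range(length - 1):
                  word = random.choice(nxt.get(word, words))
                  out.append(word)
              return " ".join(out)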

      • CosmicTurtle@lemmy.world

        I think I used to do something similar with email spam traps. Not sure if it’s still around, but basically you could help build NaCL lists by posting an email address on your website that was visible in the source code but not to normal users, e.g. in a div positioned way off the left side of the screen.

        Anyway, spammers that do regular expression searches for email addresses would email it and get their IPs added to naughty lists.
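
        The hiding was usually plain CSS, something along these lines (address and offset invented for illustration):

            <!-- visible to regex-driven scrapers, never to humans -->
            <div style="position: absolute; left: -9999px;">
              spamtrap@example.com
            </div>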

        I’d love to see something similar with robots.

        • Lvxferre@mander.xyz

          Yup, it’s the same approach as email spam traps, minus the naughty list… but holy fuck, a shareable bot IP list would be an amazing addition; it would increase the damage to those web-crawling businesses.

      • KairuByte@lemmy.dbzer0.com

        I’m the idiot human that digs through robots.txt and the site map to see things that aren’t normally accessible by an end user.

        • Lvxferre@mander.xyz

          For banning: I’m not sure, but I don’t think so. Prefetching behaviour is dictated by the page that links to another; to avoid any issue, all the site owner needs to do is not mark links to the honeytrap for prefetching.

          For poisoning: I’m fairly certain that it doesn’t. At most you’d prefetch a page full of rubbish.

    • PM_Your_Nudes_Please@lemmy.world

      Yeah, this is a pretty classic honeypot method. Basically, make something reachable but invisible to the normal user. Then you know anyone who accesses it is not a normal user.

      I’ve even seen this done with Steam achievements; there was a hidden game achievement which was only obtainable via hacking. So anyone who used hacks immediately outed themselves with a rare achievement that was visible on their profile.

      • CileTheSane@lemmy.ca

        There are tools that just flag you as having gotten an achievement on Steam; you don’t even have to have the game open to do it. I’d hardly call that ‘hacking’.

    • Ultraviolet@lemmy.world

      Better yet, point the crawler to a massive text file of almost-but-not-quite grammatically correct garbage to poison the model: something it will recognize as language and internalize, but that will severely degrade the quality of its output.

    • Aatube@kbin.social

      robots.txt is purely textual; you can’t run JavaScript or log anything from it. Plus, anyone who doesn’t intend to follow robots.txt wouldn’t query it in the first place.

      • BrianTheeBiscuiteer@lemmy.world

        If it doesn’t get queried, that’s the fault of the web scraper. You don’t need JS built into the robots.txt file either. Just add some lines like:

        User-agent: *
        Disallow: /here-there-be-dragons.html
        

        Any client that hits that page (and maybe doesn’t pass a captcha check) gets banned. Or even better, they get a long stream of nonsense.

      • ShitpostCentral@lemmy.world

        Your second point is a good one, but you absolutely can log the IP which requested robots.txt. That’s just a standard part of any HTTP server, no JavaScript needed.

        • GenderNeutralBro@lemmy.sdf.org

          You’d probably have to go out of your way to avoid logging this. I’ve always seen such logs enabled by default when setting up web servers.

      • ricecake@sh.itjust.works

        People not intending to follow it is the real reason not to bother, but it’s trivial to track who downloaded the file and then hit something they were asked not to.

        Like, 10 minutes’ work to do right. You don’t need JS to do it at all.
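
        A sketch of that tracking, assuming a combined-format access log and an invented trap path:

            # Flag IPs that fetched robots.txt and then requested a
            # disallowed path anyway.
            import re

            LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|HEAD) (\S+)')

            def offenders(log_path, trap_prefix="/trap/"):
                saw_robots, flagged = set(), set()
                with open(log_path) as fh:
                    for entry in fh:
                        m = LINE.match(entry)
                        if not m:
                            continue
                        ip, path = m.groups()
                        if path == "/robots.txt":
                            saw_robots.add(ip)
                        elif path.startswith(trap_prefix) and ip in saw_robots:
                            flagged.add(ip)
                return flagged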

  • Cosmic Cleric@lemmy.world

    As unscrupulous AI companies crawl for more and more data, the basic social contract of the web is falling apart.

    Honestly, it seems like the social contract is being ignored in all aspects of society these days; that’s why things seem so much worse now.

    • TheObviousSolution@lemm.ee

      Governments could do something about it, if they weren’t overwhelmed by bullshit from bullshit generators instead and led by people driven by their personal wealth.

    • PatMustard@feddit.uk

      these days

      When, at any point in history, have people acknowledged that there was no social change or disruption and everyone was happy?

  • Optional@lemmy.world

    Well the trump era has shown that ignoring social contracts and straight up crime are only met with profit and slavish devotion from a huge community of dipshits. So. Y’know.

    • Ithi@lemmy.ca

      Only if you’re already rich or in the right social circles though. Everyone else gets fined/jail time of course.

  • MonsiuerPatEBrown@reddthat.com

    The open and free web is long dead.

    Just thinking about robots.txt as a working solution against people who literally broker in people’s entire digital lives for hundreds of billions of dollars is so … quaint.

      • jkrtn@lemmy.ml

        Do-Not-Track, AKA, “I’ve made my browser fingerprint more unique for you, please sell my data”

  • rtxn@lemmy.world

    I would be shocked if any big corpo actually gave a shit about it, AI or no AI.

    if exists("/robots.txt"):
        no it fucking doesn't
    
    • bionicjoey@lemmy.ca

      Robots.txt is in theory meant to be there so that web crawlers don’t waste their time traversing a website in an inefficient way. It’s there to help, not hinder them. There is a social contract being broken here and in the long term it will have a negative impact on the web.

    • DingoBilly@lemmy.world

      Yeah, I always found it surprising that everyone just agreed to follow a text file on a website telling them how to act. It’s one of the worst-thought-out yet most significant conventions from the web’s beginnings that’s still with us, pretty much.

  • circuitfarmer@lemmy.world

    Most every other social contract has been violated already. If they don’t ignore robots.txt, what is left to violate?? Hmm??

    • BlanketsWithSmallpox@lemmy.world

      It’s almost as if leaving things to social contracts vs regulating them is bad for the layperson… 🤔

      Nah fuck it. The market will regulate itself! Tax is theft and I don’t want that raise or I’ll get in a higher tax bracket and make less!

      • Jimmyeatsausage@lemmy.world

        This can actually be an issue for poor people, not because of tax brackets but because of income-based assistance cutoffs. If a $1/hr raise (roughly $160/month at full time) throws you above those cutoffs, that extra $160 could cost you $500 in food assistance, $5-$10/day for school lunch, or get you kicked out of government-subsidized housing.

        Yet another form of persecution that the poor actually suffer and the rich pretend to.

      • SlopppyEngineer@lemmy.world

        And then the companies hit the “trust thermocline”, customers leave them in droves and companies wonder how this could’ve happened.

  • KillingTimeItself@lemmy.dbzer0.com

    hmm, i thought websites just blocked crawler traffic directly? I know one site in particular has rules about it, and will even go so far as to ban you permanently if you continually ignore them.

      • Echo Dot@feddit.uk

        Well, you can if you know the IPs they come in from, but that’s of course the trick.

      • KillingTimeItself@lemmy.dbzer0.com

        last i checked, humans don’t access every page on a website nearly simultaneously…

        And if you imitate a human then honestly who cares.
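
        That intuition turns into a crude detector easily enough; a sketch with made-up thresholds:

            # Flag any IP requesting pages faster than a human
            # plausibly could. Numbers are guesses, not gospel.
            import time
            from collections import defaultdict, deque

            WINDOW = 10.0  # seconds
            LIMIT = 50     # nobody opens 50 pages in 10 seconds by hand

            hits = defaultdict(deque)

            def looks_like_a_bot(ip: str) -> bool:
                now = time.monotonic()
                q = hits[ip]
                q.append(now)
                while q and now - q[0] > WINDOW:
                    q.popleft()
                return len(q) > LIMIT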

      • KillingTimeItself@lemmy.dbzer0.com

        i mean yeah, but at a certain point you just have to accept that it’s going to be crawled. The obviously negligent ones are easy to block.

    • kingthrillgore@lemmy.ml

      There are more crawlers than I have fucks to give; you’ll be in a pissing match forever. robots.txt was supposed to be the norm for telling crawlers what they can and cannot access. It’s not on you to block them. It’s on them, and it’s sadly a legislative issue at this point.

      I wish it weren’t, but legislative fixes are always the most robust and the most complied with.

      • KillingTimeItself@lemmy.dbzer0.com

        yes, but also there’s a point where it’s blatantly obvious, and i can’t imagine it’s hard to get rid of the obviously offending ones. Respectful crawlers are going to be imitating humans, so who cares; disrespectful crawlers will DDoS your site, and detecting that can’t be that hard to implement.

        Though if we’re talking “hey, please don’t scrape this particular data”, yeah, nobody was ever respecting that lol.

      • wise_pancake@lemmy.ca

        robots.txt is a file available at a standard location on web servers (example.com/robots.txt) which sets guidelines for how scrapers should behave.

        That can range from saying “don’t bother indexing the login page” to “Googlebot go away”.
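
        For instance (rules invented for illustration):

            User-agent: *
            Disallow: /login

            User-agent: Googlebot
            Disallow: /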

        It’s also in the first paragraph of the article.

      • mrnarwall@lemmy.world

        Robots.txt is a file that is accessible via a plain HTTP request. It’s a server-side configuration file that sets rules for what automated web crawlers are allowed to do; it can specify both who is and who isn’t allowed. Google’s crawler is usually the most widely allowed, just because it’s how websites get found for search results. But it’s basically the honor system. You could write a scraper today that visits a website, gets told it doesn’t have permission to view a page, ignores that, and still gets the information.
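
        Python’s standard library even ships the honor-system check; a polite crawler does roughly this before fetching a page (example.com is a placeholder), and a rude one simply skips it:

            # How a well-behaved crawler consults robots.txt. Nothing
            # forces a scraper to actually run this check.
            from urllib.robotparser import RobotFileParser

            rp = RobotFileParser("https://example.com/robots.txt")
            rp.read()

            if rp.can_fetch("MyBot/1.0", "https://example.com/secret-page"):
                print("allowed - crawl away")
            else:
                print("disallowed - a polite bot stops here")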

        • Echo Dot@feddit.uk

          I do not think it is even part of the HTTP protocol; I think it’s just a pseudo add-on. It’s barely even a protocol, it’s basically just a page that bots can look at, with no real pre-agreed syntax.

          If you want to make a bot that doesn’t respect robots.txt you don’t even need to do anything complicated, you just need to not include the requirement to look at the page. It’s not enforceable at all.

  • kingthrillgore@lemmy.ml

    I explicitly have my robots.txt set to block AI crawlers, but I don’t know if anyone else will observe the protocol. They should offer tools I can submit a sitemap.xml against to find out whether I’ve been parsed. Until they bother to address this, I can only assume their intent is hostile, and unless someone serious builds a honeypot and exposes the tooling for us to deploy at large, my options are limited.

    • phx@lemmy.ca

      The funny (in a “wtf”, not “haha”, sense) thing is, individuals such as security researchers have been charged under digital trespassing laws for things like accessing publicly available systems and changing a number in the URL to reach data that normally wouldn’t be exposed, even after doing responsible disclosure.

      Meanwhile, companies completely ignore the standard that says “you are not allowed to scrape this data”, and then use OUR content/data to build up THEIR datasets, including for AI.

      That’s not a “violation of a social contract” in my book; that’s violating the terms of service for the site, and essentially infringement on copyright etc.

      No consequences for them though. Shit is fucked.

  • lily33@lemm.ee

    What social contract? When sites regularly have a robots.txt that says “only Google may crawl”, they’re effectively helping enforce a monopoly. That’s not a social contract I’d ever agree to.

  • 𝐘Ⓞz҉@lemmy.world

    No laws govern this, so they can do anything they want. Blame boomer politicians, not the companies.

  • Ascend910@lemmy.ml

    This is a very interesting read. It’s very rare that people on the internet agree to follow one thing without being forced to.

    • Echo Dot@feddit.uk

      Loads of crawlers don’t follow it; I’m not quite sure why AI companies not following it is anything special. Really, it’s just there to stop Google indexing random internal pages that mess with your SEO.

      It barely even works for all search providers.

      • General_Effort@lemmy.world

        The Internet Archive does not make a useful villain and it doesn’t have money, anyway. There’s no reason to fight that battle and it’s harder to win.

  • AutoTL;DR@lemmings.world

    This is the best summary I could come up with:


    If you hosted your website on your computer, as many people did, or on hastily constructed server software run through your home internet connection, all it took was a few robots overzealously downloading your pages for things to break and the phone bill to spike.

    AI companies like OpenAI are crawling the web in order to train large language models that could once again fundamentally change the way we access and share information.

    In the last year or so, the rise of AI products like ChatGPT, and the large language models underlying them, has made high-quality training data one of the internet’s most valuable commodities.

    You might build a totally innocent one to crawl around and make sure all your on-page links still lead to other live pages; you might send a much sketchier one around the web harvesting every email address or phone number you can find.

    The New York Times blocked GPTBot as well, months before launching a suit against OpenAI alleging that OpenAI’s models “were built by copying and using millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more.” A study by Ben Welsh, the news applications editor at Reuters, found that 606 of 1,156 surveyed publishers had blocked GPTBot in their robots.txt file.

    “We recognize that existing web publisher controls were developed before new AI and research use cases,” Google’s VP of trust Danielle Romain wrote last year.


    The original article contains 2,912 words, the summary contains 239 words. Saved 92%. I’m a bot and I’m open source!