I got into the self-hosting scene this year when I wanted to start up my own website on an old recycled ThinkPad. I spent a lot of time learning about ufw, reverse proxies, security header hardening, and fail2ban.
Despite all that, I still had a problem with bots knocking on my ports and spamming my logs. I tried some hackery getting fail2ban to read Caddy logs, but that didn’t work for me. I nearly gave up and went with Cloudflare like half the internet does, but my stubbornness about open-source self-hosting, plus the recent Cloudflare outages this year, pushed me to try alternatives.

Coinciding with that, I kept running into this thing in the places I frequent, like Codeberg. This is Anubis, a proxy-style firewall that forces the browser client to pass a proof-of-work check, plus some other clever tricks, to stop bots from knocking. I got interested and started thinking about beefing up my security.
I’m here to tell you to try it if you have a public-facing site and want to break away from Cloudflare. It was VERY easy to install and configure with a Caddyfile on a Debian system with systemctl. Within an hour it had filtered multiple bots, and so far the knocking seems to have slowed down.
My botspam woes have seemingly been seriously mitigated, if not completely eradicated. I’m very happy with tonight’s little security upgrade project; it took no more than an hour to install and read through the documentation. The current chain is: Caddy reverse proxy -> Anubis -> services.
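In case it helps anyone, the Caddy side of that chain is tiny. This is just a minimal sketch: example.com and localhost:8923 are assumptions for illustration (match the port to whatever bind address you give Anubis, and Anubis itself is pointed at the real service through its own config):

```
# Minimal sketch: Caddy terminates TLS and proxies everything to
# Anubis; Anubis forwards vetted requests on to the real service.
# localhost:8923 is an assumed Anubis listen address -- adjust it.
example.com {
    reverse_proxy localhost:8923
}
```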
A good place to start for the install is here.
At the time of commenting, this post is 8h old. I read all the top comments, many of them critical of Anubis.
I run a small website and don’t have problems with bots. Of course I know what a DDoS is; maybe that’s the only use case where something like Anubis would help, instead of the strictly server-side solution I deploy?
I use CrowdSec (it seems to work with Caddy, btw). It took a little setting up, but it does the job.
Am I missing something here? Why wouldn’t that be enough? Why do I need to heckle my visitors?
“Despite all that, I still had a problem with bots knocking on my ports and spamming my logs.”
By the time Anubis gets to work, the knocking has already happened, so I don’t really understand this argument.
If the system is set up to reject a certain type of request, these are microsecond transactions that do no harm (DDoS excepted).
Something to understand is that the computer nerds who frequent places like this or hacker forums have a predisposition toward being loud-mouthed pessimists with egos. Any time there’s a working but imperfect solution to a complex problem that isn’t mathematically perfect, someone will loudly declare that the solution is pure hackery that makes no sense, write a two-paragraph argument on half-assed assumptions that misrepresent the problem out of ignorance or intention, link to a blogspam article written by some hypernerd they got their talking points from, then wonder loudly how their obviously better solution isn’t clear to the stupid ignorant cretins. That’s just how it is.
If CrowdSec works for you, that’s great, but it’s also a corporate product whose premium subscription tier starts at $900/month. Not exactly a pure self-hosted solution.
I’m not a hypernerd; I’m still figuring all this out among the myriad of possible solutions with different complexity and setup times. All the self-hosters in my internet circle started adopting Anubis, so I wanted to try it. Anubis was relatively plug-and-play, with prebuilt packages and great install documentation.
Allow me to expand on the problem I was having. It wasn’t just that I was getting a knock or two; I was getting 40 knocks every few seconds, scraping every page and probing for a bunch of paths that don’t exist but would be exploit points on unsecured production VPS systems.
On a computational level, the constant network activity of scrapers downloading web pages, zip files, and images pollutes traffic. Anubis stops this by trapping them on a landing page that transmits very little data from the server side. A bot stuck hammering that Anubis page 40 times on a single open connection before it gives up costs far less in overall network activity and transferred data, which is often metered and billed, and it keeps the logs quieter too.
And this isn’t all-or-nothing. You don’t have to pester all your visitors, only those with sketchy clients. Anubis uses a weighted scoring system that grades how legitimate a browser client looks. Most regular connections get through without triggering anything, while weird connections get various grades of checks depending on how sketchy they are. Some checks don’t require proof of work or even JavaScript.
On a psychological level, it gives me a bit of relief knowing the bots are getting properly sinkholed and that I’m punishing and wasting the compute of some asshole trying to find exploits in my system to expand their botnet. And a bit of pride knowing I did this myself, on my own hardware, without having to cop out to a corporate product.
It’s nice that people of different skill levels and philosophies have options to work with. One tool can often complement another, too. Anubis worked for what I wanted: filtering out bots that were wasting network bandwidth, and giving me peace of mind where before I had no protection. All while being unnoticeable for most people, because I can configure it not to heckle every client every 5 minutes like some sites do.
Thanks for this! I’m going to set this up for myself.
I don’t mind Anubis but the challenge page shouldn’t really load an image. It’s wasting extra bandwidth for nothing.
Just parse the challenge and move on.
Afaik, you can set it up to show no image at all, or a different one.
“Anubis has risen, Wendell”
“Are you Jane’s Addiction?”
I’ve stated this repeatedly before: Proof-of-Work bot management is only Proof-of-JavaScript bot management. It is nothing for a headless browser to bypass. Proof of JavaScript does work and will stop the vast majority of bot traffic; that’s how Anubis actually works. You don’t need to punish actual users by abusing their CPU. PoW is a far higher cost on your actual users than on the bots.
Last I checked, Anubis has a JavaScript-less strategy called “Meta Refresh”. It first serves you a blank HTML page with a <meta> tag instructing the browser to refresh and load the real page. I highly advise using the Meta Refresh strategy; it should be the default.
I’m glad someone is finally making an open-source and self-hostable bot management solution. And I don’t give a shit about the cat-girls, nor should you. But Techaro admitted they had little idea what they were doing when they started and went for the “nuclear option”. Fuck Proof of Work. It was a dead-on-arrival idea decades ago. Techaro should strip it from Anubis.
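For anyone who hasn’t seen it, the mechanism is bog-standard HTML. A sketch of the shape of it (illustrative only, not Anubis’s actual markup, and the token parameter name is made up):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Standard HTML: tells the browser to re-request the page
         after 1 second. A real implementation would embed a
         server-issued token so the return visit can be verified. -->
    <meta http-equiv="refresh" content="1; url=/?challenge=example-token">
  </head>
  <body>Checking your browser…</body>
</html>
```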
I haven’t caught up with what’s new with Anubis, but if they want to get stricter bot-management, they should check for actual graphics acceleration.
Funnily enough, PoW was a hot topic in academia around the late 90s / early 2000s, and it’s somewhat clear that the author of Anubis has not read much of the discussion from back then.
There was a paper called “Proof of work does not work” (or similar, can’t be bothered to look it up) that argued that PoW cannot work for spam protection, because you have to support low-powered consumer devices while blocking spammers with heavy hardware. And that is a very valid concern. Then there was a paper arguing that PoW can still work, as long as you scale the difficulty in such a way that a legit user (e.g. someone sending only one email) gets a low difficulty, while a spammer (sending thousands of emails) gets a high difficulty.
The idea of blocking known bad actors actually is used in email quite a lot, in the form of DNS block lists (DNSBLs) such as Spamhaus. (This has nothing to do with PoW, but such a distributed list could be used to determine PoW difficulty; the toy sketch below shows the scaling idea.)
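To make the scaled-difficulty idea concrete, here is a toy illustration (mine, not from either paper; all the numbers are made up) of why it works: a one-off legit sender barely notices, while a bulk sender’s total cost explodes.

```python
import math

# Toy volume-scaled proof-of-work difficulty: the difficulty (leading
# zero bits required) grows with a sender's recent request volume, and
# the expected number of hash attempts per message is 2^bits.

def difficulty_bits(recent_requests: int) -> int:
    # 1 request -> 10 bits; every 10x in volume adds ~3 more bits.
    return 10 + 3 * int(math.log10(max(recent_requests, 1)))

def expected_cost(recent_requests: int) -> int:
    # Total expected hashes to push this much volume through.
    return recent_requests * 2 ** difficulty_bits(recent_requests)

print(expected_cost(1))       # one-off legit sender: ~1 thousand hashes
print(expected_cost(10_000))  # bulk spammer: ~42 billion hashes
```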
Anubis, on the other hand, does nothing like that, and a bot developed to pass Anubis would do so trivially.
Sorry for the long text.
At least in the beginning, the scrapers just used curl with a different user agent. Forcing them to use a headless client is already a 100x increase in resources for them. That in itself is a small victory, and so far it is working beautifully.
Something that hasn’t been mentioned much in discussions about Anubis is that it grades how sketchy a client is and changes the kind of challenge based on a weighted priority system.
The default bot policy it ships with passes squeaky-clean regular clients straight through; slightly weighted clients/IPs get the meta refresh; and only at moderate suspicion does the JavaScript proof of work kick in. The bot policy and the weight triggers for these levels, the challenge action, and how long a client’s validity lasts are all configurable.
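For a flavor of what that configuration looks like, here is a schematic sketch of a policy file. I’m writing this from memory of the docs, so treat the exact keys and values as assumptions and start from the default botPolicies.yaml that ships with Anubis:

```yaml
# Schematic sketch of an Anubis bot policy, from memory of the docs.
# Exact keys are assumptions; the shipped default botPolicies.yaml
# is the authoritative starting point.
bots:
  - name: well-known-paths        # never challenge things like ACME
    path_regex: ^/\.well-known/.*$
    action: ALLOW
  - name: obvious-scraper
    user_agent_regex: (?i)python-requests|curl
    action: DENY                  # or CHALLENGE to sinkhole them instead
  - name: everyone-else
    user_agent_regex: .*
    action: CHALLENGE             # challenge type then depends on weight
```

The point is that the heckling is targeted: you decide which clients get waved through, which get the lightweight meta refresh, and which earn the full PoW page.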
It seems to me that the sites that heavy-hand the proof of work for every client, with a validity window of only 5 minutes, are the ones giving Anubis a bad rap. The default bot policy settings Anubis comes with don’t trigger PoW on the regular Firefox Android clients I’ve tried, including hardened IronFox, while other sites show the finger wag on every connection no matter what.
It’s understandable why some choose strict policies, but they give the impression that this is the only way it should be done, which is overkill. I’m glad there are config options to mitigate the impact on the normal user experience.
Anubis is an elegant solution to the AI bot scraper issue; I just wish the solution to everything weren’t spending compute everywhere. In a world where we need to rethink our energy consumption and generation, even on clients, this is a stupid use of computing power.
I appreciate a simple piece of software that does exactly what it’s supposed to do.
The front page of the web site is excellent. It describes what it does and covers its feature set in quick, simple terms.
I can’t tell you how many times I’ve gone to a website for some open-source software and had no idea what it was or what it was trying to do. They dive deep into the 300 different ways of installing it and tell you what the current version adds over the last one, but they just assume you already know the basics.
I have a script that watches Apache or Caddy logs for poison-link hits and a set of bot user agents, adds the offending IPs to an ipset blacklist, and blocks them with iptables. I should polish it up for others to try; the core of it is roughly the sketch below. My list of unique IPs is well over 10k in just a few days.
git repos seem to be real bait for these damn AI scrapers.
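Roughly this shape (the log path, the poison-link pattern, and the user agents are placeholders; Caddy’s default JSON logs would need jq rather than awk):

```sh
#!/bin/sh
# Rough sketch of the idea, not the polished script: tail the access
# log, match poison-link hits and known bot user agents, and drop the
# offending IPs via an ipset that iptables blocks.

# One-time setup: create the set and the DROP rule if missing.
ipset create blacklist hash:ip -exist
iptables -C INPUT -m set --match-set blacklist src -j DROP 2>/dev/null \
  || iptables -I INPUT -m set --match-set blacklist src -j DROP

tail -F /var/log/caddy/access.log | while read -r line; do
  case "$line" in
    */poison-link*|*GPTBot*|*Bytespider*)
      # Assumes a common log format with the client IP as the first
      # field; adjust the extraction for your log format.
      ip=$(printf '%s' "$line" | awk '{print $1}')
      ipset add blacklist "$ip" -exist
      ;;
  esac
done
```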
Hi, there are pre-made ipset lists too, e.g. https://github.com/ktsaou/blocklist-ipsets
My phone hates Anubis.
it’s mentioned in this article
I like the quirky SPH character
I don’t really understand what I am seeing here, so I have to ask – are these security issues a concern?
https://github.com/TecharoHQ/anubis/security
I have a server running a few tiny web sites, so I am considering this, but I’m always concerned about the possibility that adding more things to it could make it less secure, versus more. Thanks for any thoughts.
Security issues are always a concern; the question is how much. Looking at them, they seem at most to be ways to circumvent the Anubis redirect system and reach your page using very specific exploits. These are marked as low to moderate priority, and I don’t see anything that implies system-level access, which is the big concern. Obviously do what you feel is best, but IMO it’s not worth sweating about. The nice thing about open-source projects is that anyone can look through and fix things; if this gets more popular you can expect bug bounties and professional pen-testing submissions.
Counterpoint: Anubis is not awesome: https://lock.cmpxchg8b.com/anubis.html
thank you! this needed to be said.
- This post is a bit critical of a small well-intentioned project, so I felt obliged to email the maintainer to discuss it before posting it online. I didn’t hear back.
I used to watch the dev on Mastodon; they seemed pretty radicalized on killing AI, and anyone who uses it (kidding!!). I’m not even surprised you didn’t hear back.
Great take on the software. As far as I can tell, Playwright still works/completes the unit of work. At scale Anubis still seems to work if you have popular content, but it hasn’t stopped me using Claude Code + virtual browsers.
I’m not actively testing it, though. I’m probably very wrong about a few things, but I know Anubis isn’t hindering my personal scraping. It does fuck up the Perplexity and ChatGPT bots, which is fun to see.
Good luck, blue team!
For clarity: I didn’t write the article, it’s just a good reference.
What use cases does Perplexity cover that Claude doesn’t for you?
I use it with OpenBSD’s relayd and I find it amazing how little maintenance it needs.
When I visit sites on my cellphone, Anubis often doesn’t let me through.
I’ve never had any issues on my phone using Fennec or Firefox. I don’t have many addons installed apart from uBlock Origin. I wouldn’t be surprised if some privacy addons cause issues with Anubis though.
Yeah, my setup is almost like yours; I’m also on Firefox with uBlock, and the only difference is that I’m also using Privacy Badger.
It’s a great service. I hate the character.
You know, the thing is that they know the character is a problem/annoyance; that’s how they grease the wheels on selling subscription access to a commercial version with different branding.
https://anubis.techaro.lol/docs/admin/botstopper/
Pricing from the site:
Commercial support and an unbranded version
If you want to use Anubis but organizational policies prevent you from using the branding that the open source project ships, we offer a commercial version of Anubis named BotStopper. BotStopper builds off of the open source core of Anubis and offers organizations more control over the branding, including but not limited to:
- Custom images for different states of the challenge process (in process, success, failure)
- Custom CSS and fonts
- Custom titles for the challenge and error pages
- “Anubis” replaced with “BotStopper” across the UI
- A private bug tracker for issues
In the near future this will expand to:
- A private challenge implementation that does advanced fingerprinting to check if the client is a genuine browser or not
- Advanced fingerprinting via Thoth-based advanced checks
In order to sign up for BotStopper, please do one of the following:
- Sign up on GitHub Sponsors at the $50 per month tier or higher
- Email sales@techaro.lol with your requirements for invoicing, please note that custom invoicing will cost more than using GitHub Sponsors for understandable overhead reasons
I have to respect the play, tbh; it’s clever. Absolutely the kind of greasy shit play that Julian from Trailer Park Boys would do if he were an open-source developer.

I wish more projects did stuff like this.
It just feels silly and unprofessional while being seriously useful. Exactly my flavour of software; it makes the web feel less corporate.