Man, I know that feeling. One thing that helped me deal with issues like this was keeping a changelog. Basically, I write down what a setting was, what I changed it to, and why. If something goes wrong, I can at least undo the changes I've made and see if that helps. It's not perfect, but it might shave a few hours off an RCA.
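Mine are just one-liners in a text file, something like (made-up example): 2024-05-10, nginx worker_connections 768 -> 2048, because we were hitting the connection limit under load.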

I feel like this wouldn’t reduce costs, since the load is the same, just moved to a different daemon (nginx, in this case). I, for one, pay for bandwidth on my VPS, so the cost for me would be the same.
One thought I’ve had is to use a slowloris-style technique combined with a small pool of connections and an AI poisoner, to keep the scraper occupied for as long as possible without using much bandwidth.
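Roughly what I'm picturing, as a sketch only. The port, the timings, the pool size and the "poisoner" (just random letters here) are all placeholders, a real setup would serve something more convincing:

```python
# Rough sketch: a tiny asyncio tarpit that caps concurrent connections
# and trickles junk text out very slowly, so a scraper stays busy while
# we spend almost no bandwidth. All numbers below are placeholders.
import asyncio
import random
import string

MAX_CONNECTIONS = 32     # small pool so the tarpit can't exhaust the box
CHUNK_BYTES = 8          # trickle a handful of bytes at a time...
SECONDS_BETWEEN = 10     # ...with long pauses in between

pool = asyncio.Semaphore(MAX_CONNECTIONS)

def junk(n: int) -> bytes:
    # stand-in for a real AI poisoner: just random lowercase noise
    return "".join(random.choices(string.ascii_lowercase + " ", k=n)).encode()

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    if pool.locked():
        writer.close()               # pool is full: drop the connection outright
        return
    async with pool:
        try:
            await reader.read(1024)  # swallow the request, no need to parse it
            writer.write(b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n")
            await writer.drain()
            while True:              # dribble junk until the scraper gives up
                writer.write(junk(CHUNK_BYTES))
                await writer.drain()
                await asyncio.sleep(SECONDS_BETWEEN)
        except OSError:
            pass                     # client went away; nothing to clean up
        finally:
            writer.close()

async def main() -> None:
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

In practice I'd probably stick something like this behind nginx and only route the paths that legit users never touch (honeypot links, stuff disallowed in robots.txt) to it, but that's a whole other discussion.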