  • I’m somewhat paranoid, so I run several isolated servers. It’s still not bulletproof and never will be!

    • Only the isolated server (i.e., the one without internet access) can fetch data from the other servers, not vice versa.
    • SSH access is key-based only
    • Firewall drops everything except non-standard ports on dedicated subnets
    • Fail2ban bans after 2 attempts (see the config sketch after this list)
    • Minimum password length of 24 characters, 2FA, password rotation every 6 months
    • Guest network for friends with no access to any internal subnet
    • Reverse proxy (HTTPS, port 443 only)
    • Every service runs under a non-privileged user
    • Isolated Docker services/databases and dedicated Docker networks
    • Every drive + system LUKS-encrypted with passphrase only
    • Dedicated server for home automation only
    • Dedicated server for Docker services and reverse proxy only
    • Isolated data/backup server sharing data via NFS to a TV box and audio system that have no network access
    • Offsite data/backup server, reachable via SSH tunnel, hosted by a friend
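
    As a rough sketch, the SSH and Fail2ban items above translate to config like this (values illustrative, not my exact files):

    # /etc/ssh/sshd_config — key-based access only
    PasswordAuthentication no
    PubkeyAuthentication yes
    PermitRootLogin no

    # /etc/fail2ban/jail.local — ban after 2 failed attempts
    [sshd]
    enabled  = true
    maxretry = 2
    findtime = 10m
    bantime  = 1h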

  • I set up custom bash scripts that collect information (df, docker JSON, smartctl, etc.). They either parse existing JSON or assemble JSON strings, then push it to the Home Assistant REST API via cron. In Home Assistant the data is turned into sensors and displayed, and HA sends messages if a sensor fails.
    Info served in HA:

    • HDD/SSD (size, smartctl errors, spin up/down, temperature, etc.)
    • Availability/health of docker services
    • CPU usage/RAM/temperature
    • Network interface/throughput/speed/connections
    • fail2ban jails

    I try to keep my servers as barebones as possible, since additional services/apps put strain on CPU/RAM. I found that most of the data needed for monitoring is either already available as JSON (docker inspect, smartctl --json) or can be easily captured, e.g.

    df -Pht ext4 | tail -n +2 | awk '{ print $1 }'
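
    The push side looks roughly like this (a simplified sketch; the host, sensor name, and token handling are placeholders, not my actual setup):

    # grab root filesystem usage and push it to HA as a sensor
    usage=$(df -P / | tail -n +2 | awk '{ print $5 }' | tr -d '%')
    curl -s -X POST \
      -H "Authorization: Bearer ${HA_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "{\"state\": ${usage}, \"attributes\": {\"unit_of_measurement\": \"%\"}}" \
      http://homeassistant.local:8123/api/states/sensor.root_disk_usage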

    It was fun learning and deciding what needs monitoring and what doesn’t, and building a custom interface in HA.


  • I’m using network overlays for individual containers and for separation.
    Second, Fail2ban is installed on the host to secure Docker services. Ban on the Docker-specific FORWARD chains instead of the INPUT chain (see “Configure Fail2Ban for a Docker Container” on seifer.guru). Use 2FA for services where available.
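
    A hypothetical jail entry for a proxied service could look like this (the chain, log path, and filter choice are assumptions; the filter itself must already exist):

    # /etc/fail2ban/jail.local — ban on the chain Docker traffic traverses
    [nginx-http-auth]
    enabled  = true
    chain    = DOCKER-USER
    port     = http,https
    logpath  = /var/log/nginx/error.log
    maxretry = 2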

    Rootless Docker has limitations when it comes to exposing ports, storage drivers, network overlays, etc.

    The host auto-installs security patches but is rebooted manually only.
    Docker containers are updated manually too. I build all containers from a Dockerfile and don’t pull prebuilt images, because most are modified (plugins, minimized sizes, dedicated user rights, etc.); a sketch follows.
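
    For illustration, a stripped-down build with a dedicated user might look like this (base image, paths, and entrypoint are made up):

    # Dockerfile — slim base, non-root user
    FROM alpine:3.19
    RUN addgroup -S app && adduser -S -G app app
    COPY --chown=app:app ./app /app
    USER app
    ENTRYPOINT ["/app/run.sh"]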


  • Bind mounts are easy to maintain and back up. However, if you share data among multiple containers, Docker volumes are recommended, especially for managing state.

    Backup volumes:

    docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

    • Launch a new container and mount the volume from the dbstore container
    • Mount a local host directory as /backup
    • Pass a command that tars the contents of the dbdata volume to a backup.tar file inside the /backup directory.

    docker docs - volumes
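
    The same docs show the matching restore: start a fresh container that mounts the target volume (dbstore2 here) and untar into it:

    docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"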

    Database volume backup without stopping the service: exec into the container, dump the database, and copy the dump out with docker cp; run it periodically via crontab (sketch below).
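
    For example (a sketch assuming a Postgres container named db and a database named app; adjust for your engine):

    # dump inside the container, then copy the file out
    docker exec db pg_dump -U postgres -d app -f /tmp/app.sql
    docker cp db:/tmp/app.sql /backup/app-$(date +%F).sql

    Wrapped in a script, a crontab line like 30 3 * * * /usr/local/bin/db-dump.sh runs it nightly.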


  • Define which data is of real value. I have 68 TB of data, but realistically only 3 TB are valuable enough that I maintain several copies (Raspberry Pi + SSD) plus an online backup. The rest is stored on a cheap server built at a family member’s place and synchronized twice a year (sketch below). Make sure your systems and drives are all encrypted, and test your backups and redeployment strategy.
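
    The twice-a-year sync can be as simple as rsync over SSH (paths and host are placeholders):

    # push the valuable subset to the offsite box
    rsync -aH --delete -e ssh /srv/data/important/ backup@offsite.example:/srv/backup/important/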
