For me it’s more like: I find a new, interesting self-hosted project, and then discover it’s only distributed as a Docker container without any proper packaging. As someone who runs FreeBSD, this is a frustration I’ve run into with quite a number of projects.
Eh, even as a Linux admin, I prefer hand installs I understand over mysterious Docker black boxes that ship god knows what.
Sure, if I’m trialing something to see if it’s worth my time, I’ll spin up a container. But once it’s time to actually deploy it, I do it by hand.
Yes, very much agreed on this. Docker is awesome, but IMO the reliance on it will absolutely cause issues down the line.
Sorry but IMO that’s FUD.
Relying on it legitimately prevents the very issues you’re saying it’s likely to cause. Containers are made to be both idempotent and ephemeral.
Take a Python project as an example. You make a venv and do all your work in there. You then generate a requirements.txt with all the versions pinned. You build the container on a pinned version of Alpine, or Ubuntu, or whatever. Wherever possible, you are pinning versions.
With best practices applied, the result is that the image will be functionally the same regardless of what system builds it, though normally it gets built once and stored on a registry like Docker Hub.
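As a minimal sketch of what that pinning looks like in practice (the tag names and app layout here are just illustrative, not from any particular project):

```dockerfile
# Pin the base image to an exact release; pinning by digest is stricter still.
FROM python:3.12-alpine3.19

WORKDIR /app

# requirements.txt came out of `pip freeze`, so every dependency version is pinned.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```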
The only chance a user has to screw things up is in setting environment variables, and that’s no different than it ever was. At least now they don’t have to worry about system- or language-level dependencies introducing a breaking change.
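For what it’s worth, that whole surface is just the usual `-e` flags at run time (the image and variable names below are hypothetical stand-ins for whatever the app documents):

```sh
# Hypothetical image; the variables are placeholders for the app's documented config.
docker run -d \
  -e DATABASE_URL="postgres://db:5432/app" \
  -e LOG_LEVEL="info" \
  example/app:1.4.2
```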
Same. Frustrating when you have to go digging for non-Docker instructions.
If it’s an open-source project, the Dockerfiles are usually available for reading.
Do you audit every line of code that you run in production? If you are trying some new Python/Django/SQL app, are you reviewing all that?
I’d assume that with a Python-based project you’d at least be able to look at the requirements and spot anything that sets off red flags. And either you’re familiar with and trust the maintainer, or you’re reviewing the actual Python itself.
Beyond that, the Dockerfile is essentially just installation instructions for getting it running on a fresh system of a given distribution. I wouldn’t call that a black box.
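And even when all you have is the image, the layer metadata records roughly those same instructions (image name is a placeholder):

```sh
# Print the command that created each layer of the image.
docker history --no-trunc example/app:1.4.2
```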
If the container isn’t part of an open-source project, then this is a moot point anyway. The project itself is a black box.
You do you. Speaking for myself, I prefer to understand and be able to trivially inspect and modify the moving parts in the things I deploy so I have a snowball’s chance in hell of debugging and fixing things when something inevitably goes wrong.
All I hear is FUD.
And all I see is someone taking this conversation way too personally.
You sound like someone who doesn’t want to save 10 minutes of work every day because it might cost you half an hour every month.
deleted by creator
Does Qemu work for you?
Virtualization in general? Sure, I can. I’ve tried it a bit with bhyve. But it’s definitely a lot heavier, since I’m now running a full Linux OS and dedicating resources to it, just to run Docker, just to run a Python or Node app.
Learning that the project is in Go, though, is a sigh of relief. Professionally I’ve moved to Go (from Python) just because it’s so damn easy to build and distribute.
I just wish there was better support for the other *nixes. While the language supports them just fine, Docker, on the other hand, strangles it. =(
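That language-level support really is a one-liner. A sketch of cross-compiling for FreeBSD (the binary and package path are placeholders):

```sh
# Cross-compile a self-contained binary for FreeBSD from any host platform.
# CGO_ENABLED=0 keeps it statically linked, so there's no libc mismatch to worry about.
CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 go build -o myapp ./cmd/myapp
```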