It’s phenomenally expensive from a practical standpoint: it takes an immense amount of engineering and DevOps effort to make this work for non-trivial production applications.
It’s egregiously expensive from an engineering standpoint, and most definitely more expensive from a cloud-bill standpoint as well.
We’re doing this right now with a non-trivial production application built for it, and it’s incredibly difficult to do right. It affects EVERYTHING, from the ground up. The standardization and governance needed just to keep things stable across many teams takes an entire dedicated team to make possible.
In my experience, using containers has removed the additional engineering cost of deploying across providers: a container is the same wherever it runs, every major provider offers container hosting, and most offer private cluster networking.
Deployment is simplified with something like Octopus Deploy, which can push to many destinations in a blue-green fashion with easy rollback.
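To make the blue-green point concrete, here’s a minimal sketch of the pattern (not Octopus itself; the `Environment` and `Router` names are just hypothetical stand-ins for a load balancer or DNS swap):

```python
# Blue-green flip: two identical environments sit behind a router; the release
# goes to the idle colour, gets health-checked, and traffic switches only if
# the checks pass. Rollback is just pointing back at the previous colour.
from dataclasses import dataclass


@dataclass
class Environment:
    colour: str
    version: str
    healthy: bool = True


class Router:
    """Hypothetical traffic router; real setups swap a load balancer target or DNS record."""

    def __init__(self, live: Environment, idle: Environment):
        self.live, self.idle = live, idle

    def deploy(self, version: str) -> None:
        # 1. Ship the new version to the idle environment.
        self.idle.version = version
        # 2. Run smoke tests against the idle environment (stubbed here).
        self.idle.healthy = self.smoke_test(self.idle)
        # 3. Swap traffic only if healthy; otherwise the live colour stays untouched.
        if self.idle.healthy:
            self.live, self.idle = self.idle, self.live
        else:
            raise RuntimeError(
                f"{self.idle.colour} failed smoke tests; staying on {self.live.colour}"
            )

    def rollback(self) -> None:
        # The previous release is still running on the idle colour, so rollback is a swap.
        self.live, self.idle = self.idle, self.live

    @staticmethod
    def smoke_test(env: Environment) -> bool:
        return True  # placeholder for real health checks


router = Router(Environment("blue", "v1.0"), Environment("green", "v1.0"))
router.deploy("v1.1")                         # green takes traffic with v1.1
print(router.live.colour, router.live.version)
router.rollback()                             # blue (still on v1.0) takes traffic again
```

The appeal is that the previous release is never torn down during the cutover, so rollback is a pointer swap rather than a redeploy.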
Yes, containers make your application logic portable.
That’s the lowest-hanging fruit on the tree.
Let’s talk about persistence logic, fail forwards, data synchronization, and write queues next.
Let’s also talk about cloud provider network egress costs.
Let’s also talk about specific service dependencies that may not be replicable across clouds, or even regions.
Oh, and provider-specific deployment nuances, IAM differences, networking differences, etc.
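To give a flavour of why those differences cost real engineering time, here’s a sketch of the kind of abstraction layer multi-cloud apps end up writing. The `ObjectStore` interface and the adapters are hypothetical stubs (in-memory dicts, not real SDK calls); the point is that every provider needs its own adapter with its own auth, retry, consistency, and egress-billing quirks:

```python
# Even "simple" object storage ends up behind an abstraction layer, because
# auth, consistency, retries, and egress billing differ per provider.
# The adapters below are stubs; real ones would wrap each vendor's SDK.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """The cross-cloud interface your application code is allowed to see."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class AwsStore(ObjectStore):
    # Real version: S3 client, IAM-role auth, S3-specific retry and consistency handling.
    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


class GcpStore(ObjectStore):
    # Real version: GCS client, service-account auth, different ACL and pagination model.
    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def make_store(provider: str) -> ObjectStore:
    # Every new provider (or region-specific quirk) adds another branch to build and maintain.
    return {"aws": AwsStore, "gcp": GcpStore}[provider]()


store = make_store("aws")
store.put("report.csv", b"hello")
print(store.get("report.csv"))
```

Multiply that by queues, identity, networking, and persistence, and the “containers solve it” story stops being the whole story.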
Containers are nice, but don’t really cover things like firewalls, network configuration, identity management, and a whole host of other things, the configuration of which varies between providers.
I mean, technically, you could containerize all the elements you need: firewalls, load balancers, identity management, etc. But at that point you’re building your company’s own version of the cloud services that are one of the big draws of the cloud in the first place, since you’d no longer be developing and maintaining those systems yourself. Once you’ve made “AWS lite” in container form, you can deploy it directly to compute instances on any cloud provider. But now you have to maintain everything as if you were running on-prem (i.e. more developers and network engineers again), all while paying a pretty penny to multiple cloud providers. And since your infrastructure containers need to run 24/7 instead of only running compute on demand, your costs will skyrocket. At that point, why not just move back to on-prem hosting?
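A rough back-of-envelope (purely illustrative rates and instance counts, not real pricing) shows why the always-on infrastructure layer dominates:

```python
# Made-up hourly rates, just to show the shape of the problem: the self-hosted
# "AWS lite" layer runs 24/7 and is duplicated on every provider you span,
# while the application compute it supports may only run on demand.
HOURS_PER_MONTH = 730


def monthly_cost(hourly_rate: float, hours: float, instances: int) -> float:
    return hourly_rate * hours * instances


# On-demand application compute: ~8 hours/day on one provider (hypothetical numbers).
app = monthly_cost(hourly_rate=0.10, hours=8 * 30, instances=4)

# Self-hosted infra containers (LB, identity, firewall): 24/7, per provider.
infra_per_cloud = monthly_cost(hourly_rate=0.10, hours=HOURS_PER_MONTH, instances=6)
providers = 2

print(f"app compute:  ${app:,.2f}/month")
print(f"infra layer:  ${infra_per_cloud * providers:,.2f}/month across {providers} clouds")
```

With these toy numbers the infrastructure layer is nearly an order of magnitude more than the compute it exists to support, which is the “why not just go back on-prem” argument in a nutshell.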