
Docker on Public Cloud - A consistent and repeatable way to build and deploy applications

Aaron Walker

14 July 2015

4 Minute Read

We first started using Docker because it represented a consistent and repeatable way to build and deploy applications.

Really what Docker does is take away a lot of the headaches often caused by managing and deploying applications.

Traditionally a developer builds the application and then provides the application to us in some form. We often have to then deploy the application into an environment that’s vastly different from the one in which it was built.


Docker provides a standardised way to package up these applications and run them in any environment, irrespective of whether the application is running on a developer’s local workstation or in a cloud environment.

Exactly the same image runs in each of those environments, creating a much more reliable, repeatable, and scalable process, because the configuration and environment-specific elements have been captured in the container itself.

This is an attractive proposition because it standardises the process of building images and pushing out containers, which means that what’s inside the container becomes less important to that process. While of course we still need to pay attention to the container and the application, the management and deployment is much more standardised, ensuring less risk and more efficiency for development teams.
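As a rough sketch, that standardised build-and-publish step looks the same for every application (the image and registry names below are placeholders, not a real project):

```shell
# Build an image from the application's Dockerfile in the current
# directory and tag it with a version
docker build -t myorg/myapp:1.0 .

# Push the image to a registry so any environment can pull the
# identical image later
docker push myorg/myapp:1.0
```

Whatever the application is, the team managing deployment only ever deals with images and containers, not with the application’s own build tooling.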


From a portability perspective, if we were moving a stack from Azure to AWS, or vice versa, in a more traditional way, a reasonable amount of effort would be required. With Docker, however, we can simply log onto an existing cloud host, take a Docker container, migrate it to AWS or Azure, and run it there with no changes whatsoever.
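For example, the same published image can be pulled and run unchanged on a Docker-enabled host in either cloud (the image name and port mapping here are illustrative):

```shell
# On any Docker host, whether in AWS or Azure -- same commands,
# same image, no changes
docker pull myorg/myapp:1.0
docker run -d -p 80:8080 myorg/myapp:1.0
```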

This portability is one of the most attractive aspects of using Docker and one of the reasons we’re seeing it grow in popularity throughout the industry. It’s also very important for enterprises looking at hybrid cloud solutions – for example, some instances on AWS and some on Azure. It means customers who make use of more than one cloud provider are no longer locked into any particular one.

Docker is Growing in Popularity

Customers are coming to us having seen Docker – they already know about it and want to change the way they do things because of what they’ve heard.

We’re frequently reminded that our customers are technologically advanced and educated about the options available to them. They’re constantly reviewing things like their technology stack and how they develop and deploy apps. We’re pushing the boundary more with our customers in managing and deploying applications using tools like Docker on cloud platforms.

We are constantly looking ahead to make sure we’re ready for what’s coming over the horizon with new technology. While it’s still early days, Docker is fast becoming an industry standard for building and deploying applications, even more so since the announcement of the Open Container Project, backed by all the major players including Amazon, Red Hat, Microsoft and Google.

Docker is just a user-friendly way of using the underlying Linux container technology. The important thing about Docker is that it’s a way to build, package and deploy applications in a standardised way.

Challenges to Using Docker on Cloud Platforms

There are still many situations where Docker isn’t ideal, however the industry is working to improve these areas of weakness. For example, because Docker is a Linux-only technology, it won’t run natively on Windows; it needs a Linux kernel underneath. Running Docker on Azure therefore requires spinning up a Linux host with Docker installed.
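As a sketch, once a Linux VM (assumed here to be Ubuntu, provisioned through the Azure portal or CLI) is up, getting Docker onto it is only a couple of commands:

```shell
# Install Docker via the official convenience script
curl -sSL https://get.docker.com/ | sh

# Verify the daemon is running by launching a test container
sudo docker run hello-world
```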

It seems that Microsoft is putting a lot of energy into overcoming challenges like this, including the ability to use the Docker ecosystem within Windows, which is great to see. In any case, this does not represent a big challenge for us.

With AWS, the EC2 Container Service (ECS) still has limitations around handling container or node failures. Like most things on AWS, ECS is very Amazon-focused and tends to lag behind new features in the rapidly changing Docker world. Capabilities that require escalated privileges are missing, but you gain other benefits that plain Docker does not offer, such as automatic registration with Elastic Load Balancers.
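As an illustrative sketch of that load balancer integration, an ECS service can be attached to a classic Elastic Load Balancer with the AWS CLI (all names here – cluster, service, task definition, load balancer, and IAM role – are hypothetical):

```shell
# Create an ECS service whose containers are automatically
# registered with an existing Elastic Load Balancer
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-app \
  --task-definition my-app:1 \
  --desired-count 2 \
  --load-balancers loadBalancerName=my-elb,containerName=web,containerPort=8080 \
  --role ecsServiceRole
```

ECS then keeps the ELB’s registered instances in sync as containers are started, stopped, or replaced.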

Again, it seems that Amazon is investing in Docker, and ECS is ideal for those who plan to stay within the Amazon ecosystem rather than pursue hybrid models. Red Hat is also putting a lot of effort into Docker: it has recently introduced Docker as the backbone of OpenShift and is enabling PaaS services in conjunction with Kubernetes, although Red Hat has not yet migrated all cartridges across to this new stack.

With any new technology in its infancy there are teething problems and rough edges. It’s an extremely dynamic environment: tasks you once needed to do manually are steadily being rolled up into higher-level management tools, which are easier to work with from a developer or management point of view.

Rapid Changes

The way we were building and running containers even just six months ago was very different from the way we do things today, which shows what a fluid space this is.

We need to ensure we’re constantly looking forward and re-evaluating how we build, run, and manage containers.

For every customer we’ve used Docker with, we’ve done it differently, which has been not only a great learning experience but is also testament to the flexibility that Docker offers developers.
