At the heart of the modern cloud is a tension between two different philosophies: IaaS, where you build a virtual infrastructure on a fabric of host systems managed by a cloud provider, and PaaS, where you write code for runtimes managed by your provider, targeting their service APIs. Both approaches give you a layer of abstraction from physical infrastructure and host operations, allowing you to focus on your applications.
Containers provide a middle ground between these two methods, letting you rely on platforms managed by cloud operators while allowing you to write more complex code and to package required applications and other dependencies. You don’t have to manage OS-level security or updates and you’re not limited to the languages and APIs supported by platform runtimes. It’s an effective compromise, with technologies such as Kubernetes providing the necessary container-level systems management tools.
It’s fair to say that Kubernetes is complex, even with managed services such as Azure Kubernetes Service (AKS). That’s because it’s a tool for running distributed applications at scale, for code that needs to hold up under demand and run globally, replicating nodes around the world. With AKS you can deploy clusters of containers and ensure that they stay running, deploying new versions directly from a CI/CD (continuous integration/continuous delivery) pipeline as and when they’re ready for use.
Another class of application could benefit from containers: the code that normally runs behind websites and services, where there’s no need for global scale (although reliable operation still matters). It might not have millions of users, but hundreds or thousands of people will use that code, and more than one business may well depend on it.
Azure App Service: Azure’s original serverless platform
Azure’s App Service is one of the oldest pieces of Azure’s PaaS offering. It’s a runtime for web and mobile back ends, using common web development languages and tools. Code is written in your choice of editors and saved directly on Azure, where it runs on demand and is billed based on your choice of plan, from free instances for low-usage sites and services to plans with dedicated compute resources. Features vary from tier to tier, and you can ramp up or down as required.
It remains a useful tool, suitable for many different types of applications, with access to Azure services and external APIs. At the same time, it’s a useful introduction to working with serverless technologies, with API-based apps running on demand. Lessons learned in Azure App Service can help you understand how many other Azure platform features operate, as they share the same underlying worker-based VM model.
Hosting single custom containers in App Service
One important feature in Azure’s platform services can help with a migration to container-based cloud applications when you only need to run a handful of container-based microservices and don’t need to respond to rapid changes in demand. Instead of spinning up a Kubernetes cluster and connecting it to your build process, you can host your containers in Azure App Service with a limited number of instances. It’s a good way to start thinking about how to containerize applications and services, and it gives you a low-cost way to test services and support users who only need access to one or two API endpoints.
A simple scenario for using containers with Azure App Service is lifting and shifting to the cloud an existing web application built on an alternative stack. Maybe you’ve been experimenting with Go or Rust as a web service host, or you’re using an alternative web application language, such as Facebook’s Hack PHP derivative. Putting your application and its supporting environment in a container lets you move it to the cloud without a rewrite, giving you time to build a new version in a supported framework or to move to an alternative hosting model, whether serverless Functions or AKS.
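As a sketch of what lift and shift looks like in practice, a container for a hypothetical Go web service needs little more than a Dockerfile that builds the existing code unchanged; the file layout and listening port here are illustrative assumptions, not a prescribed structure:

```dockerfile
# Illustrative lift-and-shift image for a hypothetical Go web service.
# The build stage compiles the existing code as-is; the runtime stage
# ships only the static binary on a minimal base image.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

FROM gcr.io/distroless/static
COPY --from=build /app /app
# The port your existing service already listens on (assumed 8080 here).
EXPOSE 8080
ENTRYPOINT ["/app"]
```

The same pattern applies to Rust, Hack, or any other stack: the container carries the runtime the application already depends on, so nothing needs rewriting before the move.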
Getting started with Azure App Service containers
Working with a container image in Azure App Service is straightforward. Start with a container image holding your code, stored in Azure Container Registry. There’s support for both Windows and Linux containers, although in practice most of your applications will run on Linux. You can use Docker for Windows and WSL 2 to build and test containers, starting with applications running in WSL or in a Linux virtual machine. Docker’s tools help package code, stack, and dependencies and, if necessary, can provide base images for common Linux releases that speed up the process of moving applications from Linux servers to containers.
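The build-and-push loop can be sketched with a few Docker and Azure CLI commands; the registry and image names below are placeholders, and the commands assume you’re already signed in with `az login`:

```shell
# Build locally (Docker Desktop / WSL 2) and push to Azure Container Registry.
# "myregistry" and "myapp" are placeholder names.
az acr login --name myregistry
docker build -t myregistry.azurecr.io/myapp:v1 .
docker push myregistry.azurecr.io/myapp:v1

# Alternatively, have ACR build the image in the cloud from the local context:
az acr build --registry myregistry --image myapp:v1 .
```

Once the image is in ACR, it’s available to App Service, AKS, or any other Azure container host.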
Microsoft’s own Visual Studio Code offers an Azure App Service extension, as well as support for Docker and remote editing and debugging tools that can run inside a container. Working in a familiar environment should help reduce risks in any initial deployment. Once the extension is installed, you can sign in to Azure from inside Visual Studio Code, connect to the Azure Container Registry, and deploy directly to Azure App Service from your editor, setting up resource groups and choosing a plan and region, while continuing to use Code to edit, test, and debug your application.
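If you prefer the command line to the editor extension, the same deployment can be sketched with the Azure CLI; the resource names are placeholders, and the flags reflect the Linux container workflow:

```shell
# Create a resource group, a Linux App Service plan, and a web app
# running a container image pulled from ACR. All names are placeholders.
az group create --name myrg --location westeurope
az appservice plan create --name myplan --resource-group myrg \
    --is-linux --sku B1
az webapp create --name mycontainerapp --resource-group myrg \
    --plan myplan \
    --deployment-container-image-name myregistry.azurecr.io/myapp:v1
```

The plan SKU controls cost and capacity, so a B-series plan like the one above suits testing, with larger tiers available as usage grows.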
Build, deploy, and run more complex applications
You’re not limited to a single container. Support for Docker Compose in the Azure command line tools allows you to bring container groups into temporary Azure storage before creating a multicontainer app in Azure App Service. This uses a Compose definition to deploy containers from an Azure CLI directory and then start your application.
Using containers like this can help deploy pre-existing web application frameworks to Azure App Service. You can host applications such as WordPress along with their content databases and any preconfigured plug-ins. Again, it’s sensible to build and test groups of containers locally before deploying to the Azure Container Registry and then to Azure App Service. Any time a container in ACR is updated, Azure App Service will automatically pull it and update its running images. This approach simplifies building and running apps, as you can use ACR as the endpoint for any build pipeline and automate the process of updating apps.
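A minimal sketch of such a multicontainer deployment might use a Compose file like the following, pairing the public WordPress image with a MySQL database; the service names and credentials are illustrative placeholders only:

```yaml
# docker-compose.yml -- illustrative two-container WordPress app
version: '3.3'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example    # placeholder secret
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress:latest
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: example  # placeholder secret
```

A file like this can then be passed to `az webapp create` with the `--multicontainer-config-type compose` and `--multicontainer-config-file` options to stand up the whole group as one App Service app.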
Microsoft has recently opened up access to Azure’s storage features from App Service-hosted containers, letting your containers treat Azure storage as a permanently connected network share. Best practice for containers is to keep them stateless, so using Azure storage for application data makes it easier to manage storage and use continuous deployment tools.
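Mounting an Azure Files share into a container can be sketched with the Azure CLI’s bring-your-own-storage command; the account, share, and path names below are placeholders:

```shell
# Attach an Azure Files share to the container at /var/appdata.
# All names and the key value are placeholders.
az webapp config storage-account add \
    --resource-group myrg --name mycontainerapp \
    --custom-id appdata \
    --storage-type AzureFiles \
    --account-name mystorageacct \
    --share-name appfiles \
    --access-key "<storage-account-key>" \
    --mount-path /var/appdata
```

With state held on the share, a container can be torn down and replaced by a newer build without losing application data.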
If you enable App Service’s continuous deployment feature, it will create and connect a web hook to ACR, ensuring that updated containers will automatically deploy. Alternatively, if you’re using GitHub, a set of preconfigured actions will deploy a container to ACR and then to Azure App Service once code has been merged, tested, and built. GitHub will assemble the Docker container that your app runs in, log in to ACR using stored credentials, and then push the image to the registry. The ACR web hook used by Azure App Service’s continuous deployment tools will identify the updated container and then deploy it.
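Continuous deployment can also be switched on from the CLI; this sketch, with placeholder resource names, enables it and prints the webhook URL that ACR calls when a new image is pushed:

```shell
# Enable continuous deployment for the container-based web app.
# "mycontainerapp" and "myrg" are placeholder names.
az webapp deployment container config --enable-cd true \
    --name mycontainerapp --resource-group myrg

# Show the webhook URL App Service exposes; registered as an ACR webhook,
# it lets pushed images trigger an automatic redeploy.
az webapp deployment container show-cd-url \
    --name mycontainerapp --resource-group myrg
```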
Allowing containers to run in Azure App Service makes a lot of sense; Kubernetes is overkill for a small single-container application that doesn’t need to scale, while Functions is aimed at simple event-driven services. Azure App Service’s worker model gives you many of the advantages of serverless computing, and its container support hits the sweet spot between Kubernetes and Functions. As an added bonus, its plan-based pricing makes it more attractive than building and deploying your own virtual machines, without the support overhead of maintaining VM images. If your code doesn’t need massive scale and fits into one or two containers, it’s an approach that’s well worth considering.