You’ve probably heard about serverless architecture by now, and you may be wondering what all the fuss is about. In this post, we’ll dig into what serverless really means for developers and system operators.

To frame the exploration, let’s take a quick look back at Docker’s canonical example-voting-app, which you may have used to try out some of the new features in Compose or Swarm. The architecture of this application is simple: a Python web application with two voting options, a Redis queue that collects the votes, a .NET worker that consumes the votes and stores them in a Docker-volume-backed PostgreSQL database, and a Node.js web application that displays the results.
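Sketched as a Compose file, that stack might look roughly like the following. This is an approximation for orientation only; the service names, images, and build paths are illustrative, not the repository’s exact docker-compose.yml (the host ports match the ones used later in this post).

```yaml
# Illustrative sketch of the five-service stack, not the repo's exact file.
version: "2"

services:
  vote:                 # Python web app where votes are cast
    build: ./vote
    ports:
      - "5000:80"
  redis:                # queue that collects incoming votes
    image: redis:alpine
  worker:               # .NET service that consumes votes from Redis
    build: ./worker
  db:                   # PostgreSQL, backed by a named Docker volume
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
  result:               # Node.js app that displays the results
    build: ./result
    ports:
      - "5001:80"

volumes:
  db-data:
```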

Here’s a handy visual.

[Image: Building a serverless application with Docker_1.png]


With the original model, all five of these pieces run on persistent containers. Let’s fire up the cluster and take a look.

$ git clone
$ cd example-voting-app

In this case, let’s start the Compose stack in one window, so we can watch the process while checking on the various infrastructure pieces.

$ docker-compose up

The voting application runs on port 5000, while the results app lives on port 5001. Let’s open both via a separate terminal window.

$ open http://localhost:5000 && open http://localhost:5001

Great! What pieces are currently running? Run the ‘docker-compose ps’ command to take a look at the five containers we described above.


Try out the voting mechanism to get a sense of how a vote moves through the containers, via the Docker Compose logs. Here, the logs show the Python container recording the vote, the .NET worker processing it, and the vote then being stored in the PostgreSQL container.
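The worker’s hand-off can be sketched in a few lines of Python. The real worker is a .NET program, so this is only an illustration of the shape of the flow; the JSON field names and queue name are hypothetical.

```python
import json

def parse_vote(raw):
    """Decode a vote payload as it might come off the Redis queue.

    Hypothetical field names, for illustration only: the real .NET
    worker defines its own message format. JSON in, SQL params out.
    """
    vote = json.loads(raw)
    return (vote["voter_id"], vote["vote"])

# In the real stack, a loop would pop votes from Redis and insert them
# into PostgreSQL, roughly:
#   while True:
#       _, raw = redis_conn.blpop("votes")
#       voter_id, choice = parse_vote(raw)
#       cursor.execute("INSERT INTO votes (id, vote) VALUES (%s, %s)",
#                      (voter_id, choice))
```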


You can get a sense of which containers are required to run this stack by checking what the Docker Engine itself is running, too.


That’s five containers running for five services, all of which are persistent. At least, for now.

[Image: Building a serverless application with Docker_2.png]

Let’s see what we can do about those persistent containers with a serverless code update. Here’s what we’re looking to accomplish, with persistent containers in red, and ephemeral Docker functions as serverless components in green.

[Image: Building a serverless application with Docker_3.png]

In this design, the green blocks are Docker containers that run on demand, when a particular function is called. This means there are fewer long-running services to debug, and we’re also able to leverage the scaling capabilities of Docker Swarm. Let’s fire up an example.
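To make “on-demand function container” concrete: each function amounts to a one-shot `docker run --rm` that starts, does its single job, and exits. A minimal helper might look like this sketch (the command construction is split out so it can be inspected without a Docker daemon; `record-vote-task` is the task image named later in this post):

```python
import subprocess

def build_command(image, *args):
    # Kept separate from run_function so the argv can be inspected in tests.
    # --rm removes the container the moment the function returns, so no
    # state lingers between invocations.
    return ["docker", "run", "--rm", image, *args]

def run_function(image, *args):
    """Invoke a 'serverless' function by running its container once
    and capturing whatever it wrote to stdout."""
    result = subprocess.run(build_command(image, *args),
                            capture_output=True, text=True, check=True)
    return result.stdout
```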

There’s a lot going on here, so let’s look at the output first, and then build the stack ourselves. I’m also going to recommend that you run the instantiation of this program out of sequence: we’ll break it down starting with the completed product, work our way backwards to the start, and then I’ll have you run the Makefile steps in reverse, so you can more easily see what’s going on.

Trust me, you’ll see!

So, the output.

Shortly after firing up the stack via ‘make’, you’ll see the first container get created:


This container is the initial entrypoint for the stack; it’s built from a Golang image that is compiled ‘onbuild.’ You can see the reference in the Makefile’s “build” step, which is called as the first step of the default “run” step.
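A Makefile with that shape might look something like this sketch. It is illustrative only; the image name is hypothetical, and the repository’s actual Makefile may differ:

```make
# Illustrative sketch; the repository's actual Makefile may differ.
run: build
	docker-compose up --build

build:
	docker build -t serverless-voting-app .
```

The point is simply that ‘run’ depends on ‘build’, so invoking ‘make’ compiles the Golang entrypoint image before Compose brings up the stack.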


Run on the command line, this would look like the following, which creates an image for the stack to use:


After that image is created, the Makefile runs the standard Compose file via ‘docker-compose up --build’, which takes all of the serverless code and deploys it for us. Now we’ll see the two persistent containers that form the foundation of the serverless architecture.


Well done!

We should expect to see a few things. If we go to the voting page, we should see a container blip in and out of existence as our vote is recorded. First we see it in the logs:


And then in checking the last created container:


The ‘/vote’ endpoint is a fairly simple Python Flask application; it runs the record-vote-task container, which doesn’t stick around for long.
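In sketch form, the handler’s job is small: validate the choice, fire the record-vote-task container once, and respond. This is not the repository’s actual view code; the vote values are hypothetical, and the `runner` parameter exists only so the sketch can be exercised without a Docker daemon.

```python
import subprocess

VALID_VOTES = {"a", "b"}  # the two voting options (hypothetical values)

def vote_handler(vote, runner=subprocess.run):
    """Sketch of what the '/vote' Flask view boils down to: validate,
    run the record-vote-task container once (--rm makes it ephemeral),
    and return immediately."""
    if vote not in VALID_VOTES:
        return ("invalid vote", 400)
    runner(["docker", "run", "--rm", "record-vote-task", vote], check=True)
    return ("vote recorded", 200)
```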

The ‘/results’ endpoint is slightly more complex: it runs a Perl application that keeps instantiating and exiting containers for as long as you have http://localhost/result/ open in a browser. It should look something like the following.

First, you have your container creation.

Then the container finishes.

To see this more clearly in action, I recommend running the Makefile steps manually, and in reverse. This will allow you to see the serverless pieces of software get built first via this command:

$ docker-compose build

Once that completes, you can run the ‘make run’ command locally, which is equivalent to the following, to see the containers spin up:

$ docker-compose up --build

Great, now you’re finished! If you’d like to read more about how Docker is thinking about serverless applications, I recommend checking out Ben Firshman’s Funker, which includes a Funker-specific implementation of the application above that simplifies the stack further by removing the entrypoint.


I’d also heartily recommend checking out Alex Ellis’ latest work on “Functions as a Service (FaaS)”, which I consider to be a true realization of the serverless method we’ve been discussing. He’s built a small application that you can test-drive, complete with a Docker Compose file that will spin up an example.

Be sure to check out the development branch of his project to see auto-scaling, Prometheus alerts, more sample functions, and a brand-new UI. Here’s his blog post on the subject.


For reference, here is the ‘docker-compose ps’ output for the original five-container stack:

$ docker-compose ps
          Name                         Command               State                      Ports
db                          docker-entrypoint.sh postgres    Up      5432/tcp
examplevotingapp_result_1   nodemon --debug server.js        Up      0.0.0.0:5858->5858/tcp, 0.0.0.0:5001->80/tcp
examplevotingapp_vote_1     python app.py                    Up      0.0.0.0:5000->80/tcp
examplevotingapp_worker_1   /bin/sh -c dotnet Worker.dll     Up
redis                       docker-entrypoint.sh redis ...   Up      0.0.0.0:32768->6379/tcp



Jesse White

VP Consulting, Financial Services

Jesse White is VP Consulting for Financial Services with more than 15 years of technology industry experience across financial services, healthcare, advertising, e-commerce, and IoT verticals. As a 10 year veteran of New York City’s vibrant “Silicon Alley”, he’s intimately familiar with delivering cloud solutions, effective team building, and Agile process adoption. As an early contributor to the Container ecosystem, he plays a foundational role in New York City’s open source community as founder of DockerNYC. Jesse has spent the last 5 years focusing on the intersection of automation, cloud computing, security and DevOps methodologies.
