
Serverless is one of the fastest-growing technologies of 2018 and the market offerings around it are expanding accordingly! We sat down with Alex Ellis, creator of OpenFaaS, an open source Functions-as-a-Service framework that’s rapidly growing in popularity. Alex has been a Docker Captain since 2016 and recently joined VMware’s Open Source Technology Center to work on OpenFaaS full time.

Hi Alex, why don’t you give us the 30 second pitch on what OpenFaaS is?

Sure. OpenFaaS makes it simple to turn anything into a serverless function that runs on Linux or Windows through Docker Swarm or Kubernetes.

The target audience is developers. The idea is to make it as simple as possible to create a function that is built for, deployed to and run on Docker Swarm or Kubernetes, while providing a workflow that integrates directly with the Docker ecosystem. Rather than throwing a zip file over a wall or editing in a web form, developers can actually go ahead and build a Docker image from GitHub and then use the same artifact in dev, staging and production. By building on top of a container orchestration platform, you get a lot of built-in functionality such as self-healing infrastructure, auto-scaling and the ability to control every aspect of the cluster.
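To make that concrete, here is roughly what the developer-facing part of a function looks like. A minimal sketch assuming the conventions of the OpenFaaS Python template, where the template supplies the Dockerfile and HTTP plumbing and the developer writes just a handler:

```python
# handler.py -- with the OpenFaaS Python template this is the only
# file a developer writes; the template provides the Dockerfile and
# the HTTP plumbing around it.

def handle(req):
    """Receive the raw request body as a string, return the response."""
    name = req.strip() or "world"
    return "Hello, {}!".format(name)
```

The image built from this (e.g. via the CLI's build step) is the same artifact that gets pushed and deployed unchanged to dev, staging and production.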

What motivated you to create it?

Well, architectural patterns are changing. We used to deploy code as a monolithic application, then as smaller microservices, and serverless represents the latest architectural iteration: functions. This is incredibly exciting, but when I wanted to try functions out using AWS Lambda, I found that I had to sign up with my credit card, edit code in a web browser or upload a zip file built on my laptop through a web form, and it just didn’t feel right compared to an enterprise-grade CI/CD pipeline. As a Docker Captain I knew there was a better way of doing things.

So I created OpenFaaS because I wanted to create serverless functions that combined the learnings of our industry about using Docker to build, ship and run code, with as few restrictions as possible. I presented it at Moby’s Cool Hack competition at DockerCon 2017 and won! And the momentum has continued from there.

What are some of the reasons people would want to use it over, say, AWS Lambda?

I remember seeing a tweet by Gabe Monroy of Azure who said “Lambda is another PaaS workflow that happens to feature micro-billing and an event-based programming model. But what happens when you need to pull the escape lever on that abstraction?”

OpenFaaS, by contrast, offers sane defaults along with a really high degree of customizability. You can write your code in any language you want, whereas Lambda supports only Node.js (JavaScript), Python, Java (Java 8 compatible), C# (.NET Core) and Go. You can override the timeout and run a function for as long as you want; you’re not limited to 300 seconds as with Lambda. Lastly, you can run it wherever you want! If you can run Kubernetes or Docker, you can run OpenFaaS: on-prem, in a hybrid cloud, public cloud or even multi-cloud.

I’m excited about Kubernetes offerings like Azure’s AKS, Google’s GKE and Amazon’s EKS – they offer a fully managed cluster – just enter your credit card and drag the slider up or down for capacity.

The project is also currently licensed under MIT, one of the most permissive licenses, so the barrier to entry is low for start-ups and enterprises alike.

What’s the business case for use at larger enterprises?

What’s important for businesses is getting valuable software out quickly. OpenFaaS lowers the barrier for DevOps-like software delivery by providing best-practice defaults out of the box. Rather than an Ops team having to write and maintain a Dockerfile, OpenFaaS comes with pre-canned Dockerfiles with non-root users and the latest Alpine Linux base images. Rather than teams having to implement their own metrics, we’ve built them into the platform (via Prometheus), and those metrics are used to drive function scaling. That means you’re deploying and monitoring your functions in a consistent manner – you don’t end up with many different ways of managing a growing number of functions.
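The idea of metrics-driven scaling boils down to a simple rule: read a request rate from the metrics system and map it to a replica count within bounds. A toy sketch of that idea – not OpenFaaS’s actual scaling code, and the capacity numbers here are purely illustrative:

```python
import math

def desired_replicas(req_per_sec, per_replica_capacity=5.0,
                     min_replicas=1, max_replicas=20):
    """Map an observed request rate (as might come from a Prometheus
    query) to a replica count, clamped to sane bounds -- a toy version
    of metrics-driven scaling, not OpenFaaS's real algorithm."""
    needed = math.ceil(req_per_sec / per_replica_capacity)
    return max(min_replicas, min(max_replicas, needed))

# An idle function stays at the floor; a burst scales out,
# but never past the ceiling.
print(desired_replicas(0))     # -> 1
print(desired_replicas(42))    # -> 9
print(desired_replicas(9000))  # -> 20
```

Because every function exposes metrics the same way, the same scaling rule applies uniformly across all of them.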

In large organizations you’ll often find a whole CI set up manually for every microservice, each of which has its own deployment pipeline. With OpenFaaS all your functions are the same ‘shape’ so you could push them all through a single CI system. Using the built-in metrics you can then quickly see what’s working and where the demand is.

The customizability of language and tooling mentioned earlier also works well for enterprises because it fits their existing processes, which makes trying out a cutting-edge architecture ‒ functions ‒ as simple as possible. OpenFaaS would be great for a company that is currently completely on-premises, for example, one that doesn’t use containers but wants to start moving to the cloud or to containers. With OpenFaaS, they could start to push code out and iterate on it within hours, all within existing enterprise architecture and without having to spend a lot on consultancy!

Ultimately, it makes innovation easier.

Who’s using it out in the ‘real world’ and what are they doing?

Anisha Keshavan, a neuroinformatics researcher at the University of Washington, is using OpenFaaS to support her analyses of brain scans with machine learning and Python. OpenFaaS has an advantage here over cloud functions because there is no limit on the size of your function. AWS Lambda caps deployment packages at around 50MB (zipped), but multiple gigabytes are sometimes necessary for pre-trained machine learning models or really big Python libraries. The built-in OpenFaaS Function Store has several examples of pre-packaged machine learning models, such as a function that turns black and white images into colour, or “Imagenet”, which identifies what is present in a photo.

Jason Leonard of Citrix has built an automation portal that lets their test engineers write PowerShell scripts, manage role-based authentication around the UI, and test their network equipment and hypervisors by running functions through what they’ve built.

At VMware, I recently had a call with the Cloud Management team who are starting a prototype to use Terraform as an OpenFaaS function to automate infrastructure for up to 1,500 internal AWS accounts. OpenFaaS can do that because any binary, including Terraform, can be your function.
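What makes “any binary can be your function” work is the watchdog pattern: a small supervisor process receives the HTTP request, pipes the body to the wrapped binary’s stdin, and returns its stdout as the response. A rough, simplified sketch of that idea (the real OpenFaaS watchdog is a Go binary; the `fprocess` name is borrowed from its configuration):

```python
import subprocess

def run_function(fprocess, request_body):
    """Fork the configured binary, feed it the request body on stdin,
    and return its stdout as the function's response -- the essence of
    how any binary (Terraform included) can act as a function."""
    result = subprocess.run(
        fprocess,
        input=request_body.encode(),
        stdout=subprocess.PIPE,
        check=True,
    )
    return result.stdout.decode()

# Example: wrap the Unix `cat` binary as a trivial "echo" function.
print(run_function(["cat"], "terraform plan"))  # prints "terraform plan"
```

In the Terraform case above, `fprocess` would invoke `terraform` instead of `cat`, with the request carrying the plan or apply parameters.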

Is the project growing and, if so, why do you think that it is?

The project is growing at a phenomenal rate. No tool is one-size-fits-all, which is why Lambda, despite its well-earned success, doesn’t meet everyone’s needs, so there’s room for other offerings.

One advantage we have is the incredible community that we’ve built up over the last 14 months, with people starting to build real products around it. This has made it one of the most popular projects on GitHub, with over 10,000 stars across the organisation. I think the success of the community has flowed from the core values that underpin the OpenFaaS project.

Firstly, it is developer-first. The framework has been built for developers. It has a really good CLI that has received a lot of attention from the community. The UI is also great, featuring a functions store that lets you take well-known functions (e.g. colourizing a black and white image) and deploy them with one click.

Secondly, it’s operationally simple. Operations guys aren’t going to have a heart attack because it’s all built on battle-tested cloud-native applications. Relevant Helm charts and YAML files mean that if you have a Kubernetes cluster you can run OpenFaaS with one command and within 60 seconds you’re up and running.

Thirdly, it’s community-centric. The focus is building developer love! True adoption just isn’t top-down anymore; we’ve seen that with Docker over the last five years. Developers are making IT decisions and influencing those around them. I know a guy in Austin who goes into work early to work on a proof-of-concept and has managed to convince the architects at his company to start using OpenFaaS for C# .NET functions – a great example of developer love in action. It’s also an example of where we were able to outpace the crowd, by delivering .NET Core 2.0 support before anyone else.

Then there’s the thriving Slack community, featuring Derek (a bot that runs as an OpenFaaS function and makes sure all Pull Requests meet the requirements of the MIT license and developer sign-off, whilst delegating fine-grained permissions to our contributors), and the #FaaSFriday Twitter hashtag, which helps us share our exciting projects and cool hacks once a week :).

What’s next for yourself and OpenFaaS?

It’s an incredibly exciting time because I just joined VMware’s Open Source Technology Center (OSTC) to work on OpenFaaS full time! I’m building out a serverless team within the OSTC, with our first hire already on board and a second requisition already out for the UK and Europe. It’s a really fast-moving space, but our core values and community feedback act as a compass, directing our roadmap and focus.

Many events generated by cloud vendors are hard to extract – this is deliberate because it encourages lock-in. One thing I’m working on is a generic event integration so that we can break events out of AWS, for instance through the AWS Simple Notification Service (SNS). We’re also continually refining the user experience, from the CLI through to the UI, the docs and the popular function templates.

Finally, I’m building out OpenFaaS workshops, going to events around the world and mentoring our community leaders to speak at Meetups to spread the word wherever we are!

Great, thanks, Alex! Where can we learn more?

You’re welcome :). Check out www.openfaas.com or my personal blog on blog.alexellis.io to learn more.

