Containers introduce some intriguing usability, packaging, and deployment patterns. These new patterns offer the potential to bring massive improvements to enterprise application development and operations. Containers also promise bare metal performance with some degree of isolation.

But can they deliver on that promise?

Since earlier this year, the Performance Engineering Group at Red Hat has run a huge number of microbenchmarks, benchmarks, and application workloads in Docker containers. The output of that effort has been a steady stream of lessons learned and advice/guidance for our product architects and developers:

  • How dense can we go?
  • How fast can it go?
  • Are these defaults "sane"?
  • What NOT to do ... etc.

Disclaimer: as anyone who has worked with Docker knows, it's a project under heavy development. I mention that because this blog post and video include code snippets and observations that are tied to specific experiments and Docker/kernel versions. YMMV, the answer of course is "it depends," and so on.
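
For that reason, we record the exact environment alongside every result. Something as simple as the following does the job (a minimal sketch; adapt the release file to your distribution):

    # Capture the environment a given result came from
    docker version            # client and daemon versions
    uname -r                  # kernel release
    cat /etc/redhat-release   # distribution and release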

Performance tests we've pointed at Docker containers

We've done a whole bunch of R&D testing with bleeding edge, "niche" hardware and software to push and pull Docker containers in completely unnatural ways. Based on our choice of benchmarks, you can see that the initial approach was to quantify the precise overhead of containers as compared to bare metal (Red Hat plans to develop bare metal container deployment with the Project Atomic stack). Of course, we are also gathering numbers for VMs, and for containers running inside VMs (which might be the end-game, who knows...), via OpenStack and the like.

Starting at the core and working our way up to the heaviest workloads, pushing all the relevant subsystems to their limits:

  • In-house timing syscall benchmarks (including vdso), libMicro, cyclictest
  • Linpack, single and double precision, Streams
  • Various incantations of sysbench (one such incantation is sketched after this list)
  • iozone, smallfile, spinning disk, SSD and NAND flash (a storage comparison is sketched after this list)
  • netperf on 10g and 40g, SR-IOV via pipework (see the sketch after this list)
  • OpenvSwitch with VXLAN offload-capable NICs
  • Traditional "large" applications, i.e. business analytics and databases
  • Addressing single-host vertical scalability limits by fixing the Linux kernel and fiddling some bits in Docker.
  • Using OpenvSwitch to get past the spanning-tree limitations on the number of ports per bridged interface (a minimal example follows this list).
  • Scale and performance testing of various storage drivers and network topologies.
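
As promised above, here is the shape of a typical sysbench run. This is a minimal sketch: the image name is hypothetical (any image with sysbench installed works), and the --test= syntax matches the older sysbench 0.4.x releases we were testing with.

    # CPU-bound sysbench run inside a throwaway container
    # (sysbench 0.4.x syntax; the image name is hypothetical)
    docker run --rm fedora/sysbench \
        sysbench --test=cpu --cpu-max-prime=20000 --num-threads=4 run

Running the same command on bare metal gives the matching baseline for overhead comparisons.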
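
On the storage side, one useful trick is to run the same iozone sweep against a bind-mounted host directory and against the container's own copy-on-write filesystem. A minimal sketch, with a hypothetical image name and mount point:

    # Bind-mount a host directory backed by the device under test;
    # image name and mount point are illustrative
    docker run --rm -v /mnt/ssd:/data fedora/iozone \
        iozone -a -f /data/testfile
    # Point -f at a path inside the container's own filesystem instead
    # to exercise the storage driver's copy-on-write path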
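
For the SR-IOV runs, pipework attaches an extra host interface to a running container, so netperf traffic can bypass docker0 entirely. A minimal sketch; eth2 stands in for an SR-IOV virtual function, and the container and IP are illustrative:

    # Start a long-running container, then hand it a host interface;
    # eth2 stands in for an SR-IOV virtual function (illustrative names)
    CID=$(docker run -d fedora sleep 86400)
    pipework eth2 $CID 192.168.10.2/24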
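
And here is the Open vSwitch swap mentioned above. A minimal sketch: create an OVS bridge and attach the host side of a container's veth pair to it (veth_c1 is an illustrative name; tools like pipework can automate this step):

    # Create an OVS bridge and attach the host side of a container's
    # veth pair to it
    ovs-vsctl add-br ovsbr0
    ovs-vsctl add-port ovsbr0 veth_c1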

All of these mine-sweeping experiments (lots more to come!) have allowed us to find and fix plenty of issues and document best practices that we hope will lead to a great customer experience.

Docker Meetup

On July 31st, Red Hat hosted a Meetup event at our headquarters in Raleigh, NC. I was fortunate enough to present to a great group of DevOps folks on performance testing Docker containers. Quite an enthusiastic and thoughtful audience!

I spoke at length about what we've seen from a container performance standpoint, demo'd the effectiveness of cgroup memory constraints on a container (sketched below), and showed how we've integrated tests of any kind into a git-driven workflow.
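
If you want to reproduce the memory-constraint part of the demo, the core of it looks something like this. A minimal sketch: the image name is illustrative, and the cgroup path varies with Docker and kernel versions:

    # Cap the container at 512 MB; a process that tries to exceed the
    # limit is OOM-killed inside the container, not on the host
    docker run -it --rm -m 512m fedora /bin/bash
    # From the host, the limit shows up in the container's memory cgroup
    # (the exact path varies with Docker/kernel versions)
    cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes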

All of the reproducer code, Dockerfiles, etc. used in the demo are available on GitHub; the slides are here and the video is here:

(apologies for audio problems from 48:40-52:38)

It's not very often that a new technology comes along that creates a whole new column for performance characterization. But containers have done just that, and so it's been quite the undertaking. There are still many test variations to run, but so far we're encouraged both by what we're seeing internally and by feedback from customers and partners participating in our beta program.

That said, I have to keep reminding myself that performance isn't always the first concern for everyone (*gasp*). The packaging, development, and deployment workflows that break the ties between the host userspace and the container userspace are equally impressive.
