PHP Containers and Density at Pantheon

Websites for Everyone!

The world just passed one billion websites. That’s one site for every seven people on the planet, nearly one for every three people actually on the Internet. These all need a place to run.

Development and QA Environments for Everyone!

These sites also all need a place for developers to build and test them, hopefully not in the production environment. This has gotten harder as we’ve moved from desktop users and predictable, simple HTML files to today’s multiple devices (desktops, phones, and tablets) and complex stacks where one syntax error can take down a whole site. A place to develop and test against the whole stack—including accurate performance results—has gone from an extravagance to a necessity. Assembling such a stack on a local machine is time-consuming and incapable of properly simulating performance characteristics of the production environment.

The 1990s

  • Flat HTML files
  • Bugs affect single pages
  • Pages edited on desktop and uploaded to server
  • No mobile device use

The 2000s

  • Dynamic sites on basic stacks
  • Dev and QA on desktop
  • Bugs can affect the entire site, but can be debugged locally with WAMP/MAMP
  • Limited mobile device use, may not need to test


The 2010s

  • Advanced stacks
  • Dev on cloud or desktop
  • QA on cloud
  • Bugs can affect the entire site, requiring the full production stack to troubleshoot
  • Social and other integrations can’t run from desktop
  • Must load pages from mobile devices to test

Traditional Options: Tradeoffs at Every Turn

Despite the increased pressure on developers to build more sites that are more complex ever more quickly, the options for deploying Drupal and WordPress are often the same as five years ago. Every traditional approach comes with some tradeoff like lack of scalability, a high cost, or the inability to test deployments properly before they reach production.

Shared hosting


Pros:

  • Starts at about $10 per month
  • Efficient
  • Familiar

Cons:

  • Single points of failure
  • Poor isolation between sites and customers
  • No scale-up with traffic

Single virtual machines


Pros:

  • Familiar
  • Reliable isolation between customers
  • Reliable isolation between sites

Cons:

  • Single points of failure
  • Starts at about $100 per month
  • No scale-up with traffic
  • Expensive with many sites



Clusters

Pros:

  • High availability
  • Scales up with traffic
  • Reliable isolation between customers

Cons:

  • Complex to configure
  • Poor isolation between a customer’s own sites
  • Starts at thousands per month
  • Development options all carry tradeoffs:
    • Duplicate cluster (cost)
    • Unrepresentative environment elsewhere
    • Co-deployment with production (risk)

Virtual Machines: the Incandescent Bulbs of the Internet

But with modern cloud servers, what’s the problem? We can spawn virtual machines and build clusters for every need: developer sandboxes, staging environments, and production. While VMs get the job done, they’re not efficient in practice.

Let’s compare against a known paragon of inefficiency: traditional light bulbs. They convert only 5% of the energy they consume into visible light; the rest is just heat. Being 5% efficient is so bad compared to alternatives like fluorescent and LED that many governments around the world have banned further imports or manufacturing of the devices.

A survey of utilization on EC2 (using side-channel effects visible from the guest OS) found average utilization at 7% of host CPU capacity, barely better than the bulbs (not that the technologies are substitutes for each other). The comparison gets worse: every watt used by a computer also turns into about a watt of heat, and unlike heat in most of the world’s homes, that heat usually has to be pumped out of the building by air conditioning, which we’ll generously call 90% efficient. So, excluding all other data center overhead, EC2 is running at about 3.3% efficiency on average in terms of CPU capacity, with a maximum achievable efficiency of 47% even if the CPUs ran at 100% the whole time they were running.
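These figures follow from simple arithmetic: each watt of compute produces about a watt of heat, and pumping that heat out through 90%-efficient air conditioning costs roughly another 1.11 watts. A quick sanity check, using only the assumptions stated above:

```python
# Back-of-the-envelope check of the efficiency figures above.
# Assumptions from the text: ~1 W of heat per watt of compute,
# air conditioning 90% efficient, average utilization 7%.

def effective_efficiency(utilization, ac_efficiency=0.9):
    """Fraction of total power (compute + cooling) doing useful CPU work."""
    compute = 1.0                      # 1 W of compute power...
    cooling = compute / ac_efficiency  # ...needs ~1.11 W to pump its heat out
    return utilization * compute / (compute + cooling)

print(f"average: {effective_efficiency(0.07):.1%}")  # ~3.3%
print(f"ceiling: {effective_efficiency(1.0):.1%}")   # ~47%
```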

A typical EC2 instance’s host CPU utilization:


All of this inefficiency wouldn’t matter much if data centers weren’t using much energy, but they consume over 2% of the US power grid’s output.

Nothing technically prevents efficient use of EC2 VMs. The ability to spawn and tear down VMs on demand should deliver the best possible efficiency, but VMs have limitations that prevent on-demand use in practice:

  • Heavy state: migrating an instance from one host to another involves gigabytes of data
  • High overhead: abundant use of disk and memory for each guest OS
  • Slow spin-up: when seconds count, your next virtual machine is only minutes away
  • Resources aren’t granular: adding and removing entire VMs changes capacity so dramatically that scaling down can be risky
  • Non-uniformity: deploying to networked clusters is still expensive for development purposes, so developer sandboxes often remain unrepresentative

Of course, the reason why incandescent bulbs face bans is because much better technologies exist, especially the aforementioned fluorescent and LED options. This is where containers come in.

Containers: the LEDs of the Internet

Using containers doesn’t change anything physical about the data center: it’s still the same servers, air conditioning, and power grid. What it does change is how practical it is to milk every bit of running hardware.

Compared directly against the rigidity of cloud VMs, containers have practical advantages:

  • Light state: at most, a container has the actual application runtime and its data, no OS
  • Low overhead: containers can be deployed more densely because all that runs in each one is the application, not a guest OS
  • Fast spin-up: even copying a larger container image and running it takes seconds
  • Granular resources: it’s much easier to spin down capacity when it’s a small amount at a time
  • Uniformity: deploying networked containers is cheap, allowing every developer to have a representative stack

On Pantheon, our containers take between 10 and 40 seconds to spin up on average, depending on type:

This ability to quickly migrate and spawn containers allows Pantheon to treat host CPU, memory, and disk as an aggregate “budget” for capacity planning on our “endpoint” container host machines:
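To make the “budget” idea concrete, here is a hypothetical sketch (not Pantheon’s actual scheduler; every container size and host capacity below is invented for illustration). Capacity planning reduces to spending a host’s aggregate CPU, memory, and disk until one resource runs out:

```python
# Hypothetical "budget"-style capacity planning: a host has an aggregate
# budget of CPU, memory, and disk, and each container spends a small slice
# of it. All numbers here are illustrative, not Pantheon's.

HOST_BUDGET = {"cpu_cores": 32, "memory_gb": 128, "disk_gb": 2000}

def can_place(container, used, budget=HOST_BUDGET):
    """True if the container's requests fit in the host's remaining budget."""
    return all(used[k] + container[k] <= budget[k] for k in budget)

def place(containers, budget=HOST_BUDGET):
    """Greedily pack containers onto one host; return how many fit."""
    used = {k: 0 for k in budget}
    placed = 0
    for c in containers:
        if can_place(c, used, budget):
            for k in used:
                used[k] += c[k]
            placed += 1
    return placed, used

# A fleet of small PHP application containers:
php_container = {"cpu_cores": 0.25, "memory_gb": 0.5, "disk_gb": 10}
placed, used = place([php_container] * 300)
print(placed, used)
```

In this sketch CPU is the binding resource: 32 cores at 0.25 cores per container caps the host at 128 containers, while memory and disk budget remain unspent. Because each container is such a small slice of the budget, capacity can be added or reclaimed in fine increments, which is exactly the granularity advantage listed above.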

Pantheon Is Far from the First Widespread, Production Use

The idea of maximizing efficient use of computing resources has been around since mainframes and batch processing. Containers, as we define them, are long-running, multitasked jobs with resource and security isolation—but without traditional guest operating systems.

  • 2000 FreeBSD 4.0 with Jails
  • 2005 Solaris 10 with Zones
  • 2007 AIX 6.1 with Workload Partitions
  • 2007 Google lands cgroups in the Linux kernel
  • 2010 systemd makes widespread cgroups use possible
  • 2011 Pantheon builds and launches its container-based platform
  • 2013 Docker and CoreOS
  • 2014 LXC 1.0, Kubernetes, and Rocket
  • 2015 The App Container Spec, public container clouds
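To ground the cgroups and systemd entries in the timeline: on a modern Linux host, resource and security isolation for a long-running job can be expressed as systemd resource-control directives, which systemd translates into cgroup settings. A hedged illustration follows; the unit name, paths, and limits are invented for the example, not Pantheon’s configuration:

```ini
# php-app.service — hypothetical systemd unit showing cgroup-backed limits.
[Unit]
Description=Example long-running PHP application

[Service]
ExecStart=/usr/bin/php -S 127.0.0.1:8080 -t /srv/php-app
# Resource-control directives; systemd applies these via cgroups:
CPUQuota=25%        # at most a quarter of one CPU
MemoryMax=512M      # hard memory ceiling
TasksMax=128        # cap on processes/threads

[Install]
WantedBy=multi-user.target
```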

In 2015, we’re finally seeing enough standardization that companies now offer “container clouds” with APIs similar to the one EC2 introduced for virtual machines. We think there’s still a long way to go: some current approaches to running and orchestrating containers will fall into disuse, and new approaches have yet to be written.

This post is based on David's presentation at DrupalCon 2015. Watch the video here or view the presentation slides.
