CDNs: The Battle of Fast vs. Fresh

David Strauss, Co-Founder & CTO

To build a website today, there’s a lot to get right: performance, scalability, security (including HTTPS), functionality, search-engine rankings, and more. These come down to getting fresh content into the hands of site visitors—and doing it fast. CDN-based edge caching goes a long way to helping here, but it’s only as effective as the cache hit rate.

How often the cache hits matters enormously, especially as site traffic climbs. A site with no caching can cut origin traffic in half by achieving a 50% hit rate. But the same is true for a move from 50% to 75%, and even from 98% to 99%: each of those improvements halves the requests that still reach the origin. (At 1,000 requests per second, going from 98% to 99% drops origin load from 20 requests per second to 10.) Refining the hit rate for a site—even one that already has an excellent rate—never stops delivering proportionally major benefits to the origin. That benefit can manifest as lower costs, better tolerance for traffic spikes, and better resilience when under attack.

Competing Priorities

Yet a request can only hit the cache if the answer to each question below is “yes”:

  1. Is the resource being requested cacheable?

  2. Does the resource have a long lifetime in the cache?

  3. Is the resource in the cache fresh enough?

Traditionally, these goals have competed. You could prefer fast and stale (long lifetimes) or fresh and slow (less cacheable content and shorter lifetimes). This is because, for a given URL, a cache likes to hang onto that content until it’s lived out its lifetime (“Cache-Control: max-age”); only then does it check to see if there’s fresher content available. In terms of business value, this raises a question: “How stale am I willing to tolerate this page being in order to improve hit rates?”
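
For example, a response that a cache may serve for up to an hour carries a header like:

    Cache-Control: max-age=3600

Until those 3600 seconds elapse, the cache will keep handing out its stored copy, no matter what has changed at the origin.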

There Are Workarounds (But They Aren’t Great)

To be fair, there are a couple of workarounds. They just come with trade-offs or limited applicability:

  • By changing the URL for resources like images and CSS, it’s possible to ensure visitors experience high hit rates in the CDN without the risk of receiving stale content (see the example after this list). This doesn’t work for web pages with stable URLs, and it requires invalidating every page that embeds the old resource URLs, too.

  • By invalidating specific URLs or patterns, it’s possible to freshen the primary places where content appears, like the main article view. By using longer URLs and convoluted patterns, this approach can even clear related pages that derive their content from what changed. Ugly URLs are a high price to pay for cache hit rates, though.
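
In practice, the first workaround means fingerprinting or versioning asset URLs, for example (the filenames and version numbers here are illustrative):

    /css/style.css   →   /css/style.9f8a2c1.css
    /js/app.js?v=41  →   /js/app.js?v=42

Because the new URL has never been cached anywhere, every visitor fetches a fresh copy immediately; the old URL can live out its cache lifetime harmlessly, as long as no cached page still references it.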

Both workarounds try to twist basic aspects of HTTP caching to get control that it was never designed to provide, and it shows. There has to be a better way!

Specifying Cache Keys in Headers

Rather than serving up stale content or convoluted URLs, there’s a way to mark a response delivered from the web server according to what ingredients went into it. It’s sort of like putting a lot number on the responses—except you can add more than one. To invalidate the cache, the CMS specifies the “keys” that have changed.

The technology works like this (using Drupal in this case):

  1. A visitor goes to the front page of the site.

  2. The response contains nodes 1 and 16, so it gets a header:
    Surrogate-Key: front node-1 node-16

  3. An editor for the site alters node 16; this invalidates every cached response whose Surrogate-Key header contained “node-16”, including the front page.

  4. A new visitor comes to the site and sees the front page instantly updated with the new node 16 content.
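
Here’s a minimal sketch, in Python, of both halves of that exchange. It assumes a hypothetical purge endpoint; the endpoint URL, token, and rendering helper are illustrative stand-ins, not Pantheon’s actual API:

    import requests

    # Hypothetical purge API and credential, for illustration only.
    PURGE_ENDPOINT = "https://cdn.example.com/purge"
    API_TOKEN = "example-token"

    def render_nodes(node_ids):
        """Stand-in for the CMS's real rendering pipeline."""
        return "".join(f"<article>node {n}</article>" for n in node_ids)

    def render_front_page():
        """Origin side: tag the response with every key that went into it."""
        body = render_nodes([1, 16])
        headers = {
            # One key per ingredient, plus one for the page itself.
            "Surrogate-Key": "front node-1 node-16",
            # With explicit invalidation, lifetimes can be very long.
            "Cache-Control": "public, max-age=31536000",
        }
        return body, headers

    def on_node_saved(node_id):
        """CMS side: when an editor saves a node, ask the edge to drop
        every cached response tagged with that node's key."""
        requests.post(
            PURGE_ENDPOINT,
            headers={"Authorization": "Bearer " + API_TOKEN},
            json={"keys": ["node-" + str(node_id)]},
        )

    # Editing node 16 invalidates every response tagged "node-16",
    # including the front page:
    # on_node_saved(16)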

Hard in Most Places, Easy on Pantheon

Getting this working traditionally required a lot of steps:

  1. Setting up a reverse proxy cache (like Varnish) or an edge that includes CDN functionality (like Fastly). Proxy software may also require an extension like xkey.

  2. Configuring or coding the addition of Surrogate-Key headers in responses.

  3. Coding to respond to content changes and invalidate the appropriate keys.

  4. API or firewall configuration to allow the CMS to make invalidation requests.
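
As a concrete illustration of steps 2 through 4, an invalidation request to a Varnish instance running the xkey extension often looks something like this (the exact method and header names depend on the site’s VCL configuration):

    PURGE / HTTP/1.1
    Host: www.example.com
    xkey: node-16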

And, because of all the services and integrations, this approach only gets us to fast, fresh, simple (pick two). This is better than just fast vs. fresh, but what if we could automate the complexity away and get the best of all worlds?

To make this a reality, we’ve built a plugin (for WordPress) and a module (for Drupal) that—when used on the Pantheon platform—handle everything above.

This approach works by using the built-in cache keying in WordPress/Drupal and integrating it with Pantheon’s container grid and Global CDN.
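
To make that concrete: Drupal 8 and later already attach cache tags such as node:16 to every response, and the module’s job is essentially to surface those tags as a Surrogate-Key header the Global CDN can purge by. The header values below are illustrative, and the exact tag-to-key mapping is up to the integration:

    X-Drupal-Cache-Tags: node_list node:1 node:16
    Surrogate-Key: node_list node-1 node-16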

Smart Cache Keys: Not Quite a Web Standard

Unfortunately, there’s limited standardization in the world of HTTP for explicitly invalidating cached content. But the situation isn’t too divergent: the Surrogate-Key approach is functionally identical to the alternatives, like Varnish’s xkey, where they exist.

So, this functionality on Pantheon’s Global CDN is similar to alternative services and open-source tools, but it’s not something a site owner can simply expect on every CDN or proxy cache. Using those other tools also requires manual setup and integration.

Uptime Advantages, Too!

So far, we’ve talked about the benefits of Surrogate-Key mostly in terms of achieving fast, fresh content, but the approach also helps boost uptime. The internet is a noisy place, and backbones, peering relationships, and origins can all have issues from time to time. By storing content in a site’s CDN for a long time (and explicitly invalidating or purging it only when necessary), the CDN will happily “hit” far more often—including when a deeper level of infrastructure is having issues. When there’s a problem, this can make the difference between visitors seeing a read-only site versus nothing at all.

Taking a Holistic Approach

Edge optimizations tend to stack well. Faster HTTPS negotiation, higher hit rates, and optimized protocols (like HTTP/2) all work together to get pages displayed faster (time to first paint, or TTFP) on the devices of site visitors.

But it’s easy to over-focus on particular optimizations. Using granular cache keys is a way to take a site from “good” to “great,” but it’s always a good idea to rerun a performance analysis whenever a site undergoes major changes, whether those are optimizations or other work. This ensures that, at every step, you’re focused on the biggest possible improvement—or at least the lowest-hanging fruit.
