Every engagement metric from content reach to conversions is downstream from site speed. Good performance means good user experience, and expectations are ever rising. Research from Google estimates that most sites lose half their mobile visitors while the page is loading.
In light of this, I wanted to see what the numbers actually showed for how our platform (and others) measured up. Speed is one of our core value propositions, but does Pantheon actually deliver?
Luckily, this is an answerable question. Earlier this year I published some analysis of WordPress site speed on Pantheon using neutral third-party data. This fall, I was able to return to that research to take a broader look at the state of web performance for sites running on an optimized platform, vs. more vanilla or DIY infrastructure.
The results were unambiguous: compared to managed platforms, DIY infrastructure (including self-managed AWS instances, VPSes, and commodity shared hosting) is significantly slower, and delivers meaningfully poorer user experiences as a result. I was glad to see Pantheon at the head of the class, but the stand-out finding is the gap between DIY and any platform. That difference is more than double the difference between us and our competitors.
TL;DR: if you're doing it yourself, you may be doing it wrong. You may not be saving any money (see our DIY calculator), and worse, you could be paying a tax on all your other metrics in the form of user abandonment and frustration. If you want a report on your site's performance right now, use our handy speed test tool, or keep reading for more details on the impact of this data, as well as how we compiled the numbers.
To measure the raw speed of underlying infrastructure, we used the HTTP Archive dataset's time to first byte (TTFB) metric. This measures the time it takes for the first byte of web content to come over the network - the start of the rest of the performance race. From here on out, page composition and complexity, as well as the content itself, will have a significant impact on the user experience. TTFB is a pure measure of backend/platform performance.
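To make the metric concrete, here's a rough sketch of what a TTFB measurement looks like. This is my own illustration, not the HTTP Archive's methodology (which comes from full WebPageTest runs that also break out DNS, connect, and TLS time):

```python
# Rough TTFB sketch: time from sending an HTTP request to receiving the
# first byte of the response. Real tools like WebPageTest measure DNS,
# connect, and TLS phases separately; this illustration skips all of that.
import socket
import time

def measure_ttfb(host, path="/", port=80, timeout=10):
    """Return seconds between sending the request and the first response byte."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        request = (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        start = time.monotonic()
        sock.sendall(request.encode("ascii"))
        sock.recv(1)  # blocks until the first byte of the response arrives
        return time.monotonic() - start
```

Pointing a function like this at your own site is a quick sanity check: anything approaching a full second puts you in the DIY median territory described below.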
What we see here isn't pretty for the world of DIY. The median TTFB for both WordPress and Drupal sites is nearly a second; ideally, you would want your page loaded and in front of the user by that time. Instead, they're just getting the first byte of data to the browser.
From a user experience standpoint, this is a full second spent wondering if there's even anyone home. There will be many more requests and a fair bit more waiting before any content is actually visible. And keep in mind we're using medians here; many sites in the DIY category take several seconds or more just to deliver. Not great.
To better understand the impact on user experience we turn to a different source, the Chrome User Experience Report (CrUX) dataset. While the HTTP Archive is the result of tests run on controlled hardware and networks — laboratory data, if you will — CrUX is compiled from users who opt in to sharing diagnostic and performance data from the Chrome browser. This means real-world CPUs, networks, latency, and more are all included in the results.
For this analysis, we used Rick Viscomi's method for bucketing the sites into categories of fast, average, and slow. After applying our platform filters the outcomes aren't quite as stark as our TTFB findings, but they're still quite clear:
Pantheon can't prevent slow user experiences entirely, as this still depends on the composition of the website and its content. However, if you start building your page nearly a full second sooner, you will end up with faster user experiences. Based on this data, using a platform will give 10% more users a fast/delightful experience, while DIY risks completely frustrating and turning off another 6% with a slow experience.
It's clear that WordPress and Drupal are much better off in general on a platform vs. being self-managed. If you have a website on a mission and you're running on a generic host or your own cloud infrastructure, you should be aware of your site performance, and its implications in terms of hitting your goals.
If you want to deliver a better user experience, you'll not only need to measure it, but also build consensus around its value. That means making the case for better site performance with data.
The same diagnostic tool that powers the HTTP Archive is available at webpagetest.org, and you can inspect your browser's own performance report in the developer console to get the same data as in the CrUX results (at least for your location/network). However, if you're looking for something easier to read, or that you can share with less technical stakeholders, check out Pantheon's website speed report.
Both the HTTP Archive webpage performance data and the CrUX dataset are freely available for anyone via Google's awesome BigQuery data analysis engine. You can recreate these findings for yourself, or use them as a starting place for your own analysis.
The first thing we did was fingerprint the platforms we wanted to examine. Because the HTTP Archive dataset contains every minute detail of each request, we're able to use HTTP headers, DOM elements, and many other markers to identify where a given website is running and what technologies are in play. For example, you can definitively tell that a site runs on Pantheon if a response contains an x-pantheon-styx header.

To keep things manageable, we restricted the "platform" category to ourselves and our top competitor for both WordPress and Drupal sites: WP Engine and Acquia, respectively. I did spend some time initially looking at other platforms (e.g. Platform.sh, Pagely) but found there were not many of them in the datasets. For ourselves and the main competitors, there were thousands of sites with data to compare, which gave me confidence in the validity of the results. Here's the query I developed for platform fingerprinting:
SELECT DISTINCT
  page,
  CASE
    WHEN is_automattic THEN 'automattic'
    WHEN is_acquia THEN 'acquia'
    WHEN is_wpengine THEN 'wpengine'
    WHEN is_pantheon THEN 'pantheon'
  END AS platform
FROM (
  SELECT
    page,
    REGEXP_CONTAINS(payload, "automattic.com/jobs") AS is_automattic,
    REGEXP_CONTAINS(payload, "X-AH-Environment") AS is_acquia,
    REGEXP_CONTAINS(payload, "x-pantheon-styx") AS is_pantheon,
    REGEXP_CONTAINS(payload, "wpe-") AS is_wpengine
  FROM `httparchive.requests.2018_10_01_desktop`
  WHERE page = url
)
WHERE is_automattic OR is_acquia OR is_pantheon OR is_wpengine
From that point, it was a straightforward process to construct a fingerprinting query against the HTTP Archive request details data, which I then used to build a table of domains and the platform each runs on. Once that platform-mapping table was in place, I used it to query other datasets within HTTP Archive and CrUX for performance analysis.
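For a single site, you can reproduce the same fingerprinting idea without BigQuery at all. The sketch below is my own (the `fingerprint` function and its structure are not from the analysis); the marker strings come from the fingerprinting query, though note that the Automattic marker in that query lives in page content rather than headers, so it's omitted from this header-only check:

```python
# Hypothetical header-based platform fingerprinting for one URL, mirroring
# the markers the BigQuery fingerprinting query matches against.
from urllib.request import urlopen

PLATFORM_MARKERS = {
    "pantheon": "x-pantheon-styx",   # header unique to Pantheon responses
    "acquia": "x-ah-environment",    # Acquia environment header
    "wpengine": "wpe-",              # WP Engine header prefix
}

def fingerprint(url, timeout=10):
    """Return the matching platform name, or None if no marker is found."""
    with urlopen(url, timeout=timeout) as resp:
        # Flatten all headers into one lowercase blob for substring matching,
        # roughly like REGEXP_CONTAINS over the request payload.
        header_blob = "\n".join(
            f"{name.lower()}: {value.lower()}"
            for name, value in resp.headers.items()
        )
    for platform, marker in PLATFORM_MARKERS.items():
        if marker in header_blob:
            return platform
    return None
```

Point it at any URL and it returns the matching platform name, or None for everything else (including the DIY category).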
For example, here's a query to see what portion of experiences were fast (i.e. first content appeared in under 1 second) from the CrUX data, based on a method previously shared in the HTTP Archive Discuss forum:
SELECT
  platform,
  app,
  COUNT(DISTINCT origin) AS freq,
  IF(form_factor.name = 'desktop', 'desktop', 'mobile') AS form_factor,
  ROUND(SUM(IF(bin.start < 1000, bin.density, 0)) / SUM(bin.density), 4) AS fast,
  ROUND(SUM(IF(bin.start >= 1000 AND bin.start < 3000, bin.density, 0)) / SUM(bin.density), 4) AS avg,
  ROUND(SUM(IF(bin.start >= 3000, bin.density, 0)) / SUM(bin.density), 4) AS slow
FROM
  `chrome-ux-report.all.201808`,
  UNNEST(first_contentful_paint.histogram.bin) AS bin
JOIN (
  SELECT url, app, platform
  FROM `httparchive.technologies.2018_08_15_desktop`
  JOIN `pantheon-localdev.http_archive_scratch.platforms_w_unmanaged_2018_10`
    ON page = url
  WHERE category = 'CMS' AND app IN ('WordPress', 'Drupal')
  GROUP BY url, app, platform
) ON CONCAT(origin, '/') = url
GROUP BY platform, app, form_factor
ORDER BY freq DESC
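The bucketing arithmetic is easy to lose in the SQL, so here it is restated in plain Python (a toy equivalent with made-up bin values, not part of the actual analysis). Each CrUX histogram bin contributes its density to the fast, average, or slow bucket based on its start time:

```python
# Plain-Python restatement of the SUM(IF(...)) / SUM(density) expressions:
# each CrUX histogram bin is a (start_ms, density) pair, where density is
# the share of page loads whose first contentful paint fell in that bin.
def bucket_shares(bins):
    """Return fast (<1s), avg (1-3s), and slow (>=3s) shares from CrUX bins."""
    fast = sum(d for start, d in bins if start < 1000)
    avg = sum(d for start, d in bins if 1000 <= start < 3000)
    slow = sum(d for start, d in bins if start >= 3000)
    total = fast + avg + slow
    return {
        "fast": round(fast / total, 4),
        "avg": round(avg / total, 4),
        "slow": round(slow / total, 4),
    }

# Example with made-up densities:
print(bucket_shares([(0, 0.2), (500, 0.3), (1500, 0.3), (3500, 0.2)]))
# → {'fast': 0.5, 'avg': 0.3, 'slow': 0.2}
```

The same three thresholds (1,000 ms and 3,000 ms) are what separate a "delightful" experience from a frustrating one in the findings above.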
You may also like:
- [DOC] The Quickstart Guide to Fast WordPress
- [BLOG] Speed Up Your Site with PHP 7.2
- [BLOG] New Workshop: Getting Started with Drupal