Decoupled Architectures: Cutting-Edge Contradictions in the 2020s
Steve Persch, Director, Developer Relations
The best practices of web development never stand still. As soon as a best practice seems solid for one team or one role, the conditions and pressures that led to its rise change again, and the practice stops making much sense.
Those pressures often push in different, sometimes directly opposing directions. That's why for the remainder of the 2020s, those working on the figurative bleeding edge of website operations will resolve seemingly contradictory needs to make their tools:
- Bigger and smaller
- More distributed and more centralized
- More personalized and more standardized
In my previous blog post in this series, I highlighted the prime needs of each of the broad personas involved in a professional web team. Someone (usually in an IT role) has got to keep the site stable and secure. Someone else (often in a Marketing role) needs to show the site is achieving business results. And others (often developers and designers) need to show they are completing tasks.
Ideally, all of these needs add up and complement one another. But in practice, it is not so easy. How can web teams make something bigger and smaller at the same time?
Bigger and Smaller At The Same Time
Satisfying the need for stability within the website operations world over the last few decades has pushed the size of the things we deploy to be smaller and smaller. There was a time when almost any serious website ran on its own physical server or servers, often in the same building as the rest of the organization. The people running the website measured its "size" not only in megabytes of storage and bandwidth but also in physical square footage.
Smaller responsibilities, faster deployments, faster start times
Very quickly, very many teams saw that they very much did not want this level of responsibility. Yes, there was some comfort in having full control of servers on-premises. But usually, the need for stability was better met by handing over responsibility for the electricity, the Ethernet cables and more to a company like Rackspace or, eventually, public cloud providers and SaaS solutions like Pantheon.
The decades-long trend here – from servers to virtual machines to containers to whatever smaller unit of delivery comes next – is to minimize the size of what web teams hand over to infrastructure providers. At each step, that hand-off gets faster. Deploying a container is significantly faster than provisioning a virtual machine. And the "cold start" times that plague container-based solutions are melting away in the next generation of WebAssembly- and V8-based deployments.
The backlash: More responsibilities in larger web frameworks
With deployments getting orders of magnitude faster through this fundamental shift in the 2020s, expect teams to look skeptically at other techniques used in the 2010s to make deployments smaller and faster. The 2010s saw a trend toward microservices. For large companies deploying large applications, it made sense in some cases to treat the /users API route or website path as a separately deployable codebase from /news-articles. That might make sense, for instance, if different teams are working on those two separate routes. And separating them makes each codebase a bit smaller and therefore faster to deploy.
But teams can get lost in microservices. For many teams, especially those that think of their web presence as one website controlled by one web team, there is probably more harm than benefit in treating /users and /news-articles as separately deployable codebases.
In addition to the routes of any given project reconsolidating under more central management, the front-end frameworks on the rise now (like Next.js) also consolidate common decisions around templating libraries, state management, routing and directory structure. In the early and mid-2010s, it was in vogue for each project or team to mix its own cocktail of decisions. Now, after more than a decade of front-end framework churn, more people are more than happy to centralize those decisions and responsibilities into larger and more opinionated frameworks.
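To make that consolidation concrete, here is a minimal sketch of Next.js file-based routing, assuming the pages router. The file's location on disk declares the route, so there is no hand-rolled routing layer to design or deploy separately. The route name echoes the hypothetical /news-articles example above, and the placeholder data stands in for whatever CMS a real site would query.

```tsx
// pages/news-articles/[id].tsx
// The file path itself declares the route: /news-articles/:id
import type { GetServerSideProps } from 'next';

type Article = { id: string; title: string };

// Runs on the server for each request; params come from the [id] segment.
export const getServerSideProps: GetServerSideProps<{ article: Article }> =
  async ({ params }) => {
    const id = String(params?.id);
    // Placeholder data; a real site would fetch from its CMS here.
    return { props: { article: { id, title: `Article ${id}` } } };
  };

// The default export is the page component rendered for the route.
export default function NewsArticlePage({ article }: { article: Article }) {
  return <h1>{article.title}</h1>;
}
```

A /users route would live in the same codebase as pages/users/[id].tsx and deploy in the same unit, the opposite of the microservices split described above.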
Centralized Corporations and Distributed Resources
That trend towards centralization visible in front-end web frameworks is mirrored in the number of companies a web team needs to pay or use just to run their site. Many small or niche companies that grew in the 2010s have been acquired by larger companies looking to act more like one-stop shops (although we're still far from any company actually being a one-stop shop for large web teams).
- StagingPilot was a popular tool for automating WordPress plugin updates. We at Pantheon acquired StagingPilot in 2019, in part because we saw that teams would rather get this functionality from a platform than pay a separate bill for it.
- The front-end ecosystem made npm an indispensable tool in the 2010s. But did package management need to be a stand-alone company? GitHub acquired npm in 2020.
- Turborepo saw the trend toward centralizing codebases in monorepos. Did that need to be its own company? Vercel acquired Turborepo in 2021.
- CronJobs for the JAMStack also sounds like a cool idea. But again, it is one that could come from a platform in 2022.
While platform companies amass and centralize features, sometimes through acquisition, they also face strong pressure to distribute their presence.
- The trend towards remote work means these companies, Pantheon included, can't centralize brain power in a San Francisco Bay Area office.
- Rising uptime expectations push companies to operate across many data centers.
- Rising speed expectations push more website rendering responsibilities to the edge.
Is it a coincidence that the same years that brought overwhelming pressure to move human decision-making out of centralized offices and into the work-from-home world also brought pressure to move website rendering from central data centers to the edge of a CDN?
Personalized Content and Standardized Infrastructure
The prime example of pushing website rendering decisions to the edge is personalization/geotargeting. If the web team for a sports site wants to show different versions of the homepage to people on the West Coast (who might be more interested in the Golden State Warriors) than to people on the East Coast (who might be more interested in the Nets), that variance is now done better and faster at the edge than with the latency of round trips to a central data center. At Pantheon, we enable this kind of functionality through our AGCDN service, and you can hear more of what we have in mind in our DrupalCon demo.
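To illustrate the shape of this, here is a minimal, hypothetical edge-function sketch. It is not AGCDN's actual configuration; the x-geo-region header name and the variant paths are invented for the example, since each CDN surfaces geolocation under its own names.

```ts
// Hypothetical edge function: choose a homepage variant by visitor region
// without a round trip to the origin data center.
export default async function handleRequest(request: Request): Promise<Response> {
  // Assumed header; real CDNs expose geolocation under their own names.
  const region = request.headers.get('x-geo-region') ?? 'unknown';

  // Map coarse geography to pre-rendered variants of the homepage.
  const variants: Record<string, string> = {
    'us-west': '/home-warriors', // West Coast visitors
    'us-east': '/home-nets',     // East Coast visitors
  };
  const path = variants[region] ?? '/home'; // everyone else gets the default

  // Serve the variant while the URL the visitor sees stays the same.
  const url = new URL(path, request.url);
  return fetch(url.toString(), { headers: request.headers });
}
```

Because the decision happens at the CDN node nearest the visitor, the variant can be chosen, and often served from cache, without touching the origin at all.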
The counter-pressure I see to this kind of personalization (for website visitors) is the need to standardize (for website developers). It can be extraordinarily confusing for any given web developer to think through the implications of doing some website rendering:
- In the browser
- In a CDN
- In the central data center
The fewer variations a person needs to think through depending on What Computer Assembles The Website (see the related blog post and video), the more mental space they will have to focus on the reason the site exists. To be effective, every web professional needs enough time to think through how to apply their own individual skills and expertise alongside the skills of their co-workers to reach their shared goals.
Closing Advice
The interplay of specialized expertise and shared goals makes website operations a team sport. That sport got significantly more complex in the 2010s as every player on the team saw their position (front-end developer, designer, etc.) get harder. Going forward in the 2020s, I recommend that every web professional spend a little (or a lot) less time worrying about whether their own personal output matches the ever-churning definitions of "good" that come from Twitter discourse, and more time on team goals.
That's easier to do when we understand the history we came from, the big picture of how our work comes together (part 2 in this series), and the pressures we're all under (part 3 in this series). With that understanding, we can more effectively succeed with the state of the art and not fall off the cutting edge (part 1 in this series).