Serverless Hosting Fundamentals and Best Practices

Let’s be honest – no one really likes managing servers. Yet, we’ve all found ourselves SSH’d into a server at 2 a.m., babysitting CPU loads, patching operating systems, nervously watching traffic spikes during a big launch – or avoiding thinking about it all and hoping the server doesn’t explode while we’re not looking. And for a long time, that was just part of the deal.

Thankfully, things are shifting. There’s a new way of thinking about hosting where servers don’t slow you down or get in your way: serverless hosting.

And despite the name, it's not about getting rid of servers (spoiler: they're still there). Rather, it refers to an infrastructure setup that frees you from managing servers, helping teams build and ship faster and scale automatically.

Let’s unpack what serverless hosting really is, why it matters, when to use it (and when not to) and how it fits into a modern web stack – especially for platforms like WordPress and Drupal.

What is serverless hosting?

The name “serverless hosting” is a little misleading: despite what it suggests, “serverless” just means the developer doesn’t need to manage, or even think about, the servers. The infrastructure is abstracted away entirely.

Behind the scenes, there are still physical or virtual servers running the code, but these are fully managed by the cloud provider (like AWS, Google Cloud or Azure). They take care of all the server setup, maintenance and scaling.

And yes, the serverless model is carving out a place in content-heavy ecosystems like WordPress and Drupal, where developers and marketers are both demanding more speed, security and flexibility. It’s not plug-and-play, but with the right setup (especially for headless or static sites) and a little effort, it can work well.

Traditional vs. serverless hosting

Traditional hosting often requires you to rent or own a physical server or a virtual machine (VM), where you have to decide how much computing power, storage and bandwidth you need up front. Also, you’re paying for those resources whether or not your application is actually using them fully. 

But with serverless, the system scales automatically depending on demand. If your site gets a sudden spike in traffic, the cloud provider just adds more resources without you lifting a finger. When the traffic drops, it scales back down, saving you money by only charging for the actual resources you use.

Essentially, this is possible because serverless hosting is event-driven, which means your code will only run in response to specific events (like a user clicking a button or uploading a file). Each event triggers a small piece of your application code (function), which runs in an isolated environment (container) without any memory of previous code executions (stateless).
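To make that concrete, here’s a minimal sketch of an event-driven, stateless function in Python. The event shape and handler name are illustrative, not any particular provider’s actual API:

```python
import json

def handle_upload(event, context=None):
    """A stateless handler: everything it needs arrives in `event`,
    and nothing is remembered between invocations."""
    # `event` might describe a file upload; the field names are illustrative.
    filename = event.get("filename", "unknown")
    size_kb = event.get("size_bytes", 0) / 1024
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"received {filename} ({size_kb:.1f} KB)"}),
    }

# Each call is an independent execution -- no shared in-memory state.
result = handle_upload({"filename": "report.pdf", "size_bytes": 2048})
print(result["statusCode"])  # 200
```

Each invocation starts from the event alone, which is exactly what lets the platform run many copies of the function in parallel.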

Top cloud providers offering serverless hosting

AWS Lambda

A screenshot of AWS Lambda’s homepage.

Lambda is Amazon's flagship serverless compute service that runs code in response to events without requiring server management. It scales automatically, running code for virtually any type of application or backend service with zero administration.

Lambda is also incredibly mature and tightly integrated into the broader AWS ecosystem. So, if you're already using services like S3, DynamoDB or API Gateway, Lambda is a natural fit.

It uses a pay-per-use model based on the number of invocations and the duration and memory used by your function. This makes it particularly cost-effective for sites with irregular traffic patterns. Plus, there’s a free tier that includes 1 million requests and 400,000 GB-seconds of compute time per month.

However, although it's built for scale and performance, it can feel a bit heavy for newcomers, with lots of AWS-specific tooling and a learning curve to match.

Azure Functions

A screenshot of Azure Functions’ product page.

Azure Functions is Microsoft’s version of Lambda that allows developers to build applications while using less code and infrastructure, thanks to its rich set of triggers and bindings. Triggers define how a function is invoked, while bindings make it easy to connect to other Azure services for input or output. For instance, you could have a function triggered when a file is uploaded to Azure Blob Storage, and the output binding could directly save processed data to a database, without you writing the boilerplate code to connect to Blob Storage or the database. Azure Functions handles that wiring for you.

If your team works in .NET or you're running apps tied into Microsoft services like Active Directory or Dynamics, Azure Functions might feel like home. It also fully supports open-source stacks (Node.js, Python, etc.), so you can use it for any kind of application.

The pricing model is similar to AWS Lambda’s – you pay per execution and resource usage under the consumption plan, with a monthly free grant of 1 million executions.

Cloud Run functions (formerly Google Cloud Functions)

A screenshot of Google Cloud Run Functions’ product page.

Cloud Run functions is Google Cloud's latest evolution in serverless computing, merging the simplicity of function-as-a-service (FaaS) with the scalability and flexibility of Cloud Run. It’s clean, simple and designed to play well with Firebase, BigQuery and Google’s AI/ML tools. It allows developers to run code in response to events like HTTP requests, cloud storage changes or Pub/Sub messages. Plus, it offers automatic scaling and a simplified development experience.

While developers still write and deploy source code, behind the scenes, the service automatically builds this code into containers and deploys them as Cloud Run services. This architecture gives developers the simplicity of serverless functions with the power and flexibility of containerized applications.

Again, just like AWS Lambda and Azure Functions, you are billed based on the number of invocations and the resources (memory and CPU) used during function execution.

When serverless makes sense (and when it doesn't) 

Serverless is a great fit when you’re building microservices like API endpoints, form handlers, image processing, scheduled jobs, etc. For example, you can create a payment webhook that triggers a serverless function to update order status or notify the user.
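A sketch of that webhook pattern in Python – the provider, event shape and in-memory order store are all illustrative (a real deployment would keep orders in an external database):

```python
import json

# Hypothetical payment-webhook handler: the payload fields and order store
# are illustrative, not a specific vendor's API. In production, ORDERS
# would be an external database, not module state.
ORDERS = {"ord_123": {"status": "pending"}}

def payment_webhook(event, context=None):
    payload = json.loads(event["body"])
    order = ORDERS.get(payload["order_id"])
    if order is None:
        return {"statusCode": 404, "body": "unknown order"}
    if payload["event"] == "payment.succeeded":
        order["status"] = "paid"  # here you might also notify the user
    return {"statusCode": 200, "body": json.dumps(order)}

resp = payment_webhook({"body": json.dumps(
    {"order_id": "ord_123", "event": "payment.succeeded"})})
print(resp["statusCode"])  # 200
```

The function only runs when the payment provider calls it, which is the event-driven model at its most natural.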

You’ll also benefit from serverless architecture if you’re working on projects with variable/unpredictable traffic or have a JAMstack (JavaScript, APIs, Markup) setup, headless CMS environment, or CI/CD workflow where speed and modularity are a must.

However, it may not be suitable for the following cases:

  • Long-running processes: Serverless platforms typically impose limits on the maximum execution duration of a function. This could be an issue for tasks such as complex data processing, video encoding or simulations that require prolonged computation.
  • Cold start latency: When a serverless function is invoked after being idle for a while, it can experience a delay in starting up, known as a cold start. This latency can be problematic for applications that require low or predictable latency, such as real-time applications.
  • Stateful applications: Serverless is inherently stateless. If your application requires maintaining state across multiple function invocations, it can become cumbersome to manage the state externally, such as storing session information in databases or storage systems. Complex workflows that require in-memory state or session persistence might not perform well in a serverless environment.
  • High-performance/resource-intensive workloads: Tasks that require substantial computational power, such as machine learning model training, scientific simulations or high-resolution video processing, may not run efficiently or effectively in a serverless environment. Serverless platforms might not provide the required CPU, memory or networking capabilities for these kinds of tasks.
  • Vendor lock-in: Since they are typically tied to a specific cloud provider’s infrastructure, transitioning to another provider or maintaining a multi-cloud strategy could become complex and costly. This can be an issue for organizations that prefer more control over their infrastructure or want to avoid the limitations that come with being tied to a single cloud provider.
  • Complex orchestration: While serverless is good for isolated tasks, orchestrating multi-step processes across different services or handling long-running transactions with consistent state can be challenging. These applications often require external orchestration tools (like AWS Step Functions), which adds additional complexity to the system.

Serverless hosting best practices

Architect for microservices and statelessness

Designing your application as a collection of small, independent services, known as microservices, allows for better scalability and maintainability. Each microservice should be designed to perform a specific function and communicate with other services through well-defined APIs.

Also, statelessness is a fundamental principle in serverless architectures. Each function should operate independently, without relying on any internal state between executions. This approach ensures that functions can scale efficiently and be executed in parallel without conflicts.
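A small sketch of that principle: the function keeps nothing in memory between calls, and all state round-trips through an external store. `SessionStore` here is an in-memory stand-in for a real database or cache (e.g., Redis):

```python
# Stateless-function sketch: all state lives in an external store.
# `SessionStore` is an in-memory stand-in for a real database or cache.

class SessionStore:
    """Simulates an external store shared across function invocations."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key, {"count": 0})
    def put(self, key, value):
        self._data[key] = value

def count_visit(event, store):
    # The function carries no memory of its own between calls;
    # state is read from and written back to the external store.
    session = store.get(event["session_id"])
    session["count"] += 1
    store.put(event["session_id"], session)
    return session["count"]

store = SessionStore()
count_visit({"session_id": "abc"}, store)         # 1
print(count_visit({"session_id": "abc"}, store))  # 2
```

Because the function itself is stateless, the platform can run any number of copies against the same store without conflicts.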

Optimize function performance

To enhance the performance of serverless functions, it's crucial to minimize cold start times. Cold starts occur when a function is invoked after being idle, leading to a delay in execution. Reducing the size of your deployment package and optimizing initialization code can help mitigate this issue.

Additionally, adjusting memory and timeout settings based on the function's requirements can lead to cost savings and improved performance. Over-allocating memory can increase costs, while under-allocating can lead to timeouts and failures.
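One common way to shrink cold-start impact is to run expensive initialization once at module load, outside the handler, so warm invocations reuse it. A minimal sketch, where `load_model` stands in for any slow setup (SDK clients, config parsing, loading an ML model):

```python
import time

def load_model():
    """Stand-in for expensive initialization work."""
    time.sleep(0.01)  # pretend this is slow
    return {"ready": True}

# Runs once per container, at cold start -- not on every invocation.
MODEL = load_model()

def handler(event, context=None):
    # Warm invocations skip load_model() entirely and reuse MODEL.
    return {"statusCode": 200, "ready": MODEL["ready"]}

print(handler({})["statusCode"])  # 200
```

The same idea applies to trimming the deployment package: less to load at cold start means less to wait for.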

Implement robust security measures

Security in serverless architectures requires a proactive approach. Implementing least privilege access ensures that each function has only the permissions it needs to perform its task, minimizing potential attack surfaces.

Using secure API gateways can help manage and secure access to your functions. Incorporating authentication and authorization mechanisms, such as OAuth 2.0, ensures that only authorized users and services can invoke your functions.

Also, regularly update dependencies and use secure coding practices to further enhance the security of your serverless applications.
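Alongside OAuth, webhook-style endpoints are often protected with an HMAC signature check before the function does any work. A minimal stdlib sketch – the secret handling and payload here are illustrative (a real secret belongs in a secrets manager, not in code):

```python
import hmac
import hashlib

SECRET = b"shared-webhook-secret"  # illustrative; use a secrets manager in production

def verify_signature(body: bytes, signature_hex: str) -> bool:
    """Check that the request body was signed with the shared secret."""
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information via timing differences
    return hmac.compare_digest(expected, signature_hex)

body = b'{"order_id": "ord_123"}'
good_sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
print(verify_signature(body, good_sig))    # True
print(verify_signature(body, "deadbeef"))  # False
```

Rejecting unsigned requests at the top of the function keeps unauthorized callers from ever reaching your business logic.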

Adopt effective deployment strategies

Implementing blue-green deployments allows for smooth transitions between application versions, reducing downtime and minimizing the risk of introducing errors. In this strategy, two identical environments are maintained, and traffic is switched between them during deployments.

Alternatively, canary releases involve rolling out new versions to a small subset of users before full deployment. This approach helps identify issues early and ensures that the new version performs as expected.
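A canary split can be as simple as hashing a stable user identifier into a percentage bucket. A sketch (the routing logic and percentages are illustrative):

```python
import hashlib

def assign_version(user_id: str, canary_percent: int = 10) -> str:
    """Deterministically route a fixed percentage of users to the canary."""
    # A stable hash means the same user always lands in the same bucket,
    # so their experience stays consistent across requests.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# Roughly `canary_percent`% of users see the new version.
print(assign_version("user-42"))
```

If error rates on the canary stay healthy, the percentage is raised until the new version takes all traffic; if not, routing everyone back to stable is a one-line change.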

And, of course, automating deployment processes through CI/CD pipelines ensures consistent and reliable releases, reducing human errors and accelerating development cycles.

Monitor and log extensively

Monitoring key performance metrics, such as execution duration and error rates, enables proactive detection of anomalies and performance bottlenecks. Setting up alerts based on these metrics ensures timely responses to potential issues.

Consider leveraging centralized logging to aggregate logs from all functions, making it easier to monitor and troubleshoot issues. Tools like AWS CloudWatch, Azure Monitor and Google Cloud Logging can be utilized for this purpose.

Distributed tracing tools, like AWS X-Ray or OpenTelemetry, provide insights into the flow of requests across services, helping to identify latency issues and optimize performance.
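A simple way to capture those metrics is to emit one structured log line per invocation. This sketch wraps a handler to record duration and outcome as JSON (the field names are illustrative), which centralized logging tools can then aggregate and alert on:

```python
import json
import time
import functools

def monitored(func):
    """Wrap a handler to emit a structured JSON log line per invocation."""
    @functools.wraps(func)
    def wrapper(event, *args, **kwargs):
        start = time.perf_counter()
        outcome = "error"
        try:
            result = func(event, *args, **kwargs)
            outcome = "ok"
            return result
        finally:
            # One machine-parseable line per call: easy to aggregate centrally.
            print(json.dumps({
                "function": func.__name__,
                "duration_ms": round((time.perf_counter() - start) * 1000, 2),
                "outcome": outcome,
            }))
    return wrapper

@monitored
def hello(event):
    return {"statusCode": 200}

hello({})  # also prints a JSON log line like {"function": "hello", ...}
```

In practice the platform’s own tooling captures invocation metrics too; structured application logs complement them with business-level detail.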

How Pantheon delivers WordPress and Drupal without infrastructure headaches 

What if you want the benefits of serverless hosting for your WordPress or Drupal websites, but without the complexity? 

Well, that’s completely possible because at Pantheon, we took the serverless philosophy – automated scaling, on-demand performance and zero maintenance – and baked it into a platform purpose-built for open-source CMS. No servers to manage and no plugins to wrangle for performance tuning. Just click, push, deploy, and Pantheon handles the rest.

Here’s how we do this:

  • Container-based architecture: Pantheon's platform utilizes a container-based system to run WordPress and Drupal sites. Each container is a highly tuned PHP-FPM worker connected via an nginx web server, handling both static and dynamic requests. This setup allows for efficient scaling and management of resources without the need for traditional server maintenance.
  • Automated workflows and maintenance: Routine tasks such as security patches, backups and scaling are automated. This reduces the manual workload on development teams, allowing them to focus on innovation and content creation rather than infrastructure management.
  • Global content delivery network (CDN): Pantheon’s built-in global CDN ensures fast content delivery by caching assets closer to end-users. This enhances site performance and reliability, especially during traffic spikes. 
  • Integrated development tools: Tools like Multidev environments and a command-line interface (Terminus) facilitate development workflows by enabling teams to create, test and deploy changes efficiently, supporting agile development practices. 

Choose the right solution for you

Serverless hosting isn’t just a trend but a shift in how we think about building for the web. But like any shift, it’s not one-size-fits-all. The key is choosing tools and platforms that meet you where you are and help you grow into where you’re going.

At Pantheon, we believe the future of web development is composable, agile and, yes, serverless in spirit. So if you're ready to level up your hosting, ditch the servers, keep the power and start using a platform that’s built for the way modern teams actually work.

Want to see Pantheon in action? Watch our demo now!