A vision for loosely-coupled and high-performance serverless architecture

Explore the key missing pieces required to achieve loose coupling between services and low end-to-end latency in your systems

When we talk about serverless architectures, we often talk about the natural fit between serverless and microservices. We’re already partitioning code into individually deployed functions, and the current focus on RPC-based microservices will eventually shift to event-driven microservices as serverless-native architectures mature.

We can generally draw nice microservice boundaries around components in our architecture diagrams. But that doesn’t mean we can actually implement those microservices in a way that achieves two important goals: 1) loose coupling between services, and 2) low end-to-end latency in our system.

In this series of posts, I’ll explore the key missing pieces using AWS as an avatar for all serverless platform providers.

Service Discovery is an essential part of a modern microservice architecture. The lack of a Service Discovery as a Service offering in a provider’s ecosystem forces customers to implement their own partial solutions.
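As a concrete illustration of one such partial solution, here is a minimal sketch of looking up another service’s endpoint in AWS Systems Manager Parameter Store, a common do-it-yourself approach. The parameter naming convention and the service names are hypothetical, not a platform feature.

```python
import boto3

ssm = boto3.client("ssm")

def discover_endpoint(service_name: str, stage: str = "prod") -> str:
    """Look up a service's endpoint from a shared parameter namespace.

    Assumes every service registers its endpoint under a conventional
    parameter name at deploy time, a convention we invent ourselves
    because the platform offers no discovery service.
    """
    response = ssm.get_parameter(Name=f"/services/{stage}/{service_name}/endpoint")
    return response["Parameter"]["Value"]

# Callers have to know the convention, and there is no health checking
# or versioning: exactly the kind of partial solution described above.
orders_url = discover_endpoint("orders")
```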

Because FaaS is billed by execution time, time spent waiting is money wasted, and synchronous invocation of other functions means double billing: you pay for the caller’s idle time as well as the callee’s execution. However, despite steps in the right direction, asynchronous call chains are not sufficiently supported by providers’ platforms.
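To make the double-billing point concrete, here is a minimal sketch using boto3 of handing work off to another Lambda function asynchronously rather than waiting on a synchronous call. The function names are hypothetical.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    payload = {"order_id": event["order_id"]}

    # Synchronous call: this function sits idle, and is billed, while the
    # downstream function runs. That is the double billing described above.
    #
    # lambda_client.invoke(
    #     FunctionName="downstream-service",   # hypothetical name
    #     InvocationType="RequestResponse",
    #     Payload=json.dumps(payload),
    # )

    # Asynchronous call: hand off the event and return immediately. The
    # platform queues the invocation, but composing longer call chains this
    # way still has to be stitched together by hand.
    lambda_client.invoke(
        FunctionName="downstream-service",       # hypothetical name
        InvocationType="Event",
        Payload=json.dumps(payload),
    )
    return {"status": "accepted"}
```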

Event-driven architectures are a more natural fit for FaaS and serverless, but there are key difficulties with existing services, such as limited fanout and lack of checkpointing support, that prevent robust implementations.
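As one example of the checkpointing gap, a stream consumer has to track its own progress. Here is a minimal sketch of a Kinesis-triggered function that records the last processed sequence number in a DynamoDB table; the table name and the processing logic are hypothetical.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
checkpoints = dynamodb.Table("stream-checkpoints")  # hypothetical table

def process(record):
    """Placeholder for real per-record processing."""
    pass

def handler(event, context):
    # Because the platform does not checkpoint partial progress, a failure
    # mid-batch means the whole batch is retried. Recording the last
    # processed sequence number ourselves is one workaround.
    for record in event["Records"]:
        process(record)
        checkpoints.put_item(
            Item={
                "shard_id": record["eventID"].split(":")[0],
                "last_sequence_number": record["kinesis"]["sequenceNumber"],
            }
        )
```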

On the cloud side, a microservice should control the APIs it exposes to other services and to clients. On the client side, there should be one cloud endpoint exposing an API that brings together all the services. These two goals are in conflict; existing API gateways don’t facilitate a good solution.
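To illustrate the tension, here is a minimal sketch of the glue that often gets hand-rolled today: a single front-door function behind one endpoint, proxying to functions owned by individual services. The routing table and function names are hypothetical; the point is that this aggregation is the kind of thing an API gateway could own.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# One client-facing endpoint, many owning services: a hand-rolled routing
# table from path prefixes to each service's function (names hypothetical).
ROUTES = {
    "/orders": "orders-service-api",
    "/users": "users-service-api",
}

def handler(event, context):
    path = event.get("path", "/")
    for prefix, function_name in ROUTES.items():
        if path.startswith(prefix):
            response = lambda_client.invoke(
                FunctionName=function_name,
                Payload=json.dumps(event),
            )
            return json.loads(response["Payload"].read())
    return {"statusCode": 404, "body": "no service owns this path"}
```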

The ability to perform a controlled, phased rollout of new code is essential to operations at scale. Existing serverless platforms don’t provide this functionality at either the FaaS or API gateway level, and we need it in both places.
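As an illustration of the control we are after, here is a minimal sketch of a hand-rolled canary in which the caller routes a small slice of traffic to a new version by alias. The function name, the aliases, and the weight are all hypothetical; the point of the paragraph above is that the platform, not each caller, should own this logic.

```python
import json
import random
import boto3

lambda_client = boto3.client("lambda")

# Fraction of traffic sent to the new version during the phased rollout.
NEW_VERSION_WEIGHT = 0.05

def invoke_with_rollout(payload: dict) -> dict:
    # Caller-side canary routing between two hypothetical aliases.
    qualifier = "new" if random.random() < NEW_VERSION_WEIGHT else "stable"
    response = lambda_client.invoke(
        FunctionName="my-service",    # hypothetical function
        Qualifier=qualifier,
        Payload=json.dumps(payload),
    )
    return json.loads(response["Payload"].read())
```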

Permissions in serverless architectures are highly dependent on the providers’ IAM systems, which may use some mix of role-based access control, policy-based access control, and perhaps other schemes. These can present difficulties by coupling infrastructure components across microservice boundaries.
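Here is a small sketch of that coupling, expressed as a policy document in Python; the account ID, queue name, and services involved are hypothetical. The consuming service’s execution role has to name the producing service’s resource directly, so the two services’ infrastructure definitions become entangled.

```python
# IAM policy attached to the (hypothetical) billing service's execution role.
CONSUMER_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["sqs:SendMessage"],
            # The orders service's queue ARN is baked into the billing
            # service's infrastructure: a deployment-time coupling that
            # crosses the microservice boundary.
            "Resource": "arn:aws:sqs:us-east-1:123456789012:orders-intake",
        }
    ],
}
```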

In IaaS, availability zones allow customers to build resiliency in the face of provider incidents without incurring the high overhead of cross-region architectures. Serverless platforms are usually region-wide and therefore resilient to incidents in the underlying IaaS, but they need an availability-zone-like concept to allow customers to be resilient in the face of software problems in the serverless platform itself.
