The modern software application comes in many shapes and sizes. From small, simple programs that run directly on a single machine to complex cloud-based services designed so that millions of instances can run simultaneously all over the world, there are apps at every point in between.
At the latter end of this scale, the ability to run the biggest and most complex apps quickly, efficiently and at scale across different environments today depends on development approaches like microservices and containerization.
Microservices and containers are ways of designing the structure, or architecture, of an application with the goal of maximising agility, scalability and portability across environments.
A microservices architecture splits the functions contained in an app into separate, independent ‘services’ that can be scaled and combined flexibly, and run in different parts of the stack to optimise performance. Containers wrap functions in ‘shells of code’ that bundle all the resources they need to run – a form of abstraction that makes them less dependent on their runtime environment and therefore delivers consistent performance wherever an app is hosted.
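To make the microservices idea concrete, here is a minimal sketch in Python. The service names, data and router are all invented for illustration – the point is simply that each service is a self-contained unit that can be scaled or replaced independently, and the application is a flexible composition of them.

```python
# Sketch only: two independent "services", each owning its own data and
# logic, composed by a simple router. In a real system these would be
# separate processes (often separate containers) communicating over a
# network, not classes in one file.

class InventoryService:
    """Owns stock data; knows nothing about other services."""
    def __init__(self):
        self._stock = {"sku-1": 3}

    def handle(self, request):
        return {"in_stock": self._stock.get(request["sku"], 0) > 0}

class PricingService:
    """Owns prices; can be scaled or replaced independently."""
    def __init__(self):
        self._prices = {"sku-1": 9.99}

    def handle(self, request):
        return {"price": self._prices.get(request["sku"])}

# The "application" is just a composition of independent services.
services = {"inventory": InventoryService(), "pricing": PricingService()}

def route(service_name, request):
    return services[service_name].handle(request)

print(route("inventory", {"sku": "sku-1"}))  # {'in_stock': True}
print(route("pricing", {"sku": "sku-1"}))    # {'price': 9.99}
```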
When containerized functions are pieced together into a microservices architecture, you get a fluid, scalable, resource-efficient application that performs consistently wherever it is deployed. It’s this that allows some of the biggest, most complex cloud and web apps in the world – the likes of Google, Facebook and Amazon – to keep performing consistently at massive scale.
From orchestration to communication
For all the benefits this model brings, it is complex and technically challenging to achieve. One of the major challenges is how to make all the different parts – all the containers and individual microservices, which may number in their thousands or tens of thousands – work well together.
In container development, this is known as orchestration, and tools like Kubernetes and Docker Swarm have evolved to help manage the critical but mammoth task of coordinating individual containers into a coherent whole.
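At its heart, orchestration is a continual reconciliation exercise: compare the state you want with the state you observe, and act on the difference. The toy loop below sketches that idea in Python – it is not how Kubernetes is actually implemented, and the service names are invented.

```python
# Toy reconciliation in the spirit of an orchestrator: compare the
# desired number of container replicas per service with what is
# actually running, and emit the actions needed to close the gap.

def reconcile(desired, running):
    """Return (action, service, count) tuples to make `running`
    match `desired`. Both arguments map service name -> replicas."""
    actions = []
    for service, want in desired.items():
        have = running.get(service, 0)
        if want > have:
            actions.append(("start", service, want - have))
        elif want < have:
            actions.append(("stop", service, have - want))
    return actions

print(reconcile({"web": 3, "worker": 2}, {"web": 1, "worker": 4}))
# [('start', 'web', 2), ('stop', 'worker', 2)]
```

A real orchestrator runs this kind of loop continuously, which is why it can recover automatically when containers crash: the observed state drifts from the desired state, and the loop corrects it.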
However, while Kubernetes and similar tools help to automate orchestration tasks that would otherwise be impossible to provision manually, what they don’t necessarily do is map or track the interactions between different services, making it difficult to monitor or optimise performance.
To achieve these goals, developers working in microservices and containerised environments are increasingly turning to a different kind of coordination approach – service mesh.
Let’s use a simple analogy to explain what a service mesh is. Many of us have become familiar with working in remote, distributed teams over the past 18 months, as COVID-19 has kept us away from the office. A distributed team of colleagues is like an application built on a microservices architecture: each individual is separate and autonomous, and they don’t even share the same building – or, in the application’s case, the same programming framework. Yet they work together to achieve a shared goal.
But of course, for a remote team to achieve a shared goal, one ingredient is vital – communication. People can work from home or wherever they choose, but they need a communication infrastructure to keep them connected and collaborating: the internet, shared cloud-based resources, instant messaging, telephones and so on.
A service mesh is this communication infrastructure for microservices and containers. It forms a distinct purpose-built layer in the application stack designed specifically to manage data exchange between individual containers and services, and to document how well they interact together.
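In practice, a mesh routes every service-to-service call through a proxy layer (real meshes use sidecar proxies such as Envoy) that handles the communication concerns on the services’ behalf. The sketch below illustrates just one of those concerns – recording who talked to whom, and whether the call succeeded – with invented names throughout.

```python
# Sketch of the service-mesh idea: inter-service calls pass through a
# proxy layer that documents each interaction. Real meshes run one
# proxy per service instance and do far more (routing, retries, TLS).

class MeshProxy:
    def __init__(self):
        self.call_log = []  # observability: a record of interactions

    def call(self, source, target, handler, request):
        try:
            response = handler(request)
            self.call_log.append((source, target, "ok"))
            return response
        except Exception:
            self.call_log.append((source, target, "error"))
            raise

mesh = MeshProxy()
result = mesh.call("checkout", "pricing", lambda r: {"price": 9.99}, {})
print(mesh.call_log)  # [('checkout', 'pricing', 'ok')]
```

Because every call crosses the mesh, this interaction log exists without any of the services themselves containing monitoring code – which is precisely what makes the mesh useful for observability.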
The benefits of service mesh
In broad terms, then, a service mesh provides the insight needed to monitor the performance of complex applications, which is critical to DevOps and continuous improvement approaches. But beyond that, a service mesh offers an alternative model to Kubernetes and other orchestration tools for managing and determining performance across the application, in a way that makes life easier for developers.
For example, as a dedicated service-to-service communication network, a service mesh is ideal for applying policies that determine the behaviour of all components in the network. By defining network policies, you set rules and limits for how different services interact with one another – and, by extension, how individual services can and cannot behave.
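A policy, at its simplest, is an allow-list that the mesh enforces on every call. The sketch below shows the shape of the idea; the policy format and service names are invented, and real meshes (Istio, Linkerd and others) express policies declaratively rather than in code.

```python
# Sketch of policy enforcement at the mesh layer: a rule set defines
# which service-to-service calls are permitted, and the mesh refuses
# everything else before it reaches the target service.

ALLOWED = {
    ("checkout", "pricing"),
    ("checkout", "inventory"),
}

def is_allowed(source, target, policies=ALLOWED):
    return (source, target) in policies

def mesh_call(source, target, handler, request):
    if not is_allowed(source, target):
        raise PermissionError(f"policy denies {source} -> {target}")
    return handler(request)

print(mesh_call("checkout", "pricing", lambda r: "ok", {}))  # ok
# mesh_call("pricing", "inventory", ...) would raise PermissionError
```

The key point is that the rule lives in the mesh, not in the services: a service physically cannot make a call the policy forbids, which is what the text means by policies dictating how services can and cannot behave.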
In the highly complex networked environments that large microservices applications create, this allows developers to set parameters for resilience, security, QoS, performance analysis and more that will persist even in highly volatile conditions.
On the topic of security, for example, use of a service mesh allows networking concepts like Transport Layer Security (TLS) to be applied to application architecture. Microservices applications are trickier to secure because, instead of having a set of functions operating behind a single defensible layer, you have a web of interactions between individual elements, where each strand or connection poses a potential vector for attack.
A TLS approach delivered via a service mesh allows authentication, encryption, security policies and more to be applied to traffic between services, thus securing the strands of communication.
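Meshes typically go a step further and make the TLS mutual: each side of a connection verifies the other’s certificate. The Python `ssl` snippet below sketches that posture only – it builds the contexts but omits the certificate issuing and rotation a real mesh handles automatically.

```python
import ssl

# Sketch of the TLS posture a mesh enforces between services: the
# client verifies the server, and the server additionally demands a
# valid certificate from the client (mutual TLS). No actual
# certificates are loaded here; a real mesh issues and rotates them.

# Client side: verify the server's certificate and hostname.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
client_ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Server side: require a certificate from the client too, which is
# what makes the TLS mutual.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.verify_mode = ssl.CERT_REQUIRED
server_ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(client_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Because the mesh’s proxies terminate these connections, individual services get encrypted, authenticated traffic without containing any TLS code themselves.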
Likewise, resilience can be built into the application via the service mesh using networking concepts like rate limiting, circuit breaking and timeouts.
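Circuit breaking is the least familiar of those concepts, so here is a minimal sketch of it: after repeated failures, the mesh stops sending traffic to a struggling service and fails fast instead of letting callers pile up waiting. The threshold and behaviour here are invented for illustration.

```python
# Sketch of a circuit breaker as a mesh might apply it: count
# consecutive failures to an upstream service, and once a threshold is
# reached, "open" the circuit and reject calls immediately rather than
# letting them wait on a service that is already struggling.

class CircuitBreaker:
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.open = False  # open circuit = stop sending traffic

    def call(self, handler, request):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            response = handler(request)
            self.failures = 0  # a success resets the count
            return response
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True
            raise

breaker = CircuitBreaker(failure_threshold=2)

def flaky(_request):
    raise TimeoutError("upstream too slow")

for _ in range(2):
    try:
        breaker.call(flaky, {})
    except TimeoutError:
        pass

print(breaker.open)  # True: further calls now fail fast
```

Real meshes also reset the breaker after a cooling-off period so traffic can resume once the upstream service recovers.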
There are drawbacks to service mesh. By adding an extra layer to the application stack, you add another layer of complexity on top of what microservices and containerisation already represent, and it requires developers to embrace yet more new concepts, this time borrowed from the field of networking.
Yet as application development and deployment become increasingly ambitious, these are concepts that programmers will have to embrace more and more. Just as software architecture design is now as critical a concern as coding expertise, so the application developers of the future may need to be fluent in networking concepts as the most effective way to orchestrate and manage the most fluid, dynamic cloud-native services.