7 container design patterns you need to know

Containers are popular right now because they help move applications forward in a consistent, repeatable, and predictable manner, reducing manual effort and simplifying application management.

But how do you know if you are using containers correctly? This is where container design patterns come in. Here’s what you need to know about container design patterns and why you need them, along with seven common design patterns to consider and how to choose the one best suited to your needs.

Design patterns exist to help you solve common problems with containers. They also provide a common language when communicating about application architecture, so everyone can understand what is going on.

Design patterns ultimately help make containers reusable. Different users of these containers will each have their own goals. There are times when I don’t need a complex setup to test locally, but at the same time, I don’t want to change the architecture so much that I lose consistency when testing. That’s why having a baseline is useful: to reuse containers and make things easier to test.

The great thing about these patterns is that you can combine them to make applications more reliable and more fault tolerant. Here are seven that your team should consider.

1. The single-container design pattern

Using the single-container pattern simply means putting your app in a container. This is how you usually start your container journey. But it’s important to keep in mind that this pattern is all about simplicity, which means that the container should only have one responsibility. It is therefore an anti-pattern to have a web server and a log processor in the same container.

Containers are commonly used for web applications, where you expose an HTTP endpoint. But they can be used for a lot of different things.

In Docker, you can change the behavior of a container at runtime, thanks to the CMD and ENTRYPOINT instructions. So I’m not limited to using containers for HTTP services. I can also use them for any bash script that accepts certain parameters at runtime.

By letting containers change their behavior at runtime, you can create a basic container that can be reused in different contexts. So you would use the single-container pattern to expose an HTTP service, or to reuse a script without having to worry about its dependencies. And that would be a good choice, as long as you keep in mind that containers should only solve one problem.
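As a sketch of this idea, here is a hypothetical Dockerfile (the image and script names are made up) where ENTRYPOINT fixes the executable and CMD supplies default arguments that can be overridden at runtime:

```dockerfile
# Hypothetical single-responsibility container wrapping one script.
FROM alpine:3.19
COPY cleanup.sh /usr/local/bin/cleanup.sh
RUN chmod +x /usr/local/bin/cleanup.sh
# ENTRYPOINT fixes what runs; CMD provides default arguments.
ENTRYPOINT ["/usr/local/bin/cleanup.sh"]
CMD ["--days", "7"]
```

Running `docker run cleanup-image` would execute the script with `--days 7`, while `docker run cleanup-image --days 30` overrides the default: same container, different behavior at runtime.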

2. The sidecar design pattern

Containers should therefore have only one responsibility. But what about the use case I mentioned earlier, where you have a web server with a log processor? In fact, this is one of the exact problems the sidecar pattern aims to solve.

Using the sidecar pattern means extending the behavior of a container. In our web server example, the log processor could be a separate container that reads the logs the web server produces.

The web server will need to write these logs to a volume. In Docker, volumes can be shared with other containers. It’s best to have this separation because it facilitates packaging, deployment, resiliency and reuse, and also because not all containers will need or use the same resources.
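A minimal docker-compose sketch of that setup (image names are hypothetical): the web server writes to a named volume, and the sidecar reads the same volume:

```yaml
# Hypothetical compose file: a web server plus a log-processor sidecar
# sharing one volume. The web container writes logs; the sidecar reads them.
services:
  web:
    image: my-web-server
    volumes:
      - app-logs:/var/log/app
  log-processor:
    image: my-log-processor
    volumes:
      - app-logs:/var/log/app:ro   # read-only is enough for the sidecar
volumes:
  app-logs:
```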

With this pattern, you divide your system into different parts. Each part has its own responsibilities, and each solves a different problem. You are eating the elephant one small piece at a time.

3. The ambassador design pattern

If you are using the ambassador pattern, it means that you have a proxy for other parts of the system. It transfers the responsibility of load balancing, retries, or monitoring to something else. A container should have a single responsibility and be as simple as possible. For a container, communication with the outside world is simply an endpoint. It won’t know (or care) whether what sits behind that endpoint is a set of servers or a single server.

This is the pattern you would use when you want microservices to interact with each other. They don’t know exactly where the other microservices are; they just know they can find them by name. And for that, they need a discovery service. This discovery can be at the DNS level or at the application level, where microservices register themselves. Service discovery is responsible for keeping only healthy services in the registry.

In Docker, this is possible because containers can live on the same virtual network. When you use Docker Compose and link containers, it essentially modifies the “hosts” file so that a service is called by name, not by IP address. In addition, Docker supports environment variables to inject values, such as subdomains for a proxy server, that you can change depending on the environment.
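A small compose sketch of the ambassador idea (image names and the upstream address are hypothetical): the application only knows its ambassador by name, and the ambassador forwards traffic to whatever sits behind it:

```yaml
# Hypothetical compose file: the app talks to "db-ambassador" by name;
# only the ambassador knows where the real database lives.
services:
  app:
    image: my-app
    environment:
      - DB_HOST=db-ambassador        # the app never sees the real address
  db-ambassador:
    image: my-tcp-proxy              # forwards connections to the upstream
    environment:
      - UPSTREAM=db.internal.example:5432
```

Swapping the database, adding replicas, or moving it to another host only changes the ambassador’s configuration, never the app.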

4. The adapter design pattern

Using the adapter pattern means that communication between containers remains consistent. Having a standard means of communication through a set of contracts helps you always make requests in the same way and lets you expect the same response format. It also helps you easily replace an existing container without the consumer or client noticing, as the contract won’t change; only the implementation changes. You can also reuse this container elsewhere without having to worry about how other applications manage their logs.

Analyzing logs from different sources can be tedious if you don’t have a standard format. When you have a container that works as an adapter, it will receive raw logs. It will standardize and store the data in a centralized location. The next time you need to consume the logs, you will have a consistent format, so it will be easier to understand, correlate, and analyze the logs.
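As a small Python sketch of that idea (the log formats and the target schema here are hypothetical), an adapter can normalize log lines from different sources into one shared shape:

```python
import json
import re

# Hypothetical raw format: Apache-style access log lines.
APACHE_LIKE = re.compile(r'(?P<ip>\S+) - - \[(?P<ts>[^\]]+)\] "(?P<msg>[^"]+)"')

def adapt_apache(line: str) -> dict:
    """Normalize an Apache-style access log line into the shared schema."""
    m = APACHE_LIKE.match(line)
    return {"timestamp": m.group("ts"), "source": "web", "message": m.group("msg")}

def adapt_json_app(line: str) -> dict:
    """Normalize a JSON application log line into the same shared schema."""
    raw = json.loads(line)
    return {"timestamp": raw["time"], "source": "app", "message": raw["msg"]}
```

Consumers downstream only ever see the `timestamp` / `source` / `message` shape, no matter which container produced the raw line.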

The main principle here is that the adapter pattern allows a container to reuse a solution for a common problem in the system.

5. The leader election design pattern

If you use the leader election pattern, it means you are providing redundancy for consumers who need highly available systems. You can see this pattern in tools like Elasticsearch, part of the open source Elastic Stack. Elasticsearch’s architecture consists of multiple nodes, and each node holds chunks of data (shards) for replication and redundancy purposes.

When the service starts, a node is elected as the leader. If the leader goes down, the remaining nodes elect a new leader based on certain criteria, keeping the cluster healthy.

So how does this relate to containers?

Well, you can create a group of containers that communicate with each other directly. If one goes down, the remaining Elasticsearch containers will elect a new leader, and you can create a replacement in seconds, either manually or automatically, using an orchestrator such as Kubernetes. Doing the same with virtual machines or physical servers can take minutes or even hours.
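As a toy sketch of an election criterion (not a real consensus protocol — production systems such as Elasticsearch use quorum-based voting), the healthy node with the lowest id could become the leader:

```python
def elect_leader(nodes: dict) -> str:
    """Elect the healthy node with the lowest id as leader.

    `nodes` maps a node id to a boolean health flag. This is a
    deliberately simplified illustration of the pattern.
    """
    healthy = [node_id for node_id, is_healthy in nodes.items() if is_healthy]
    if not healthy:
        raise RuntimeError("no healthy nodes to elect")
    return min(healthy)
```

When the current leader’s health flag flips to false, re-running the election promotes the next eligible node, which is the behavior the cluster relies on.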

6. The work queue design pattern

The work queue pattern requires you to break a large task into smaller tasks to reduce execution time. You can think of this as the producer-consumer problem. Suppose a user asks you to transform 1 million records. That will take a long time. So, to speed up the process, you would use the work queue pattern and split the data into smaller chunks of 100 records each. The code that does the processing can be packaged into one container image, and then you can spin up 10 containers at the same time.
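A minimal Python sketch of that flow, where each worker thread stands in for one container pulling chunks off the queue (the transform itself is a made-up placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(records, size):
    """Split a large batch of records into fixed-size chunks."""
    return [records[i:i + size] for i in range(0, len(records), size)]

def transform(batch):
    """Placeholder for the real work one container would do on a chunk."""
    return [r * 2 for r in batch]

def process_all(records, size=100, workers=10):
    """Fan chunks out to workers; each worker stands in for one container."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(transform, chunk(records, size))
    # Flatten the per-chunk results back into one list.
    return [item for batch in results for item in batch]
```

In a real deployment, the chunking and the workers would be separate containers coordinated through a queue, but the division of labor is the same.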

Containers are really useful for batch processes. You might have to worry about whether your resources can support that concurrency, but if you don’t, there are tools and services like AWS Batch that help you manage resources. All you need to do is provide a container and launch a set of jobs at runtime.

Containers will help you make code reusable and portable. But coordination is a problem best solved by container orchestrators.

7. The scatter / gather design pattern

The scatter / gather pattern is quite similar to the work queue pattern in that it splits a large task into smaller ones. But there is a difference: the containers must produce a combined response for the user. So instead of firing off a bunch of tasks and not worrying about their results for a moment, in this pattern you need to combine all the partial answers into one. A very good example of this pattern is the MapReduce algorithm.

To implement this pattern, you need two containers. The first does the partial computation and returns all the necessary small chunks (map), usually not in any particular order. This container then makes a request to the second container, the one in charge of merging all the parts (reduce), to return data that is meaningful to the user.
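A small Python sketch of the two roles (word counting is just an illustrative workload): the map step runs concurrently over chunks, and the gather step merges the partial results:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def map_words(text_chunk):
    """'Scatter' step: compute a partial word count for one chunk."""
    return Counter(text_chunk.split())

def gather(partials):
    """'Gather' step: merge the partial counts into one answer."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

def scatter_gather(chunks):
    """Run the map step concurrently, then merge the partial results."""
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(map_words, chunks))
    return gather(partials)
```

In container terms, `map_words` and `gather` would each live in their own image, with the gather container exposed to the user.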

With this pattern, you can focus on developing each part independently, and you can launch and use as many containers as you need.

Which design pattern should you choose?

Which of these seven patterns you should choose depends on several factors. There is no silver bullet. Each design pattern has its own purpose and solves a different type of problem. In fact, you may want to apply more than one at a time in the same system.

These design patterns for containers allow you to focus on developing the mindset for understanding distributed systems. They give you the ability to reuse code and have fault-tolerant, high-availability architectures with optimized resources.

I’ve just scratched the surface of each pattern. I hope you’ve learned enough to know which ones may be suitable for your application. I encourage you to further explore the patterns that best target the issues you face.
