Microservices are a popular architecture across many IT domains. They underpin the success of many organizations, but on the other hand are often the subject of quite strong criticism. How can you avoid the potential problems and properly use the mechanisms that microservices provide?
The objectives of introducing microservices
We already discussed the business expectations placed on IT in an organization in the first part of the article. To recap: IT should be able to support a growing number of customers, keep the solution and its data secure, and support the building of new functions (innovation, integration with partners), all while maintaining cost effectiveness.
Microservices are meant to be the answer to these expectations. In the first two parts of the article we described the features of this approach. We will now try to address the expected benefits of implementing this architecture. Of course, expectations vary; below we have collected the most frequently mentioned set:
- fast and scalable creation of solutions,
- maintaining high ability to change,
- the ability to deliver software frequently and quickly,
- ease of production maintenance,
- efficient use of computing resources,
- matching the requirements of the cloud.
Most of the benefits of microservices are presented in contrast to the opposing monolithic model. Of course, microservices and the monolith are not the only two available models of solution architecture. However, in this article we will sometimes refer to the monolith in order to present the differences between the two approaches more clearly.
Do microservices really bring the expected benefits?
It depends. In our opinion, this architecture does allow the expected benefits to be achieved, but a naive understanding of microservices, as merely dividing the solution into small modules, unfortunately does not guarantee it. Many details matter, and only together, properly handled, do they unlock the full potential of this approach. We will now go through the individual expectations one by one and consider how to implement microservices so as to fulfill these objectives as far as possible.
Fast and scalable solution creation
The business always expects smooth implementation of requested changes and, in general, quick delivery of ready-made solutions. Often changes are forced by the regulatory environment, or the business arrangements between various stakeholders drag on and little time is left for the actual implementation. By delivering faster and working in parallel we can deliver more.
Do microservices allow solutions to be created quickly?
Yes, of course. It is very easy to create new modules, and it is also easy to change existing ones, because they are small and have focused logic. Microservices also allow different teams to work in parallel. Each module is independent, so less communication is needed. Because individual modules are smaller than in other methodologies, they are easier to understand, modify, test and deploy. Together this means faster delivery, and that is what we wanted.
Can you work quickly in a monolith too?
In certain situations, yes. If we are a startup, at the beginning of software development we will actually see results sooner with a monolith. In a monolith we can get started faster; microservices require more initial investment in the solution architecture and work organization. The situation reverses once we are a larger, more mature organization and our solution becomes more complicated. Due to the lack of independence between modules, development of the monolith slows down: changes frequently require analysis of large business areas, regular coordination becomes necessary, and side effects appear. At the beginning of solution development, however, the monolith has its advantages.
Do you always have to create solutions quickly?
The faster the better. Theoretically. But usually, by increasing the speed of delivery, we lose something. Fast may mean "without proper documentation", "without adequate implementation of cross-cutting concerns", "without full testing" or "without maintaining the consistency of the solution". We don't want that. Such an approach makes it difficult to maintain the ability to change, leads to frequent code duplication, and slows down future development. In general, avoiding documentation is one of the biggest mistakes in system architecture. Insufficient test quality means failed deployments and errors visible to clients.
Microservices are demanding. While implementing changes in this architecture can be very fast, ensuring the appropriate quality of the solution requires a lot of work. The advantage of this approach, however, is the possibility of experimenting with technologies, as well as obtaining quick feedback on the solution, even when it is not yet ready for production launch.
What else to watch out for?
Team independence combined with the expected fast pace of delivery may mean that a single module remains easy to maintain, but the system as a whole gets stuck in its complexity. In a microservice architecture it is easy to create new modules, but integrating them with the entire system is often not straightforward.
It is worth watching out for all kinds of "shortcuts", such as reusing code across modules or tightly coupling components (which we wrote about in the second part of the article). In the short term this often allows a solution to be built faster, but it reduces the independence of services, which makes future changes more difficult and protracted. A solution built "quickly", as an experiment, can stay with us for a long time. Keep the documentation up to date, so that we always have a full understanding of the solution architecture.
If fast solution creation is to be a constant feature of our system, it is also advisable to provide a certain framework for action: for example, solution-wide standards for service integration, security mechanisms, tracing and logging.
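As a simple illustration, such a solution-wide standard could look like the following sketch (the helper names and header name are our own, hypothetical choices): every request carries a correlation ID that is created at the edge of the system and propagated between services, so that logs from different modules can be tied together.

```python
import logging
import uuid

# Hypothetical, solution-wide convention: every inter-service request carries
# a correlation ID in this header; services propagate it and log it uniformly.
CORRELATION_HEADER = "X-Correlation-Id"

def with_correlation(headers: dict) -> dict:
    """Return a copy of the outgoing headers, adding a correlation ID if absent."""
    headers = dict(headers)
    headers.setdefault(CORRELATION_HEADER, str(uuid.uuid4()))
    return headers

def log_event(service: str, headers: dict, message: str) -> str:
    """Emit a log line in the one format shared by all services."""
    cid = headers.get(CORRELATION_HEADER, "unknown")
    line = f"[{service}] cid={cid} {message}"
    logging.getLogger(service).info(line)
    return line
```

With shared helpers like these, every module logs in the same shape, which pays off later in log aggregation and tracing.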
Scaling of software development
Dividing the application into independent modules also allows more developers to work on it. The principles of loosely coupled microservices help a lot here. Thanks to them we avoid problems caused by backlog accumulating around individual people or teams.
Design rules (especially the vertical division of teams) and data independence, combined with the automation of build and deployment processes, mean that microservices respond particularly well to the need to scale software development.
It is worth remembering, however, that parallel work of developers is also possible in modular and layered architectures in general, in particular SOA, and not only in microservices.
The problem with scaling development does occur, however, in a monolithic solution once it reaches a certain, more significant size. This model limits the ability of teams to work independently – tasks, deliveries and production deployments must be coordinated. Parallel changes to the same code, implemented by different teams, practically guarantee deployment and maintenance problems.
Maintaining a high ability to change
Creating solutions quickly, in the long run, would not be possible without maintaining a high ability to change. This property of a solution results from many factors. In our opinion, the most important is the transparency of the solution – that is, the ease of assessing the structure and operation of our system.
Solution transparency
The fast creation of new functions, the introduction of changes and the possibility of many teams working in parallel depend directly on the transparency of our solution. If we do not understand the system and the responsibilities of individual modules, it is difficult to make modifications without the risk of errors and side effects. Individual changes can degrade the system, making later development and maintenance harder. As the size and scope of functionality increase, this becomes an increasingly serious problem. How do microservices fare in this area? The answer is not clear-cut.
What does this look like in a monolith?
In a monolith it is very easy to lose transparency. This is not due to the complexity of the architecture (which in a monolith is generally simple), but to the many unclear relationships that are a consequence of the lack of independence between functional modules. On a larger scale and after a longer period of development, the monolith becomes a tangle of numerous requirements, implemented correctly or incorrectly. It is difficult to determine the scope of any change, and often no one remembers the original purpose of the various mechanisms found in the code.
And the microservices?
Microservices deal with the above problems very well, but the question of complexity remains – will a solution with a few hundred or a few thousand separate modules be transparent? Not necessarily. The structure of each module may be clear, but the links between them do not have to be. On a larger scale this is a significant cognitive load. So can anything be done about it? It depends on the actual system architecture.
The architecture
If the architecture is completely flat (the system as a set of equal services), looking at it will not tell us clearly which components are used for what or where their responsibilities lie. In this case, quite a detailed (and exhaustive) analysis would be necessary.
The situation changes for the better if we introduce some hierarchy within the set of services. Separating certain functional areas, perhaps in the form of system layers, and assigning individual microservices to them allows system-wide characteristics to be defined at a larger scale. Depending on the area, services may be subject to dedicated requirements, which on the one hand constrain them, but on the other greatly facilitate analysis of the whole solution. We recommend this approach.
A framework imposed on the solution in areas such as security, logging, configuration, data validation, error handling and monitoring is, in our opinion, simply necessary. In this way we reduce the risk of omitting some of the cross-cutting requirements, or of implementing them inconsistently.
Communication between services
Another issue that affects the transparency of the solution is the way modules communicate. The easiest interactions to understand are those based on synchronous calls and a central orchestrating component. At the communication level this approach is the most transparent, but unfortunately it also has a number of drawbacks, which we mentioned in the second part of the article, in particular low resilience to failures and overload.
Other communication variants, based on asynchronous messaging, are better in terms of resilience; from a static perspective they do not reduce transparency, but analyzing the processing becomes more complicated.
Queue- or event-based systems, especially without a separate orchestrator, unfortunately mean a more complicated communication architecture. Of course, this type of cooperation has its advantages, mainly the loose coupling of modules, but it must be remembered that transparency is not this model's strong point.
Orchestration vs Choreography
Even in the queue-based model, it is worth considering an orchestrator that coordinates the business processes. In the choreography model, the business flow results from the autonomous, independent actions of individual modules and is difficult to follow; the actual business process remains de facto hidden. However, if we decide on the choreography paradigm (it has its advantages), dedicated tools will be very useful: by aggregating processing traces from the different components, they can present the entire workflow in one place.
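The difference can be illustrated with a deliberately simplified sketch (the order flow and service functions are hypothetical): in orchestration one component drives the whole process, while in choreography each step merely reacts to the event emitted by the previous one.

```python
# Toy steps of a hypothetical order process; each returns an updated copy.
def reserve_stock(order):
    return {**order, "stock": "reserved"}

def charge_payment(order):
    return {**order, "payment": "charged"}

def ship(order):
    return {**order, "shipped": True}

# Orchestration: one central component knows and drives the entire flow.
def orchestrated_order(order):
    for step in (reserve_stock, charge_payment, ship):
        order = step(order)
    return order

# Choreography: each service reacts to an event and emits the next one;
# the full flow exists only implicitly, in the event -> handler mapping.
HANDLERS = {
    "order_placed": (reserve_stock, "stock_reserved"),
    "stock_reserved": (charge_payment, "payment_charged"),
    "payment_charged": (ship, None),
}

def choreographed_order(order):
    event = "order_placed"
    while event:
        handler, next_event = HANDLERS[event]
        order = handler(order)
        event = next_event
    return order
```

Both variants end in the same state, but only the orchestrated one lets you read the business process from a single place – which is exactly the transparency trade-off described above.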
Documentation
To maintain the ability to change in an evolving system, detailed documentation of both the architecture itself and the business processing models is basically essential, regardless of the communication method.
The documentation should describe the individual layers or areas of the system and their responsibilities, the characteristics of the individual modules together with the functions they provide, the APIs of these functions (including data formats), and the entire workflows combining processing across the different modules.
Apart from the documentation itself, it is important that the actual system complies with it – for example, it is advisable to validate the messages exchanged between components, verify the quality of the data processed and stored in the system, use architecture monitoring tools, and so on.
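A minimal sketch of the first of these measures, message validation, might look as follows (the schema and field names are illustrative; real systems typically use JSON Schema, Avro or similar for this):

```python
# Illustrative schema for a payment message exchanged between two services.
PAYMENT_SCHEMA = {
    "customer_id": int,
    "amount": float,
    "currency": str,
}

def validate(message: dict, schema: dict) -> list:
    """Return a list of violations; an empty list means the message is valid."""
    errors = []
    for field, expected_type in schema.items():
        if field not in message:
            errors.append(f"missing field: {field}")
        elif not isinstance(message[field], expected_type):
            errors.append(f"wrong type for {field}: expected {expected_type.__name__}")
    return errors
```

Rejecting non-conforming messages at the service boundary keeps documentation and reality from drifting apart.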
Flexibility to replace components
A great advantage of microservices is that individual components can be replaced completely independently. This does depend on the quality of documentation, but as long as its interface is maintained, a module can completely change its implementation without any changes to other components.
Modifying the interface of a module, in turn, can be done smoothly, based on versioning. We can run different versions of a service simultaneously, supporting both the old and the new interface. However, we recommend keeping different versions of a service only temporarily, while the change is being rolled out. In the long run, many variants of the same service increase maintenance costs, complicate the solution and make modifications harder.
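As a sketch of what the temporary coexistence of versions can look like (the handler names and response formats are hypothetical), a service can route each request to the handler matching the requested interface version:

```python
# Old interface: flat response format.
def get_customer_v1(customer_id):
    return {"id": customer_id, "name": "Jan Kowalski"}

# New interface: structured name; both run side by side during the rollout.
def get_customer_v2(customer_id):
    return {"id": customer_id, "name": {"first": "Jan", "last": "Kowalski"}}

ROUTES = {"v1": get_customer_v1, "v2": get_customer_v2}

def handle(version, customer_id):
    """Dispatch to the handler for the requested interface version."""
    handler = ROUTES.get(version)
    if handler is None:
        raise ValueError(f"unsupported API version: {version}")
    return handler(customer_id)
```

Once all consumers have migrated to v2, the v1 route is removed – in line with the recommendation to keep parallel versions only temporarily.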
The possibility of using different technologies
The possibilities for change go further. In microservices it is very easy to change the technology (or framework) of a given service. This obviously increases the changeability of the whole solution. We can use new technologies, best suited to the solution at a given moment.
This is a particularly big advantage over the monolith, where the decision about the framework often means a marriage for better or for worse, with no possibility of change, even when the technology becomes obsolete.
The possibility of using different technologies in microservices is certainly a great value. However, one has to remember not to abuse it. Too much fragmentation of technology increases the cost of maintenance and again, makes changes in the system more difficult, not easier.
Ability to deliver software frequently and quickly
Let's assume that we already have this desired ease of introducing changes in our microservice solution and are able to implement them quickly. Then it is important to be ready to deliver these changes frequently and quickly to production.
Testability
Before a change can be delivered, we should test it. Each separate microservice, due to its limited size, is certainly relatively easy to test internally. With a documented API, it is also not difficult to determine to what extent, from an external perspective, a given function works correctly. As a result, in our opinion, microservices can be described as a highly testable architecture.
However, system and integration tests may be a problem. Knowledge about entire business processes may be dispersed. It is important – we keep coming back to this – to ensure the quality and completeness of documentation, which makes preparing such tests easier. Even with high-quality documentation, however, tests verifying the full course of a business process can be difficult.
Nevertheless, we should not give up on them. Although the independence of individual modules minimizes the risk that an error in a single component will make the entire solution unavailable, an incorrect flow of a business process still remains a big problem.
Thanks to the independence of microservices, deployment preparation is significantly shortened. No extensive coordination is needed; one team, responsible for a given small microservice, owns the entire process (from code approval through testing and quality assessment to provisioning the deployment artifacts).
Individual changes can also be delivered and launched on production independently. They are usually small, so they can be very frequent. Thanks to the possibility of running several versions of a service in parallel, the rollout itself is also lightweight and does not interrupt the availability of services to the customer.
A certain disadvantage, or rather a consequence of the fact that we can have a large number of deployments, even in parallel, is that dedicated CI/CD automation tools are basically a necessity. This also means investing in team competences and in building and maintaining that process.
The monolith alternative makes delivering changes to production much harder and slower. To ship a fix in one component, the whole application must be built and delivered. Production rollout requires stopping, and often a lengthy restart of, the whole solution. Because this is difficult, such changes are introduced less often. And that further increases the risk of each change, because the modifications become deeper and harder to roll back.
Easy production maintenance
Mature microservice applications are complicated. This is unavoidable given the number of components and the relationships between them, so they will certainly not be completely easy to maintain. How do you cope in such an environment?
Control over business and technical processes
The first issue is the ability to effectively diagnose error situations and to check that processing complies with the intended business process. Here, too, it is necessary to automate and to use appropriate tools for monitoring, tracing, log aggregation and performance analysis (such as ELK/EFK, Prometheus, Grafana, Jaeger and others).
As we mentioned, depending on how the modules communicate and whether the orchestrator is used, the degree of complexity of business processes can vary greatly. In any case, good documentation is particularly important.
In terms of maintenance, microservices have their advantages, such as unnoticeable restarts without any downtime, but a large number of system components, scattered across many machines in the infrastructure, presents a great challenge for production maintenance, even with appropriate tools. Be aware of this.
Fault tolerance
Another important issue in production maintenance is fault tolerance. In this field, microservices perform very well thanks to two characteristics resulting from the replication and independence of the modules: the lack of a single point of failure and fault isolation. Of course, both of these features are in sharp contrast to the monolith, where the erroneous behavior of one module can make the entire application unavailable.
The degree of fault tolerance of a microservice architecture can vary. It depends on how the services communicate: asynchronous, queue-based and event-stream communication perform very well here; the synchronous model with an orchestrator somewhat worse.
There are many design patterns that increase the resilience of a solution (also in the synchronous model), such as Circuit Breaker, Bulkhead, Retries, Failover and Failback. It is particularly important to protect system components against overload, i.e. to use timeouts, limit thread pools and queue capacity, or apply so-called backpressure – a mechanism for passing information about overload back to the initiator.
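To make the first of these patterns concrete, here is a minimal Circuit Breaker sketch (a simplified illustration of the pattern, not a production implementation; the injectable clock exists only to make it testable):

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures the circuit opens and calls
    fail fast, until `reset_after` seconds pass and one trial is allowed."""

    def __init__(self, threshold=3, reset_after=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                # Fail fast instead of hammering a service that is down.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

After the reset period, one trial call is allowed through (the "half-open" state); success closes the circuit, another failure reopens it.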
The distributed nature of the microservice environment means, however, that failures can be more frequent – the network itself is a barrier, and network connections can be unreliable. What matters is that despite single failures, the solution as a whole can, and should, keep working properly.
Efficient use of computing resources
We will look here at two aspects of efficiency. The first is the overhead that a given architecture model introduces into the solution. The second is scalability, i.e. the effectiveness of handling increasing traffic by adding computing resources.
The distributed nature of the solution, with independent, isolated modules, means a lot of network transmission and the accompanying serialization / deserialization of transmitted data. This creates considerable overhead, especially compared to the monolith, where all modules can fit in a single process.
The cost of network transmission can be minimized by preferring to place cooperating components on the same physical machines. Serialization / deserialization costs can be reduced by using efficient data representations – e.g. instead of XML / JSON we can use a binary protocol (ProtoBuf, or the even more efficient Cap'n Proto or FlatBuffers). The monolith's advantage in this respect will remain a fact, however.
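The savings from a binary representation can be shown with the standard library alone (here `struct` stands in for ProtoBuf or Cap'n Proto, which add schemas and evolution on top of the same idea; the record is illustrative):

```python
import json
import struct

# The same record encoded as JSON text versus a fixed binary layout.
record = {"customer_id": 123456, "amount": 99.99, "active": True}

json_bytes = json.dumps(record).encode("utf-8")

# Fixed layout: 8-byte int, 8-byte float, 1-byte bool = 17 bytes total.
binary_bytes = struct.pack(
    "<qd?", record["customer_id"], record["amount"], record["active"]
)

# The binary form is less than a third of the JSON size here,
# and it also skips the cost of parsing text on the receiving side.
```

The gap grows with nested structures and repeated field names, which JSON re-transmits in every message while schema-based binary formats do not.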
A second advantage of the monolith may be framework overhead. This depends on the technology, but the JVM, for example, has its memory requirements, which must be multiplied by the number of components – and in a microservice architecture that number will undoubtedly be higher.
Both of these issues are worth considering, but let's remember that in most applications the cost of the actual processing is much higher than the cost of transmission. The same applies to memory requirements.
Scalability
Very high scalability is one of the most important values of microservices. Small, independent components can be easily replicated, adapting the solution to current needs, even in real time. In this respect, microservices perform much better than a monolith, and in today's fast-changing world, the ability to easily scale is more important than pure performance.
In a monolith, scaling is not effective. We can scale up, switching to more powerful machines, but the possibilities of such scaling end quickly. We can scale horizontally, running more instances of the entire application, but this is a significant cost, and all traffic goes to one database anyway. In addition, different components have different needs: for some, CPU is the limit, for others, memory. In a monolithic architecture we cannot scale each module separately, and therefore we do not use the available resources efficiently.
Microservices have an advantage here. Small components allow you to adjust the number of instances for each of them as needed. We don’t have to run all functionality in such a number of copies that is required by the most demanding component. Individual modules are also lightweight, easier to start and stop. Thanks to this, dynamic scaling is more effective and easier to implement.
Microservices are also better suited to new, distributed and replicated databases. They make it easy to divide (shard) the responsibility of a module's instances according to the scope of data handled (e.g. by customer ID key). This also gives us great scaling possibilities in the dimension of the data served.
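A sketch of such key-based routing (the instance names are illustrative): hashing the customer ID gives a stable assignment of every customer to exactly one shard-owning instance.

```python
import hashlib

# Illustrative pool of module instances, each owning one shard of the data.
INSTANCES = ["orders-0", "orders-1", "orders-2", "orders-3"]

def shard_for(customer_id: str, instances=INSTANCES) -> str:
    """Stable hash-based routing: the same customer always hits the same shard."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(instances)
    return instances[index]
```

A real deployment would use consistent hashing so that adding an instance remaps only a fraction of the keys, but the principle of dividing responsibility by key is the same.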
Matching the requirements of the cloud
Using cloud services is today a standard model for IT teams. How do microservices look in this area? It depends on the type of service. In the SaaS model, the recipient does not really see how the business functions are implemented. Serverless or PaaS models impose a solution platform; as a client, we cannot decide on the architecture of the solution. In the IaaS model, we get a ready-made virtual machine on which we can install any software, much like in our own server room, but then we do not fully use the advantages of the cloud.
Cloud Native is the fastest growing model of cloud services today, based on the container orchestration platform (like Kubernetes). Microservices are part of this approach, due to their architecture, oriented around the cooperation of loosely coupled, independent components that can be easily managed and scaled in a distributed environment.
In addition, the mechanisms used in the microservice architecture, related to the automation of component delivery and deployment processes (CI/CD) are perfectly suited for use in the cloud.
Microservices also benefit from the cloud – it allows for quick deployment and dynamic allocation of resources, supports solution resilience and permits, in principle, unlimited scaling. To sum up, microservices and Cloud Native are a great combination.
Is that all?
The objectives above appear most frequently in discussions about introducing microservices. We would like to mention two other very important aspects.
Security
A crucial requirement for the IT department in any organization is the security of solutions and data. The distributed architecture of microservices is a big challenge here. Individual services communicate with each other over the network. On the server side, it is essential to verify service calls, i.e. to control which component, and on whose behalf, submitted a given request.
Different mechanisms are used for this – e.g. digital signatures or authorization tokens. Unfortunately, each of these solutions carries a performance cost. You need to balance the expected security against the required performance of the solution.
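As a minimal sketch of one such mechanism (key distribution and rotation are deliberately omitted here), a caller can HMAC-sign the request body with a key shared with the callee, which then verifies the signature before processing the request:

```python
import hashlib
import hmac

def sign(body: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 signature of the request body."""
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify(body: bytes, key: bytes, signature: str) -> bool:
    """Recompute and compare; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(body, key), signature)
```

The performance cost mentioned above is the extra hash computation on both sides of every call; token-based schemes trade that for the cost of token issuance and validation.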
Data security is another issue. For customer data, such care is now a regulatory requirement under the GDPR. Anonymisation or pseudonymisation of data may be necessary. Depending on the industry, the regulator may also require additional mechanisms, such as data-in-transit or data-at-rest encryption. This again, unfortunately, means a performance overhead, but it is sometimes necessary.
In a microservice solution, these challenges are particularly serious due to the independence of individual modules and separate (often duplicated) data. An additional level of complication appears when using cloud solutions from an external provider. The topic of microservice security issues is very broad and deserves a separate article.
Costs
Costs are always a very important issue in maintaining a solution. Are microservices attractive in terms of cost? They certainly require considerable initial investment. If we are creating a small solution, these expenditures are hard to justify – unless we are building our solution with long-term, iterative development in mind. In that case, the higher initial cost pays off later in easier changes and in delivery and maintenance efficiency. At the beginning, however, the cost of building and maintaining the microservice architecture will be significant.
The second scenario justifying the use of microservices, already at the beginning of the development of our system, is the intention to use cloud services. The use of the cloud may have different motives, but one of them is precisely the desire to reduce costs. And because the microservices fit perfectly into the cloud, adopting this architecture makes a lot of sense.
In addition, in the context of costs, it is worth bearing in mind that a Cloud Native solution based on the Kubernetes orchestration platform is currently offered by all major cloud service providers. It can also be run in a private cloud (e.g. using OpenShift). This freedom of choice makes the offers quite competitive.
What's more, it is possible to combine in one system (on the same platform, as part of a so-called hybrid cloud) many distributed services, with some instances running in a private cloud and others in the clouds of different providers. This gives great flexibility, allowing the solution to be adapted cost-effectively to possibilities and requirements.
Summary
Perhaps microservices do not allow a system to be developed very rapidly from scratch; they require considerable initial investment. However, their development does not have to slow down as the complexity of the system grows. If we keep the solution transparent, it is easy to add new functionality even to a complicated application, and many programmers can work efficiently on the solution independently of one another.
With the use of appropriate automation mechanisms, the process of preparing and delivering production solutions can be very fast and effective. Production maintenance itself, due to its level of complexity, will not be easy, but, thanks to the support of dedicated tools, can be organized at a high level.
Microservices have their communication overhead; it can be limited to some extent, but their great static and dynamic scaling capabilities matter more. These allow the whole solution to operate efficiently, stably and resiliently. The same characteristics make the microservice architecture ideally suited to cloud services, in particular Cloud Native.
It is necessary to maintain an understandable architecture, a clear definition of the tasks of individual microservices, and consistent semantics in the APIs provided by the individual modules. This approach allows high changeability to be maintained and facilitates maintenance.
It is also important to keep the balance between the independence of the teams taking care of individual microservices and maintaining the consistency of the solution, including the implementation of security principles and other cross-cutting concerns.
Microservices have their cost, so is it worth introducing this architecture? In our opinion, for larger systems, especially cloud-based ones, it is definitely worth it – but you need to adapt the solution to your needs, not treat introducing microservices as the goal of the entire undertaking.