Getting started with Microservices (1 of 3)

By: Michael Ruxsaksrikul, Cloud Engineer

This blog is the first in a three-part series on Microservices, expanding on common components, architecture strategy, application decomposition, advanced scaling strategies, branching, and deployment strategies. We’ll also touch on leveraging continuous integration and continuous deployment (CI/CD) as well as orchestration/pipelines per the DevOps model.

In this post, we’ll cover the basics of what a Microservice architecture is, when we might use it, and why. Read our previous blog posts on Amazon’s Application and Network Load Balancers; they are a key component of moving to a Microservices architecture. Containerization is also common in many Microservice architectures, so head over to George Rolston’s blog on Elastic Container Service (ECS) for our take on one of Amazon’s container service offerings.

Microservices architecture, or simply Microservices, is easier to understand once we know where it came from. The traditional monolithic application architecture has been the gold standard for the past two decades, built on the Service Oriented Architecture (SOA) standards that were emerging at the time. An example of this is our sample JHC blog application diagram (Figure 1).

Figure 1

Our sample application comprises five functions: authentication, user data, submission and approval, configuration and content management, and publishing. Here’s how this model might look in a traditional architecture (Figure 2): the load balancer distributes the workload across a three-instance scaling group running our blog application. Let’s look at how our architecture and application can be affected in a few scenarios.

Figure 2

Scenario 1 – Resource Usage

Our application receives a huge spike of activity from blog content developers who are creating and editing content about news recently released at AWS re:Invent. Instances 1, 2, and 3 are heavily utilized. The publishing team members and management are not able to authenticate to approve new blogs due to server resource contention. Our scaling policy kicks in and launches another instance to accommodate the demand, and now all is well. Or is it?

Analysis

There is no question this strategy works, but it isn’t the most efficient use of resources. The architecture can create redundant and underutilized compute and memory resources: the other components of the application sit mostly idle on the new instance, because it is primarily fielding the surge of user authentication requests. The inefficiency is that the application must scale out an entire server instance (infrastructure) and another copy of the full application stack (application architecture) just to absorb a surge in authentication requests.
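For context, the “scaling policy” in Scenario 1 is typically an EC2 Auto Scaling target-tracking policy on the monolith’s instance group. Here is a minimal boto3 sketch; the Auto Scaling group name and target value are hypothetical, and the point to notice is that the unit of scaling is a whole instance running the entire application stack.

```python
# Sketch: a target-tracking scaling policy on the monolith's Auto Scaling group.
# Group name and target value are hypothetical; the key takeaway is that the
# unit of scaling is a whole instance running the entire application stack.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="jhc-blog-monolith-asg",  # hypothetical ASG name
    PolicyName="scale-on-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0,  # launch another full instance when average CPU exceeds ~70%
    },
)
```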

Scenario 2 – Bug Fix

After the publishing team begins approving some of the new and modified blog content, they run into a bug that causes the application to crash! The development team reviews the error and identifies a fix for the approval process code. The fix requires a new deployment of the entire application, a process that normally takes 4 to 6 hours and delays additional blog updates.

Analysis

The deployment duration is tied directly to the fact that the application’s functions are coupled together and deployed as one service. Because of that tight coupling, even a small bug fix cannot be shipped on its own; it must ride along with a full deployment. This inefficiency affects deployment time for bug and emergency fixes.

Scenario 3 – New Feature Set

Blog activity has continued, and new readers are following the blog daily. Feedback from those readers has reached management: they would like to be able to ask questions and provide feedback on the blog in a forum style. The executives ask the development team to create this new feature and release it before the annual code freeze for the holidays. After deploying to pre-production environments, the team encounters integration issues with the other components of the application. New code has to be written to address the issue, and the release window is missed.

Analysis

Deployment duration, integration, testing, and user acceptance testing can all have a drastic effect on the release window in a monolithic application architecture. The unexpected integration issues caused schedule creep, and the team was not able to have the code ready for the next release. This inefficiency affects the ability to deliver new feature sets in an agile manner.

Now let’s see how Microservice architecture can benefit our application! Before we look at how microservices play out in the scenarios above, let’s define what a microservice is in the first place. The characteristics of a microservice-based application architecture are:

• A collection of small, autonomous, self-contained services, each built to deliver a single business function/capability.

• The services are launched and scaled independently.

• There is a common, agreed-upon form of communication between services (commonly lightweight APIs; see the sketch after this list).

• The development and lifecycle of each service is managed by a smaller team, and the service can use a codebase or framework independent of other applications/services.
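As an illustration, here is a minimal sketch of what one such service might look like, using only Python’s standard library. The service name, port, endpoint path, and response shape are assumptions for this example, not part of our sample application.

```python
# user_service.py - a minimal, self-contained "User Service" sketch.
# It owns a single business capability (user lookup) and exposes it
# over a lightweight HTTP/JSON API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory user store; a real service would own its own data store.
USERS = {"mruxsaksrikul": {"role": "author"}}

class UserServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Lightweight API: GET /users/<username> returns that user's profile as JSON.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "users" and parts[1] in USERS:
            body = json.dumps(USERS[parts[1]]).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "not found"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The service launches (and later scales) independently of the rest of the application.
    HTTPServer(("0.0.0.0", 8001), UserServiceHandler).serve_forever()
```

Each service in a microservice design follows the same pattern: a small codebase, its own data, and an HTTP/JSON contract that the other services and the UI call.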

Let’s see how our application might look in a microservices form (Figure 3). After refactoring, the application consists of a web front-end user interface (UI) as the entry point to the blog, with the various functionalities restructured as several decoupled services: a User Service to handle all user-related functions, a Submission and Configuration/Content Service, and a Publishing Service to handle publishing activities to the blog web site. Let’s revisit the previous scenarios with our new design (after a quick sketch of how the UI might call one of these services):

Figure 3
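To make the decoupling concrete, here is a hedged sketch of the blog UI calling the User Service’s API. The host name, port, endpoint, and payload are assumptions carried over from the User Service sketch above, not a prescribed interface.

```python
# blog_ui.py - sketch of the web UI calling the User Service over its HTTP API.
# The host/port and endpoint are hypothetical and match the User Service sketch above.
import json
from urllib.request import urlopen
from urllib.error import HTTPError

# In practice this would be resolved through service discovery or DNS.
USER_SERVICE_URL = "http://user-service:8001"

def get_user_profile(username):
    """Fetch a user's profile from the User Service; return None if the user is unknown."""
    try:
        with urlopen(f"{USER_SERVICE_URL}/users/{username}") as resp:
            return json.loads(resp.read())
    except HTTPError:
        return None

if __name__ == "__main__":
    print(get_user_profile("mruxsaksrikul"))
```

The UI only depends on the User Service’s API contract, not on its code, database, or deployment schedule.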

Scenario 1 (Microservice Architecture) – Resource Usage

Our application receives a huge spike of activity again, but this time users are not affected. The User Service is scaled independently from the rest of the application, so we no longer see users unable to authenticate and log in. We also make better use of server resources, as a smaller instance size is used and scaled to meet the actual demand for that one function.
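If the User Service ran as an ECS service (as in George Rolston’s ECS post), its independent scaling could be configured roughly like the sketch below. The cluster and service names, capacity bounds, and target value are all hypothetical.

```python
# Sketch: scale only the User Service (an ECS service) on its own CPU metric.
# Cluster/service names, capacity bounds, and target value are hypothetical.
import boto3

scaling = boto3.client("application-autoscaling")
resource_id = "service/jhc-blog-cluster/user-service"  # hypothetical ECS cluster/service

# Allow the User Service's task count to move between 2 and 10 on its own.
scaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Track CPU for this one service only; the rest of the application is untouched.
scaling.put_scaling_policy(
    PolicyName="user-service-cpu-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "TargetValue": 70.0,
    },
)
```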

Scenario 2 (Microservice Architecture) – Bug Fix

The publishing team runs into another bug, and the development team produces a fix. This time, instead of spending 4 to 6 hours deploying the entire application, they deploy only the Publishing Service, and it takes 30 minutes.
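Deploying just the Publishing Service could be as simple as pointing its ECS service at a new task definition revision, as in this hedged sketch; the names and revision number are hypothetical, and in practice a CI/CD pipeline would perform this step.

```python
# Sketch: roll out a fix to the Publishing Service alone by updating its ECS service
# to a new task definition revision. Names and revision numbers are hypothetical.
import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="jhc-blog-cluster",              # hypothetical cluster
    service="publishing-service",            # only this service is redeployed
    taskDefinition="publishing-service:42",  # hypothetical new revision containing the fix
)
# ECS replaces the Publishing Service tasks with the fixed revision;
# the User, Submission/Content, and UI services keep running untouched.
```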

Scenario 3 (Microservice Architecture) – New Feature Set

The executives revisit the need for blog readers to interact on the blog. They would like readers to authenticate with their social media accounts to engage on the blog site, so they ask the User Service development team to add a new feature set that federates authentication with all the major social media APIs. They also ask the Blog User Interface development team to create a new feature set that provides a forum-style interface for these new interactions. The User Service team delivers its updates to the development environment in parallel with the UI team’s development activities. The UI team delivers its updates to the development environment as well and begins integration testing. Integration testing is successful, and both feature sets are deployed on schedule.

This time, we covered the difference between monolithic and microservice application architectures. We also walked through a few scenarios in both architectures and reviewed some history on service-oriented architecture. Keep an eye out for the next two blog posts on Microservices, where we cover decomposition strategies for turning a monolithic application into modular services, advanced scaling activities, and DevOps deployment strategies for Microservices. Until then, Happy Clouding!

If you or your organization has more questions regarding Microservices, reach out to sales@jhctechnology.com to set up some time to chat through your thoughts.

Michael Ruxsaksrikul is a Cloud Engineer at JHC Technology. Please connect with Michael at mruxsaksrikul@jhctechnology.com or through LinkedIn.
