Logging in Microservice Architecture

Debugging is an essential part of application development, and this holds true for microservice architectures as well. The following article stems from the experience of the Alumnus team as they developed an e-health application.

Key Technologies:

Node.js and Express.js, AWS CloudWatch, OpenSearch and S3, and the winston and winston-aws-cloudwatch node modules

Introduction:

Complexity increases with the rise in the number of services in a microservice architecture. Bugs and individual service failures can be tricky to deal with. Digging for hours to find the root of an unnamed error can be a daunting and unproductive task.

To effectively deal with the challenges of system errors, request chain breakdowns, or even simply to stay on top of the system architecture, logging is a vital tool. It is undeniably the cornerstone of maintaining and debugging one’s app efficiently.

The easy-to-use console.log() is not that useful when debugging an issue in a sizeable microservice architecture. Firstly, it is virtually impossible to centralize the logging mechanism if it is handled by the console. Secondly, carrying forward the idea of a central log file/database, it is advantageous to be able to attach as much information as needed to any log instance. This is not achievable through console logging without awkward parsing.

Logs that have been aggregated together generate immense value because of their centralized nature. What if one would like to keyword-search log instances? What if one would like to format log data to be meaningful, readable, and easily digestible? With scattered logs, this is simply too tricky to achieve.
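
For illustration, compare plain console logging with a structured logger such as winston (used throughout this article). This is a minimal sketch; the metadata field names are hypothetical:

const { createLogger, format, transports } = require('winston');

const logger = createLogger({
  format: format.json(), // emit each entry as a parseable JSON object
  transports: [new transports.Console()]
});

// console logging: free-form text that is hard to parse or centralize
console.log('user 42 failed to log in');

// structured logging: searchable metadata travels with every entry
logger.warn('login failed', { userId: 42, service: 'auth-service' });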

What Is Centralized Logging?

A centralized logging system is a type of logging solution designed to collect logs from multiple servers, consolidate the data, present a unified view of the logs, and allow analysis and extraction of insights easily. Some of the benefits it provides include the following:

  • Storing log data from multiple sources in a central location.
  • Enforcing retention policies on the logs so they are available for a specific time period.
  • Easily searching inside the logs for important information.
  • Generating alerts based on metrics one defines in the logs.
  • Sharing one’s dashboard and log information with others simply and quickly.
  • Lowering costs while increasing storage and backup capacity for historical data.
  • Setting up security alerts and granting login access to particular users without granting server root access.

This article will break down the best practices, including a step-by-step approach to injecting a solid logging mechanism into a Node.js/Express.js application. It will also walk through setting up a logging solution based on Elasticsearch and Kibana. Lastly, it will cover archiving logs to an S3 bucket.

Process Flow:

  • Individual services will generate logs using winston and transport them to AWS CloudWatch using winston-aws-cloudwatch.
  • cls-rtracer will be used to generate a unique ID for the logs of every request.
  • CloudWatch will stream the logs to AWS OpenSearch (successor to the Elasticsearch Service).
  • Logs will be viewed and queried using the OpenSearch Dashboards (Kibana).
  • Logs will be archived to S3 from CloudWatch each day.


Detailed Steps for Setting up Distributed Logging

Set up the Node project using npm init and install Express using npm install express --save.

1. Setup Logger

Install other dependencies using npm install --save winston winston-aws-cloudwatch cls-rtracer

  • Make a file logging.js and paste the code below:

const { createLogger, format, transports } = require("winston"),
    CloudWatchTransport = require('winston-aws-cloudwatch'),
    rTracer = require('cls-rtracer');

const { combine, timestamp, printf } = format;

var NODE_ENV = process.env.NODE_ENV || 'development';

// console format: prepend the cls-rtracer request id (if any) to each line
const rTracerFormat = printf((info) => {
    const rid = rTracer.id();
    const ridText = rid ? ' [request-id: ' + rid + ']' : '';
    return info.timestamp + ' ' + info.level + ridText + ': ' + info.message;
});

// creating the logger
const logger = createLogger({
    format: combine(
        timestamp(),
        rTracerFormat
    ),
    transports: [
        new transports.Console({
            timestamp: true,
            colorize: true,
        })
    ]
});

// AWS CloudWatch config for transporting logs to CloudWatch
var config = {
    logGroupName: 'my-log-group',
    logStreamName: NODE_ENV,
    createLogGroup: true,
    createLogStream: true,
    awsConfig: {
        accessKeyId: "YOUR_ACCESS_KEY_ID",
        secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
        region: "AWS_REGION"
    },
    formatLog: function (item) {
        let reqId = rTracer.id();
        let reqIdText = reqId ? ' [request-id: ' + reqId + ']' : '';
        return item.meta.timestamp + ' ' + item.level + reqIdText + ': ' + item.message + ' ' + JSON.stringify(item.meta);
    }
}

// adding the CloudWatch transport to the logger
logger.add(new CloudWatchTransport(config));

logger.level = process.env.LOG_LEVEL || "silly";

// stream interface so HTTP request loggers (e.g. morgan) can write through winston
logger.stream = {
    write: function (message, encoding) {
        logger.info(message);
    }
};

exports.logger = logger;
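
The steps below run node server.js, but the entry point itself is not shown in the original listing. A minimal sketch of such a server.js, assuming the logging.js above and an Express app on port 3000 (the route is hypothetical), could look like this:

const express = require('express');
const rTracer = require('cls-rtracer');
const { logger } = require('./logging');

const app = express();

// attach a unique request id to every request's async context,
// so the rTracer.id() calls in logging.js can pick it up
app.use(rTracer.expressMiddleware());

app.get('/', (req, res) => {
  logger.info('GET / handled');
  res.send('Hello from the logging demo');
});

app.listen(3000, () => logger.info('Server listening on port 3000'));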
        

2. Setup CloudWatch

  • Go to CloudWatch > Logs > Log groups, then click on Create log group (top right).
  • Enter the log group name my-log-group (the same as logGroupName in the config above).
  • Set the retention setting to 3 days (keeping logs in CloudWatch is expensive).
  • Then click on Create. The same setup can also be scripted with the AWS CLI, as sketched below.
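
For those who prefer scripting over the console, the equivalent AWS CLI commands (not part of the original walkthrough) would be roughly:

aws logs create-log-group --log-group-name my-log-group
aws logs put-retention-policy --log-group-name my-log-group --retention-in-days 3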

Now run the local server using node server.js.

Go to the browser and open http://localhost:3000.

The logs will be displayed on the backend console.

One can also see the logs in CloudWatch inside my-log-group (created above).


Here the logs can be seen with the details provided in the code above. From these logs, one can read the timestamp, log type, request ID, service name, page name, line number, and message.

3. Setup OpenSearch

  • Go to Amazon OpenSearch Service (successor to Amazon Elasticsearch Service).
  • Click on Create domain.
  • Enter the domain name application-log.
  • Select Development and testing as the deployment type.
  • Select the latest version.
  • Disable Auto-Tune.

In the Data nodes section:

  1. Select the instance type t3.medium.search
  2. Set the number of nodes to 1
  3. Set the storage size to 10 GB

  • In the Network section, select Public access.

In the Fine-grained access control section:

  1. Enable fine-grained access control
  2. Select Create master user
  3. Enter a username and password (remember these credentials; they will be needed to access the Kibana dashboard)
  4. Leave the other settings as they are and click on Create

It will take a few minutes to create the OpenSearch domain.
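
A rough AWS CLI sketch of the same domain creation (fine-grained access control options omitted for brevity; the engine version shown is an assumption, so verify the flags against your CLI version):

aws opensearch create-domain \
  --domain-name application-log \
  --engine-version OpenSearch_1.3 \
  --cluster-config InstanceType=t3.medium.search,InstanceCount=1 \
  --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=10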

4. Setup IAM Policy

  • Open IAM in AWS > Policies > Create policy > click on the JSON tab and paste the policy given below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "es:*"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:es:<AWS-region>:<account-id>:domain/application-log/*"
        },
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:<AWS-region>:<account-id>:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:<AWS-region>:<account-id>:log-group:/aws/lambda/LogsToElasticsearch_application-log:*"
            ]
        }
    ]
}

  • Then Next: Tags > Next: Review.
  • Enter the name application-log-policy.
  • Create policy. The same policy can also be created from the AWS CLI, as sketched below.
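
Assuming the JSON above is saved locally as application-log-policy.json (a hypothetical file name), the CLI equivalent is:

aws iam create-policy --policy-name application-log-policy --policy-document file://application-log-policy.json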

5. OpenSearch Service Subscription

  • Go to CloudWatch > Logs > Log groups > open my-log-group.
  • Click on Actions (top right) > Subscription filters > Create Amazon OpenSearch Service subscription filter.
  • Select application-log as the Amazon OpenSearch Service cluster.
  • Select application-log-policy as the Lambda IAM execution role.
  • Select JSON as the log format.
  • Enter None as the subscription filter name.
  • Select development as the log data to test.
  • Click on Start streaming.

6. Update Lambda Function

  • Go to AWS Lambda.
  • Open the LogsToElasticsearch_application-log lambda function.
  • Create a .env file inside the lambda and set:
  • AWS_SECRET_ACCESS_KEY = "Your-secret-key"
  • AWS_ACCESS_KEY_ID = "Your-access-key-id"
  • Then click on Deploy.
  • Run the local server using node server.js.
  • Go to the browser and open http://localhost:3000.
  • The generated logs will now be automatically synced to OpenSearch.

7. Setup OpenSearch Dashboard (Kibana)

  • Open Amazon OpenSearch Service > application-log > click on the OpenSearch Dashboards URL.
  • Enter the username and password (set at the time of OpenSearch Service creation).
  • Click on Visualize in the left menu > Create index pattern.
  • Enter the index pattern name cwl-* > Next step > select @timestamp as the time field > Create index pattern.
  • Select Query Workbench in the left menu > Run.
  • All tables/indexes will be listed below.
  • CloudWatch will create index names like cwl-YYYY.MM.DD.
  • Write the query SELECT * FROM <index-name> and then Run.

One can see all the logs in the resulting table. A slightly richer example query is sketched below.
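
OpenSearch SQL also supports filtering and limits. As an illustrative sketch (the index name is hypothetical, and the assumption here is that the streaming Lambda maps the log text to a @message field):

SELECT * FROM `cwl-2023.01.15` WHERE `@message` LIKE '%error%' LIMIT 50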

Export AWS CloudWatch Logs to S3

Log retention in CloudWatch is very expensive, which is why it is cost-effective to move old logs to an S3 bucket.

Earlier, the log retention period was set to 3 days; after this period, logs are automatically removed from CloudWatch.

Here, a bucket needs to be set up and configured with an automatic process that moves the logs to the bucket every day.

8. Setup S3 bucket

  • Go to S3 > Create bucket.
  • Enter the bucket name application-log-archive.
  • Select the AWS Region (it should be the same region as CloudWatch).
  • Uncheck/disable Block all public access.
  • Check the acknowledgement under the Block public access settings.
  • Enable Bucket Versioning.
  • Then Create bucket.
  • Open the created bucket (application-log-archive).
  • Go to Permissions > Edit bucket policy, paste the policy below, then Save changes.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.<AWS-region>.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::<bucket-name>"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.<AWS-region>.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket-name>/application-log/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
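
As an AWS CLI alternative (assuming the policy above is saved as bucket-policy.json, a hypothetical file name), the bucket can be created and the policy attached like this; note that regions other than us-east-1 need the LocationConstraint:

aws s3api create-bucket --bucket application-log-archive --region <AWS-region> \
  --create-bucket-configuration LocationConstraint=<AWS-region>
aws s3api put-bucket-policy --bucket application-log-archive --policy file://bucket-policy.json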

9. Link S3 bucket to CloudWatch

  • Go to CloudWatch > Logs > Log groups > open my-log-group.
  • Click on Actions (top right) > Export data to Amazon S3.
  • Enter the date range.
  • Select the S3 bucket application-log-archive > Export.

This is the manual process of archiving logs to S3; the AWS CLI equivalent is sketched below.
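
The same manual export from the CLI, with hypothetical epoch-millisecond timestamps for the date range:

aws logs create-export-task \
  --log-group-name my-log-group \
  --from 1672531200000 --to 1672617600000 \
  --destination application-log-archive \
  --destination-prefix application-log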

In the next step, the automatic process of archiving logs to S3 will be set up using Lambda and EventBridge.

10. Setup lambda for archiving logs

  • Go to Lambda > Create function.
  • Enter the function name application-log-export-to-s3.
  • Select the latest Node version.
  • Under Execution role, select Use an existing role.
  • Under Existing role, select a role that has the application-log-policy attached.
  • Create function.
  • Open the application-log-export-to-s3 lambda function, paste the code below inside it, and click on Deploy.

const AWS = require('aws-sdk')

const cloudConfig = {
    apiVersion: '2014-03-28',
    region: 'AWS-REGION', // replace with your region
    accessKeyId: "YOUR_ACCESS_KEY_ID",
    secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
}

const cloudWatchLogs = new AWS.CloudWatchLogs(cloudConfig);

exports.handler = async (event, context) => {
    const params = {
        destination: 'application-log-archive', // replace with your bucket name
        from: new Date().getTime() - 24 * 60 * 60 * 1000, // export the last 24 hours of logs
        logGroupName: 'my-log-group', // replace with your CloudWatch log group name
        to: new Date().getTime(),
        destinationPrefix: 'application-log', // must match the prefix allowed in the S3 bucket policy
    };
    return cloudWatchLogs.createExportTask(params).promise().then((data) => {
        console.log(data);
        return {
            statusCode: 200,
            body: data,
        };
    }).catch((err) => {
        console.error(err);
        return {
            statusCode: 501,
            body: err,
        };
    });
}
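
Before wiring up the schedule, the function can be tested once from the AWS CLI. Note that CloudWatch Logs allows only one active export task per account at a time, so a second immediate run may fail:

aws lambda invoke --function-name application-log-export-to-s3 response.json
cat response.json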

  • Go to the Function overview of the current lambda function.
  • Click on Add trigger > select EventBridge > Create a new rule.
  • Enter the rule name export-logs.
  • Select Schedule expression as the rule type.
  • Enter rate(1 day) as the schedule expression.
  • Click Add.

This trigger will invoke the application-log-export-to-s3 lambda function every day, and the logs will be automatically archived to the newly created S3 bucket. The same rule can also be created from the AWS CLI, as sketched below.
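
A rough CLI sketch of the same schedule (the target ARN placeholders must be filled in, and the lambda additionally needs a resource-based permission allowing events.amazonaws.com to invoke it):

aws events put-rule --name export-logs --schedule-expression "rate(1 day)"
aws events put-targets --rule export-logs \
  --targets Id=1,Arn=arn:aws:lambda:<AWS-region>:<account-id>:function:application-log-export-to-s3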

Note: One can export the logs either to Amazon OpenSearch or to an AWS S3 bucket, or do both, depending upon the requirements. Debugging logs in Amazon OpenSearch is easier than in S3, but it is a little more costly than an AWS S3 bucket.

AWS X-Ray

AWS X-Ray makes it easy for developers to analyze the behavior of their distributed applications with end-to-end tracing capabilities. One can use X-Ray to identify performance bottlenecks, edge-case errors, and other hard-to-detect issues. X-Ray supports applications of any type or size, in development or in production, from simple asynchronous event calls and three-tier web applications to complex distributed applications built on a microservices architecture. This enables developers to quickly find and address problems in their applications and improve the experience for end users. The features of X-Ray are as follows:

  • Simple Setup
  • Debugging
  • End-to-end-tracing
  • Service map
  • Server and client-side latency detection
  • Data annotation and filtering
  • Console and programmatic access
  • Security

AWS X-Ray is integrated with CloudWatch, so one can view logs, metrics, and traces in one place.
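
For a Node.js/Express service like the one above, instrumenting with X-Ray takes only a few lines using the aws-xray-sdk module. A minimal sketch (the segment name and route are hypothetical):

const AWSXRay = require('aws-xray-sdk');
const express = require('express');

const app = express();

// open an X-Ray segment for every incoming request
app.use(AWSXRay.express.openSegment('ehealth-app'));

app.get('/', (req, res) => res.send('ok'));

// close the segment once the routes have run
app.use(AWSXRay.express.closeSegment());

app.listen(3000);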


Authors

Soumyadipta De

Rabi Jaiswal


Comments

Dipak Chavda, Developer / Tech Lead @ HitachiVantara (2 years ago):

Really informative article and a guide for many developers. But a question: why do we need another wrapper module, winston-aws-cloudwatch? Couldn't the same thing be achieved with winston + aws-sdk?

Deeptendu D., Distinguished Member of Technical Staff at Alumnus Software Limited (2 years ago):

Well, microservices provide a lot of gain but a lot of pain too. You need the right tools and tricks to stay on top.
