Logging in Microservice Architecture
Alumnus Software Limited
Alumnus' expertise in IoT stands on the three pillars of Embedded, Networking and Applications software.
Debugging is an essential part of application development, and this holds true for Microservice architectures as well. The following article stems from the experience of the Alumnus team while developing an e-health application.
Key Technologies:
Node.js and Express.js, AWS CloudWatch, OpenSearch and S3, the winston and winston-aws-cloudwatch node modules
Introduction:
Complexity increases with the rise in the number of services in a Microservice architecture. Bugs and individual service failures can be tricky to deal with. Digging deep for hours to find the root of an unnamed error can be a daunting and unproductive task.
To effectively deal with the challenges of system errors, request chain breakdowns, or even simply to stay on top of the system architecture, logging is a vital tool. It is undeniably the cornerstone of maintaining and debugging one’s app efficiently.
The easy-to-use console.log() is not that useful when debugging an issue in a sizeable Microservice architecture. Firstly, it is virtually impossible to centralize the logging mechanism if it is handled by the console. Secondly, with a central log file/database it is advantageous to attach as much information as needed to any log instance; console logging cannot achieve this without awkward parsing.
Logs that have been aggregated together generate immense value due to their centralized nature. What if one would like to run a keyword search across log instances? What if one would like to format the log data to be meaningful, readable, and easily digestible? With scattered logs, this is simply too tricky to achieve.
What Is Centralized Logging?
A centralized logging system is a logging solution designed to collect logs from multiple servers, consolidate the data, present a unified view of the logs, and allow insights to be extracted easily. Its key benefits follow directly from this: searchability, a single unified view, and simpler analysis.
This article will break down the best practices, including a step-by-step approach to injecting a robust logging mechanism into a Node.js/Express.js application. Additionally, it will present a walkthrough of setting up a logging solution based on Elasticsearch (Amazon OpenSearch) and Kibana. Lastly, it will cover the process of archiving logs to an S3 bucket.
Process Flow:
Detailed Steps for Setting up Distributed Logging
Set up the Node project using npm init and install the Express server using npm install express --save.
1. Setup Logger
Install other dependencies using npm install --save winston winston-aws-cloudwatch cls-rtracer
const { createLogger, format, transports } = require("winston"),
    CloudWatchTransport = require('winston-aws-cloudwatch'),
    rTracer = require('cls-rtracer');

const { combine, timestamp, printf } = format;

const NODE_ENV = process.env.NODE_ENV || 'development';

// Console line format: timestamp, level, request id (when present), message
const rTracerFormat = printf((info) => {
  const rid = rTracer.id();
  const ridText = rid ? ` [request-id: ${rid}]` : '';
  return `${info.timestamp} ${info.level}${ridText}: ${info.message}`;
});

// Creating the logger
const logger = createLogger({
  format: combine(
    timestamp(),
    rTracerFormat
  ),
  transports: [
    new transports.Console()
  ]
});

// AWS CloudWatch config for transporting logs to CloudWatch
const config = {
  logGroupName: 'my-log-group',
  logStreamName: NODE_ENV,
  createLogGroup: true,
  createLogStream: true,
  awsConfig: {
    accessKeyId: "YOUR_ACCESS_KEY_ID",
    secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
    region: "AWS_REGION"
  },
  formatLog: function (item) {
    const reqId = rTracer.id();
    const reqIdText = reqId ? ': [request-id: ' + reqId + ']' : '';
    return item.meta.timestamp + ' ' + item.level + reqIdText + ': ' + item.message + ' ' + JSON.stringify(item.meta);
  }
};

// Adding the CloudWatch transport to the logger
logger.add(new CloudWatchTransport(config));

logger.level = process.env.LOG_LEVEL || "silly";

// Stream interface so HTTP request loggers (e.g. morgan) can write through winston
logger.stream = {
  write: function (message, encoding) {
    logger.info(message);
  }
};

exports.logger = logger;
2. Setup CloudWatch
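For context, a minimal server.js that wires the logger module above (saved here as logger.js, an assumed file name) and the cls-rtracer middleware into Express could look like the sketch below; the middleware assigns each incoming request the id that both log formats above pick up.

const express = require('express');
const rTracer = require('cls-rtracer');
const { logger } = require('./logger');

const app = express();

// Generate a request id for every incoming request; rTracer.id() in the
// logger formats reads this value
app.use(rTracer.expressMiddleware());

app.get('/', (req, res) => {
  logger.info('Home page requested');
  res.send('Hello from the e-health service');
});

app.listen(3000, () => logger.info('Server listening on port 3000'));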
Now run the server locally using node server.js.
Go to the browser and open http://localhost:3000.
Logs will be displayed on the backend console.
One can also see the logs in CloudWatch inside my-log-group (created above).
Here the logs appear with the details provided in the above code: the timestamp, log level, request id and message, along with any metadata attached to the log instance, such as service name, page name and line number.
3. Setup OpenSearch
In the Data Nodes section:
In the Fine-grained access control section:
It will take a few minutes to create the OpenSearch domain.
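The console is the simplest way to create the domain; for completeness, a rough aws-sdk equivalent is sketched below. The domain name matches the IAM policy in the next step, while the region, instance and storage settings are placeholders, not the article's actual configuration.

const AWS = require('aws-sdk');
const es = new AWS.ES({ region: 'AWS-REGION' }); // replace with your region

es.createElasticsearchDomain({
  DomainName: 'application-log',
  ElasticsearchClusterConfig: {
    InstanceType: 't3.small.elasticsearch', // placeholder Data Nodes choice
    InstanceCount: 1,
  },
  EBSOptions: { EBSEnabled: true, VolumeType: 'gp2', VolumeSize: 10 },
}).promise().then(console.log).catch(console.error);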
4. Setup IAM Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "es:*"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:es:<AWS-region>:<account-id>:domain/application-log/*"
        },
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:<AWS-region>:<account-id>:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:<AWS-region>:<account-id>:log-group:/aws/lambda/LogsToElasticsearch_application-log:*"
            ]
        }
    ]
}
5. OpenSearch Service Subscription
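This step is performed from the CloudWatch console (log group > Subscription filters > Amazon OpenSearch Service), which also generates the LogsToElasticsearch Lambda referenced in the IAM policy above. A programmatic sketch of the same subscription, with the Lambda ARN assumed, would be:

const AWS = require('aws-sdk');
const cwl = new AWS.CloudWatchLogs({ region: 'AWS-REGION' });

// Forward every event from the log group to the OpenSearch-streaming Lambda
cwl.putSubscriptionFilter({
  logGroupName: 'my-log-group',
  filterName: 'application-log-to-opensearch',
  filterPattern: '', // an empty pattern matches all log events
  destinationArn: 'arn:aws:lambda:<AWS-region>:<account-id>:function:LogsToElasticsearch_application-log',
}).promise().then(console.log).catch(console.error);

Note that the Lambda must also permit invocation by the logs service; the console sets up this permission automatically.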
6. Update Lambda Function
7. Setup OpenSearch Dashboard (Kibana)
One can see all the logs in the tables below.
Export AWS CloudWatch Logs to S3
Log retention in CloudWatch is very expensive, which is why it is cost-effective to move old logs to an S3 bucket.
Previously, the log retention period was set to 3 to 5 days; after this period, logs are automatically removed from CloudWatch.
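Retention can be configured from the console or programmatically; a minimal aws-sdk sketch for the log group from step 1 is shown below, with the 3-day value mirroring the period mentioned above.

const AWS = require('aws-sdk');
const cwl = new AWS.CloudWatchLogs({ region: 'AWS-REGION' });

cwl.putRetentionPolicy({
  logGroupName: 'my-log-group',
  retentionInDays: 3, // CloudWatch accepts fixed values only: 1, 3, 5, 7, 14, ...
}).promise().then(console.log).catch(console.error);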
Here, a bucket needs to be set up and configured with an automatic process that moves the logs to it every day.
8. Setup S3 bucket
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.<AWS-region>.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::<bucket-name>"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.<AWS-region>.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket-name>/application-log/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
9. Link S3 bucket to CloudWatch
This is the manual process of archiving logs to S3.
In the next step, the automatic process of archiving logs to S3 will be set up using Lambda and EventBridge.
10. Setup lambda for archiving logs
const AWS = require('aws-sdk');

const cloudConfig = {
  apiVersion: '2014-03-28',
  region: 'AWS-REGION', // replace with your region
  accessKeyId: "YOUR_ACCESS_KEY_ID",
  secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
};
const cloudWatchLogs = new AWS.CloudWatchLogs(cloudConfig);

exports.handler = async (event, context) => {
  const params = {
    destination: 'application-log-archive', // replace with your bucket name
    from: new Date().getTime() - 86400000,  // export the last 24 hours of logs
    logGroupName: 'my-log-group',           // replace with your CloudWatch log group name
    to: new Date().getTime(),
    destinationPrefix: 'application-log',   // replace with the prefix granted in the S3 bucket policy
  };
  // Kick off the export task and report the outcome
  return cloudWatchLogs.createExportTask(params).promise()
    .then((data) => {
      console.log(data);
      return { statusCode: 200, body: data };
    })
    .catch((err) => {
      console.error(err);
      return { statusCode: 501, body: err };
    });
};
An EventBridge trigger will invoke this application-log-export-to-s3 Lambda function every day, so that logs are automatically archived to the newly created S3 bucket.
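The rule itself can be created in the EventBridge console; a one-time setup sketch with the aws-sdk is shown below, where the rule name and the Lambda ARN are assumptions.

const AWS = require('aws-sdk');
const events = new AWS.EventBridge({ region: 'AWS-REGION' });

async function scheduleDailyExport() {
  // Create (or update) a rule that fires once every 24 hours
  await events.putRule({
    Name: 'daily-log-export',
    ScheduleExpression: 'rate(1 day)',
    State: 'ENABLED',
  }).promise();

  // Point the rule at the archiving Lambda
  await events.putTargets({
    Rule: 'daily-log-export',
    Targets: [{
      Id: 'application-log-export-to-s3',
      Arn: 'arn:aws:lambda:<AWS-region>:<account-id>:function:application-log-export-to-s3',
    }],
  }).promise();
}

scheduleDailyExport().catch(console.error);

The Lambda additionally needs a resource-based permission allowing events.amazonaws.com to invoke it; the console adds this automatically when the target is attached there.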
Note: One can export the logs to Amazon OpenSearch, to an AWS S3 bucket, or to both, depending on the requirements. Debugging logs in Amazon OpenSearch is easier than in S3, but it is a little more costly than an AWS S3 bucket.
AWS X-RAY
AWS X-Ray makes it easy for developers to analyze the behavior of their distributed applications with end-to-end tracing capabilities. One can use X-Ray to identify performance bottlenecks, edge-case errors, and other hard-to-detect issues. X-Ray supports applications of any type or size, in development or in production, from simple asynchronous event calls and three-tier web applications to complex distributed applications built on a Microservices architecture. This enables developers to quickly find and address problems in their applications and improve the experience for end users. Notably, AWS X-Ray is integrated with CloudWatch, so that one can view logs, metrics, and traces in one place.
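As a pointer, instrumenting the Express app from this article with the X-Ray SDK is a small change. The sketch below assumes npm install aws-xray-sdk and a running X-Ray daemon; the segment name is arbitrary.

const AWSXRay = require('aws-xray-sdk');
const express = require('express');

const app = express();

// Open an X-Ray segment for every incoming request
app.use(AWSXRay.express.openSegment('e-health-app'));

app.get('/', (req, res) => res.send('traced'));

// Close the segment once the routes have run
app.use(AWSXRay.express.closeSegment());

app.listen(3000);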