Kafka on a Single Node: A Real-Time Use Case Leveraging Pega & Spring Boot: Part 1


In this article, I'm going to explore a real-world use case for Kafka that brings together Pega and Spring Boot applications.

I wrote this article to make Kafka easier for Pega developers by covering it from a holistic perspective. It explains how Pega and Spring Boot applications talk to each other with Kafka sitting between them.

Business Scenario:

Let's say there is an organization, "ABC", which has stored its customer information in a Java Spring Boot application since the beginning. Assume there are millions of customers worldwide and a lot of daily activity involving customer details.

Now, with the evolution of Pega's low-code platform, let's assume ABC wants to store its customer information in Pega as well. They have built a Pega application and started storing customer data in the Pega database.

From here on, I will refer to the Spring Boot application as the legacy application and the Pega application as the new application.

Since there are millions of customers around the world, the organization employs hundreds of CSRs (customer service representatives) to interact with customers and serve their queries.

Both the legacy application and the new Pega application store customer data. If a customer wants to update any of their personal details, they reach out to a CSR to get their data updated.

The legacy application is still in use and has not been fully replaced, so some customers may call CSRs working on the legacy application, while others may call CSRs working on the Pega application.


Problem:

With millions of customers across the globe, this process sometimes leads to customer data mismatches between the two applications.

For example, if a few data fields are updated in the legacy application by a legacy-side CSR, the changes are not reflected in the Pega database, and vice versa.

Let's assume there are many database tables that require changes whenever a CSR modifies customer data.

If we employ a normal synchronous API call to update the other system, it is very time-consuming, and the business doesn't want that.

Since there are a lot of interactions daily across the world, the business wants to address this problem as soon as possible.

So, what can we do about it?

Solution:

If you observe the above scenario closely, it calls for bidirectional communication. If data is updated in the legacy application, the same data must be updated in multiple Pega tables; similarly, if a CSR updates data on the Pega end, the same data must be propagated back to the Spring Boot application.

A few things to consider:

- We want asynchronous communication.

- Communication must be bidirectional.

- The performance of the systems shouldn't be affected, because there are a lot of database tables to be updated in near real time.

- The solution should be future-proof.

- If another application built on a different technology comes along tomorrow and wants to store the same information, there shouldn't be any hassle.

The answer is Kafka. We can employ Kafka for exactly such scenarios.

This article expects you to have a basic understanding of how Kafka works.

How does Kafka help address the above issue?

In Kafka, we have something called "topics", and Kafka works on a publish/subscribe (pub/sub) model.

For the above scenario, I will create the below topics.

1. PegaToJava

- Pega will be the producer.

- Spring Boot will be the consumer.

2. JavaToPega

- Spring Boot will be the producer.

- Pega will be the consumer.

If a CSR modifies customer data in the Pega application, Pega produces a message to the "PegaToJava" topic, and the Spring Boot app consumes from that topic.

And vice-versa.

I hope you can see where we are heading.

Now assume something else comes up tomorrow. Let's say a .NET application also wants to store the same customer data in its systems.

Topics can be created as follows:

1. PegaToSpringnDot

- Pega will be the producer.

- The Spring Boot and .NET applications are the consumers.

2. SpringToPegaDot

- Spring Boot will be the producer.

- Pega and .NET will be the consumers.

3. DotToPegaSpring

- .NET will be the producer.

- Spring Boot and Pega are the consumers.

From here on, I will jump into the hands-on part.

I will implement the above scenario in the following steps.

1. Set up Kafka on a single Amazon EC2 node.

2. Configure a Spring Boot application and connect it to the Kafka topics.

3. Configure a Pega application and connect it to Kafka.

In this article, I will cover the Kafka installation and configuration details.

- I'm using a Pega Cloud instance from academy.pega.com for this POC.

- I'm using AWS EC2 to install the Kafka server.

- I'm using my personal laptop to run the Spring Boot application.

Set Up Kafka on a Single Amazon EC2 Node

I'm going to install Kafka myself on an Amazon EC2 instance. I prefer to do it this way because we will know exactly how Kafka works under the hood.

Note: I'm not using Amazon MSK, which is AWS's managed Kafka service.

Create an AWS EC2 instance with an Amazon Linux image. I created an EC2 instance of type t2.micro.

Connect to the EC2 instance.
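For reference, a typical SSH connection to an Amazon Linux instance looks like the command below; the key file name and public IP are placeholders for your own values.

ssh -i my-key.pem ec2-user@<ec2-public-ipv4>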

Run the below commands.

1. sudo yum install java-1.8.0 -y

This installs a Java 8 package. Kafka is built on Java, so Java needs to be present on your host machine.
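To confirm the installation worked, you can check the Java version:

java -version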

2. wget https://downloads.apache.org/kafka/3.5.2/kafka_2.13-3.5.2.tgz

This downloads the Kafka 3.5.2 release package (built for Scala 2.13).

3. tar -xzf kafka_2.13-3.5.2.tgz

This extracts the Kafka files into a directory named kafka_2.13-3.5.2.

4. cd kafka_2.13-3.5.2 (navigate to this directory).

5. Navigate to the config directory: cd config

Then open the server.properties file: vi server.properties

Find the advertised.listeners line, uncomment it, and update it with your EC2 instance's public IPv4 address.
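In the stock server.properties that ships with Kafka 3.5.x, this entry is commented out by default. After editing, it should look roughly like this, with the placeholder replaced by your instance's public IPv4 address:

advertised.listeners=PLAINTEXT://<ec2-public-ipv4>:9092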

When you do this, you're exposing the Kafka broker on your EC2 instance to the internet, so configure your security group accordingly.
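As an illustration, if you manage the security group with the AWS CLI, an ingress rule for Kafka's default port 9092 can be added with a command like the one below. The security group ID and CIDR range are placeholders; restrict the CIDR to your own client IPs rather than opening the port to everyone.

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 9092 --cidr 203.0.113.0/24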

6. Go back up and into the bin folder:

cd ..

cd bin

Here you will find the shell scripts used to run your Kafka server.

Before starting your Kafka server, make this change as well.

Open the kafka-server-start.sh file using vi kafka-server-start.sh

Find the line that sets KAFKA_HEAP_OPTS and reduce the heap values to 256M and 128M.
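In the Kafka 3.5.x distribution, kafka-server-start.sh sets a 1 GB default heap via KAFKA_HEAP_OPTS; the edit reduces it as shown below.

# Before:
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"

# After (reduced for a low-memory t2.micro instance):
export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"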

I'm doing this because I'm using an AWS free-tier EC2 instance, which has very little RAM (a t2.micro has 1 GB). This change lowers the JVM heap so that the Kafka Java process consumes less memory.

Note: if you're using your own laptop or a higher-end EC2 machine, you don't need to make this particular change.

Now, we are good to start our Kafka server.

Run the below commands in the same order.

- cd kafka_2.13-3.5.2 (navigate back to the Kafka root directory), then run:

- bin/zookeeper-server-start.sh config/zookeeper.properties

This command starts ZooKeeper, which this Kafka setup uses for cluster metadata.

- Open another terminal, navigate back to kafka_2.13-3.5.2, and run:

- bin/kafka-server-start.sh config/server.properties

This runs the actual Kafka broker.

Once ZooKeeper is up and the Kafka server is running, you're good to go.
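One practical note: both start scripts run in the foreground and stop when you close the terminal. If you want them to survive your SSH session ending, one option is to launch them in the background with nohup (the log file names here are just examples):

nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zookeeper.log 2>&1 &
nohup bin/kafka-server-start.sh config/server.properties > kafka.log 2>&1 &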

Now, create the two topics we discussed earlier by running the commands below.

bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic PegaToJava --partitions 1 --replication-factor 1        
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic JavaToPega --partitions 1 --replication-factor 1        

PegaToJava: we will configure rules in Pega to produce messages to this topic, and Spring Boot will consume them from it.

JavaToPega: the reverse of the above. Spring Boot will produce to this topic, and Pega will consume from it.

Run the command below to list the topics you created.

bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
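Optionally, you can smoke-test a topic end to end with the console producer and consumer that ship with Kafka. The JSON payload below is only a hypothetical example of what a customer-update message might look like:

# Terminal 1: consume everything on the PegaToJava topic from the beginning
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic PegaToJava --from-beginning

# Terminal 2: start a producer on the same topic, then type the message and press Enter
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic PegaToJava
{"customerId": "C1001", "email": "jane.doe@example.com", "phone": "555-0100"}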

We're done with the Kafka setup and the topic creation.

What's next?

Observe the database tables below.

Pega database (new application):

Spring Boot database (legacy application):

If you look at the legacy application's database and the new Pega application's database, the data is consistent at this point.

Now, recall the “Problem” statement that I mentioned at the start of this article.

If a customer calls a Pega CSR and gets their data modified, the data is up to date in Pega but not in Spring Boot.

Likewise, if a customer calls a Spring Boot app CSR and gets their data modified, the data is up to date in Spring Boot but not in Pega.

We're going to see how the above problem can be solved.

In the upcoming article, you will find the following:

1. From Pega, how do we connect to the Kafka instance that we just installed?

2. What rules need to be configured in Pega to produce messages to Kafka?

3. What rules need to be configured in Pega to consume the messages coming from Spring Boot?

Thanks for reading this far.

Please share it with your network if you find it useful.
