Building a Real-Time Event-Driven Application with Node.js and Apache Kafka on Heroku
Hello Everyone,
Mad Scientist Fidel Vetino here, walking you through building a simple event-driven application with Kafka on Heroku.
We'll dive into building, deploying, and managing a real-time event-driven application using Node.js and Apache Kafka on Heroku. Each phase explains the purpose of its steps and includes all the necessary code.
Project Overview
We'll simulate weather sensors (producers) generating random data (temperature, humidity, barometric pressure) and sending these events to Apache Kafka. A consumer will listen to these topics and log the received data. The entire setup will be deployed to Heroku, where we'll monitor events using Heroku logs.
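To make the data flow concrete, each event is a small JSON document published to a weather-data topic. A single message might look like the sketch below (field names match the producer we build later; the values themselves are illustrative):
javascript
// One illustrative event as it appears on the weather-data topic.
// The producer serializes this object with JSON.stringify before sending it.
const sampleEvent = {
  temperature: 23.7, // random value in [0, 100)
  humidity: 61.2,    // random value in [0, 100)
  pressure: 987.3,   // random value in [0, 1000)
};
// On the wire: {"temperature":23.7,"humidity":61.2,"pressure":987.3}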
Prerequisites
Before starting, you'll need a Heroku account, the Heroku CLI, Node.js and npm, and Git installed locally.
Setting Up a Kafka Cluster on Heroku
Step 1: Log in via the Heroku CLI
Purpose: Authenticate your local Heroku CLI with your Heroku account.
Open your terminal and log in to Heroku:
shell
heroku login
Step 2: Create a Heroku Application
Purpose: Create a new Heroku application to host your project.
Create a new Heroku app:
shell
heroku create weather-eda
You can choose any unique name for your app. If you run later commands outside a directory that has a heroku git remote, add -a weather-eda (or your app name) so the CLI knows which app to target.
Step 3: Add the Apache Kafka on Heroku Add-On
Purpose: Provision a Kafka instance as a service on Heroku.
Add Apache Kafka to your Heroku app:
shell
heroku addons:create heroku-kafka:basic-0
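Provisioning a Kafka cluster can take several minutes. You can check the add-on's state from the standard Heroku CLI while you wait; for example, listing the app's add-ons shows whether the cluster is still being created:
shell
heroku addons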
Step 4: Get Kafka Credentials and Configurations
Purpose: Retrieve the necessary Kafka credentials and configuration details for your application.
Fetch your Kafka credentials and configuration. The add-on exposes them as config vars on your app (such as KAFKA_URL, KAFKA_TRUSTED_CERT, KAFKA_CLIENT_CERT, and KAFKA_CLIENT_CERT_KEY):
shell
heroku config
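If you only need a single value, such as the broker list, heroku config:get prints one config var:
shell
heroku config:get KAFKA_URL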
Step 5: Install the Kafka Plugin Into the Heroku CLI
Purpose: Install the Kafka plugin to interact with Kafka through the Heroku CLI.
Install the Kafka CLI plugin:
shell
heroku plugins:install heroku-kafka
Step 6: Test Interacting With the Cluster
Purpose: Ensure that you can produce and consume messages in Kafka.
Create a test topic:
shell
heroku kafka:topics:create test-topic
Produce an event to the test topic:
shell
echo "Test Message" | heroku kafka:topics:produce test-topic
Consume the event:
shell
heroku kafka:topics:tail test-topic
Destroy the test topic:
shell
heroku kafka:topics:destroy test-topic
Prepare Kafka for Our Application
Purpose: Create the Kafka topic that your application will use.
Prepare Kafka by creating the topic your application will use:
shell
heroku kafka:topics:create weather-data
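To double-check that the topic exists and see its partition and retention settings, you can use the plugin's topic info command:
shell
heroku kafka:topics:info weather-data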
Build the Node.js Application
Step 1: Set Up the Project
Purpose: Initialize a new Node.js project.
Initialize a new Node.js project:
shell
mkdir weather-eda
cd weather-eda
npm init -y
Step 2: Install Dependencies
Purpose: Install the necessary libraries for Kafka integration.
Install the KafkaJS library:
shell
npm install kafkajs
Step 3: Create the Producer
Purpose: Set up a producer to generate and send weather data to Kafka.
Create a file named producer.js:
javascript
const { Kafka } = require('kafkajs');

// Kafka configuration
const kafka = new Kafka({
  clientId: 'weather-producer',
  brokers: ['<KAFKA_BROKER_URL>']
});

const producer = kafka.producer();

// Produce random weather data
const run = async () => {
  await producer.connect();
  setInterval(async () => {
    const data = {
      temperature: Math.random() * 100,
      humidity: Math.random() * 100,
      pressure: Math.random() * 1000,
    };
    await producer.send({
      topic: 'weather-data',
      messages: [{ value: JSON.stringify(data) }],
    });
    console.log('Produced data:', data);
  }, 5000);
};

run().catch(console.error);
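On Heroku you won't hard-code a broker URL like the <KAFKA_BROKER_URL> placeholder above: the add-on exposes the broker list and TLS credentials as config vars, and connections use SSL with the provided client certificate. Below is a minimal sketch of wiring those into KafkaJS, assuming the standard KAFKA_URL, KAFKA_TRUSTED_CERT, KAFKA_CLIENT_CERT, and KAFKA_CLIENT_CERT_KEY vars (plus KAFKA_PREFIX for topic names on multi-tenant plans). The buildKafka helper and file name are my own, not part of the tutorial's files.
javascript
// kafka-config.js -- a sketch, not one of the original files above.
// Assumes the config vars set by the Apache Kafka on Heroku add-on:
//   KAFKA_URL                comma-separated kafka+ssl:// broker URLs
//   KAFKA_TRUSTED_CERT       broker CA certificate
//   KAFKA_CLIENT_CERT(_KEY)  client certificate and key for mutual TLS
//   KAFKA_PREFIX             topic prefix on multi-tenant plans (may be unset)
const { Kafka } = require('kafkajs');

function buildKafka(clientId) {
  // kafka+ssl://host:port -> host:port
  const brokers = process.env.KAFKA_URL.split(',').map((url) =>
    url.replace('kafka+ssl://', '')
  );

  return new Kafka({
    clientId,
    brokers,
    ssl: {
      rejectUnauthorized: false, // Heroku's broker certs aren't in the default CA bundle
      ca: [process.env.KAFKA_TRUSTED_CERT],
      cert: process.env.KAFKA_CLIENT_CERT,
      key: process.env.KAFKA_CLIENT_CERT_KEY,
    },
  });
}

// On multi-tenant plans, topic names carry the prefix, e.g. "<prefix>weather-data".
const topic = `${process.env.KAFKA_PREFIX || ''}weather-data`;

module.exports = { buildKafka, topic };
Both producer.js and the consumer below could then call buildKafka('weather-producer') or buildKafka('weather-consumer') and use the exported topic name instead of the hard-coded values.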
Step 4: Create the Consumer
Purpose: Set up a consumer to listen for and log weather data from Kafka.
Create a file named consumer.js:
javascript
const { Kafka } = require('kafkajs');

// Kafka configuration
const kafka = new Kafka({
  clientId: 'weather-consumer',
  brokers: ['<KAFKA_BROKER_URL>']
});

const consumer = kafka.consumer({ groupId: 'weather-group' });

// Consume weather data
const run = async () => {
  await consumer.connect();
  await consumer.subscribe({ topic: 'weather-data', fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log('Consumed data:', message.value.toString());
    },
  });
};

run().catch(console.error);
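Heroku sends SIGTERM when it stops or restarts a dyno, so it's worth disconnecting from Kafka cleanly rather than exiting mid-message. A small addition you could append to the end of consumer.js (and mirror in producer.js with producer.disconnect()):
javascript
// Graceful shutdown: disconnect from Kafka before the dyno exits.
const shutdown = async () => {
  try {
    await consumer.disconnect();
  } finally {
    process.exit(0);
  }
};

process.on('SIGTERM', shutdown); // sent by Heroku on dyno stop/restart
process.on('SIGINT', shutdown);  // sent by Ctrl+C when running locally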
Step 5: Define Processes in Procfile
Purpose: Define the processes to be managed by Heroku.
Create a Procfile in the project root. Neither process serves HTTP traffic, so declare them as non-web process types (a web process that never binds to its assigned port would be shut down by Heroku):
shell
producer: node producer.js
consumer: node consumer.js
Step 6: Deploy and Test the Application
Purpose: Deploy your application to Heroku and ensure it functions as expected.
Deploy the application to Heroku:
shell
git init
heroku git:remote -a weather-eda
git add .
git commit -m "Initial commit"
git push heroku master
(If your local default branch is named main, push that instead: git push heroku main.)
Scale the application to run the producer and consumer processes:
shell
heroku ps:scale producer=1 consumer=1
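You can confirm that both processes came up with the standard process listing:
shell
heroku ps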
Monitor the events using Heroku logs:
shell
heroku logs --tail
Through this project, you've learned how to set up a Kafka cluster on Heroku, build a Node.js application to produce and consume events, and deploy the entire setup to Heroku. This architecture allows for real-time data processing and easy scalability, showcasing the power of event-driven architectures. With this foundation, you're now equipped to explore more complex use cases and further enhance your applications with Apache Kafka and Heroku. Happy coding and innovating!
Fidel V (the Mad Scientist)
Project Engineer || Solution Architect || Technical Advisor
Security ? AI ? Systems ? Cloud ? Software
The #Mad_Scientist Fidel V. || Technology Innovator & Visionary