Server-Sent Events Using Spring WebFlux and Reactive Kafka

Let's walk through an architecture in which servers push data to clients using Server-Sent Events and Reactive Kafka. Take a technical deep dive with Gagan Solur Venkatesh.

We will look at data streaming from a Reactive Kafka-based WebFlux REST server to a WebFlux client in a non-blocking manner.

The architecture designed below can be used to:

  • Push data to external or internal apps in near real-time.
  • Write data to files and securely copy them to cloud services.
  • Push the same data from a Kafka topic to multiple clients.

Let's get started!

Image 1: Architecture


Before we execute a sample application to demonstrate Server-Sent Events (SSE) using Spring WebFlux and Reactive Kafka, let's understand the fundamental concepts:

What are Server-Sent Events?

Server-Sent Events (SSE) is a server push technology that allows a client to receive automatic updates from a server over a single HTTP connection.

Image 2: Server-Sent Events

SSE can be used to:

  • Replace long polling (which opens a new connection for every poll) with a single connection carrying a continuous event stream; a sample stream is shown after this list.
  • Enable apps that use one-way data communication (e.g., e-commerce websites and live stock price updates).
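
On the wire, an SSE response body is plain text in the text/event-stream format: each event is one or more data: lines terminated by a blank line. For illustration (the payloads below are hypothetical), a stream of live stock price updates would arrive as:

data: {"symbol":"ABC","price":101.25}

data: {"symbol":"ABC","price":101.40}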

What is Spring WebFlux?

The Spring WebFlux framework is a fully asynchronous, non-blocking reactive web stack that enables the handling of a massive number of concurrent connections. WebFlux supports Reactive Streams back pressure and runs on servers such as Netty. It lets us scale services vertically to handle greater load on the same hardware.

What is Reactive Kafka?

Reactive Kafka (reactor-kafka) is a reactive API for Kafka based on Project Reactor and the Kafka producer/consumer API. It enables data to be published to and consumed from Kafka using a functional API with non-blocking back pressure and low overhead, allowing Reactive Kafka to be integrated with other Reactor systems and providing an end-to-end reactive pipeline.

Note: To gain a thorough grasp of WebFlux and Reactive Kafka, make sure you understand their terminology first.

As per the architecture shown in Image 1, we will build a WebFlux server using the Spring WebFlux framework and Reactive Kafka, exposing a REST API that clients can call over secure HTTP requests.

Once a secure connection between the client and the WebFlux server is established, the server consumes messages from Kafka topics and pushes the data to the client asynchronously, without closing the connection unless required.

Rather than build a reactive Kafka producer from scratch, we will leverage the existing producer example in the reactor-kafka repository. Instead of building a WebFlux client, we will test the server's SSE response using a curl command in the terminal.

Prerequisites:

Let's create a Spring Boot application using the dependencies shown below.

Image 3: pom.xml
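
The dependency list appears only as an image in the original, so here is a minimal sketch of the pom.xml dependencies such a setup needs; the reactor-kafka version below is an assumption and should be aligned with your Spring Boot release.

<dependencies>
    <!-- Reactive web stack: WebFlux running on Netty -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-webflux</artifactId>
    </dependency>
    <!-- Reactive Kafka producer/consumer API built on Project Reactor -->
    <dependency>
        <groupId>io.projectreactor.kafka</groupId>
        <artifactId>reactor-kafka</artifactId>
        <version>1.3.21</version> <!-- assumed version; check your BOM -->
    </dependency>
</dependencies>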

Let's create a Kafka receiver configuration (the consumer) as shown below. It uses a generic GROUP_ID_CONFIG since we are handling a single client for now, enables auto commits, and always reads the earliest messages, though this can also be updated to latest.

If we enable multiple clients, each client can receive messages from the same topic based on the last committed offset.

To keep things simple, we will use string deserializers; these can be extended to generic JSON/Avro schemas.

Image 4: Kafka receiver configuration
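
Since the configuration is shown only as an image, here is a minimal sketch of what it might look like using reactor-kafka; the broker address, group id, and topic name are placeholders, not values from the original.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;

@Configuration
public class KafkaReceiverConfig {

    @Bean
    public ReceiverOptions<String, String> receiverOptions() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sse-consumer-group");      // generic group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);            // auto commits enabled
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");       // or "latest"
        return ReceiverOptions.<String, String>create(props)
                .subscription(Collections.singleton("sse-topic"));            // placeholder topic
    }

    @Bean
    public KafkaReceiver<String, String> kafkaReceiver(ReceiverOptions<String, String> options) {
        return KafkaReceiver.create(options);
    }
}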

After the configuration is ready, we will create a REST controller that consumes messages from Kafka topics and sends responses back as a Flux of data. Use MediaType.TEXT_EVENT_STREAM_VALUE as the content type; this tells the client that a connection will be established and the stream will stay open for sending events from the server to the client.

Image 5: REST SSE controller
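
The controller is likewise shown as an image; below is a minimal sketch of such an endpoint. The /sse path is taken from the curl request used later; the class and method names are assumptions.

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import reactor.kafka.receiver.KafkaReceiver;

@RestController
public class SseController {

    private final KafkaReceiver<String, String> kafkaReceiver;

    public SseController(KafkaReceiver<String, String> kafkaReceiver) {
        this.kafkaReceiver = kafkaReceiver;
    }

    // TEXT_EVENT_STREAM_VALUE keeps the HTTP connection open and streams
    // each Kafka record to the client as a server-sent event.
    @GetMapping(path = "/sse", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> streamEvents() {
        return kafkaReceiver.receive()
                .map(rec -> rec.value());
    }
}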

Before we test this application, here is a sample producer leveraged from the reactor-kafka repository, using a StringSerializer.

Start the Kafka server on localhost:9092 and create a topic as shown in the configurations above.

Image 6: Kafka sender
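
The sender code is also shown only as an image; here is a minimal standalone sketch modeled on the reactor-kafka sample producer. The topic name and message contents are placeholders, and the count of 40 simply matches the run described later.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import reactor.core.publisher.Flux;
import reactor.kafka.sender.KafkaSender;
import reactor.kafka.sender.SenderOptions;
import reactor.kafka.sender.SenderRecord;

public class SampleSender {

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        KafkaSender<String, String> sender = KafkaSender.create(SenderOptions.create(props));

        // Publish 40 messages; the record index is carried as correlation
        // metadata so it can be logged when the broker acknowledges the send.
        sender.send(Flux.range(1, 40)
                        .map(i -> SenderRecord.create(
                                new ProducerRecord<>("sse-topic", "key-" + i, "message-" + i), i)))
                .doOnNext(result -> System.out.println("Sent message " + result.correlationMetadata()))
                .doOnComplete(sender::close)
                .blockLast();
    }
}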

The following commands are helpful:

$ bin/zookeeper-server-start.sh config/zookeeper.properties
$ bin/kafka-server-start.sh config/server.properties
$ bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic <topic_name>
$ bin/kafka-topics.sh --list --bootstrap-server localhost:9092

The image below shows the topic created with one partition and no messages produced yet by the Kafka sender (Count=0).

Image 7: Conduktor

Next, let's run the Spring Boot application on localhost:8080 and make a sample request to the server using a curl command in the terminal, which keeps the connection alive.

curl --location --request GET 'localhost:8080/sse' \
 --header 'Content-Type: text/event-stream;charset=UTF-8' \
 --header 'Accept: text/event-stream;charset=UTF-8'

Image 8: Spring Boot application console showing the registered Kafka consumer

A Kafka receiver is registered once the curl command is executed in the terminal (as shown in the console above).

Now, let's push some data onto the Kafka topic by running the Kafka sender and see how the data is received in the terminal, which is acting as the client here.

Image 9: Server-Sent Events received

As we can see in Image 9, once messages are published onto the topic, the data gets pushed to the terminal; these are the Server-Sent Events (SSE).

Image 10 shows the Conduktor tool tracking the consumer and displaying the messages consumed by the respective client.

Image 10: Conduktor

We can see that 40 messages were published and all of them were consumed (Lag=0, End-Current=40-40) and sent to the client, as shown in the terminal.

This project can be expanded to support multiple clients by making GROUP_ID_CONFIG unique and setting offsets to latest for each client; this creates a new consumer group per client while keeping each connection alive and streaming the data asynchronously.
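
As a minimal sketch of that change (the UUID-based group id is an assumption, not taken from the original code), the consumer properties in the receiver configuration could be adjusted like this:

// Give each client its own consumer group so every client receives all messages.
props.put(ConsumerConfig.GROUP_ID_CONFIG, "sse-client-" + java.util.UUID.randomUUID());
// Start new consumer groups from the latest offset instead of replaying history.
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");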

If a client loses its connection to the server and is able to re-establish a secure connection after the interruption, it will rejoin its existing consumer group and receive data from the previously committed offset.

This architecture can also be used to build a batch scheduler that consumes messages, writes them to files, and securely copies the files to cloud services, allowing clients to access them for further processing.

If you enjoyed this basic concept walkthrough of SSE using Spring WebFlux and Reactive Kafka, please feel free to share and follow our publication!

Refer to the code here.

Click here to see the original article.
