Vert.X and Micro Services


I'm going to cover here how to implement Micro-Services inside a Vert.X server.

 

What's Micro-Service-Architecture (MSA) all about?

 

"In computing, microservices is a software architecture style in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small building blocks, highly decoupled and focused on doing a small task, facilitating a modular approach to system-building.”- Wikipedia.

 

Do one thing, and do it well (and fast):

 

MSA is all about small pieces of code, each executing a single small B.L. (business logic) requirement, so each one can run fast and stay decoupled from other Micro-Services.

 

It's like you do only one thing (let's say, tokenizing words) and the next guy picks up and makes use of those words (let's say, tagging them).

 

This small mission is decoupled from other jobs, so it can be easily adapted, reused and maintained.

 

Since Vert.X is all about the 'Don't Block Me' concept, MSA fits in well here.

 

An MS (micro-service) can easily be replaced by a better one, if and when needed, without affecting the rest of the system.

 

If that's not good enough, since micro-services are small and isolated, they can easily be unit-tested.

 

Having said all that, we must carefully divide our missions into MSs. We can't go too fine-grained: if we do (AKA ending up with Nano Services), the overhead in terms of communication and fragmented B.L. might kill us.

What's Vert.X?

Vert.X is a lightweight framework that's made to handle thousands of calls per second.

It runs on a JVM, and can invoke Java, Groovy, Ruby, Python and JavaScript code. It offers a nice set of API calls serving the Event-Driven pattern.

It provides a fast built-in Event Loop (Event Bus) following the Reactor pattern.

The basic idea here is to avoid blocking the Event Loop- AKA 'Don't Block Me'- so multiple calls can be served.

It can easily go clustered to maintain HA (high availability).
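
For the record, going clustered is mostly a matter of using a different factory call. A minimal sketch, assuming a cluster manager (e.g. Hazelcast) is on the classpath:

Vertx.clusteredVertx(new VertxOptions(), res -> {
    if (res.succeeded()) {
        Vertx clusteredVertx = res.result();
        // Deploy Verticles on the clustered instance as usual.
    }
});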

To avoid blocking, the flow runs in an asynchronous fashion.

The basic component Vert.X is offering is called a Verticle. Any Verticle can represent a service- in our case, a Micro-Service.

Vert.X supports two types of Verticles, a Standard Verticle and a Worker Verticle.

Standard Verticle- is always executed on an Event Loop thread, and is therefore a good fit for lightweight jobs. A Standard Verticle can still execute a piece of code asynchronously on another thread, by calling executeBlocking(), to free the Event Loop for other jobs.

Worker Verticle- runs on threads from the worker pool. Basically it comes in two flavors: a single-threaded worker, or a multi-threaded worker that can be executed in parallel by different threads.
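
To make the 'Don't Block Me' idea concrete, here is a minimal sketch of the executeBlocking() route from inside a Standard Verticle. lookupReplyFromDb() is a hypothetical blocking call, not part of the Bot code:

vertx.executeBlocking(future -> {
    // Runs on a worker thread- it's safe to block here.
    String reply = lookupReplyFromDb("hello");
    future.complete(reply);
}, res -> {
    // Back on the Event Loop with the result.
    if (res.succeeded()) {
        System.out.println("Got: " + res.result());
    }
});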

Vert.X provides a fast built-in EventBus to run messaging between Verticles, or other components (internal to Vert.X or even on the frontend).

Both send (point-to-point) and publish (broadcast) messaging are supported by Vert.X, as sketched below.
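
A minimal sketch of the two flavors, from inside a Verticle (the api_getMood address and the payload are just illustrations, not taken from the Bot code):

EventBus eb = vertx.eventBus();

// send()- point-to-point: exactly one of the consumers registered on the address gets the message.
eb.send("api_getMood", new JsonObject().put("text", "I love this!"));

// publish()- broadcast: every consumer registered on the address gets a copy.
eb.publish("api_getMood", new JsonObject().put("text", "I love this!"));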

OK, those were a few words on what we're going to discuss here.

Let's talk code now!

We're going to analyze a real life scenario of implementing a Bot machine to serve thousands of parallel calls.

I'll take the time to go into the details of how the Bot is implemented in a separate post in the near future.

For now, let's keep the focus on the API level.

The services that need to be implemented are:

1. get a reply for a given text.

Well… only one actually. We put in some text and get a reply.

Yet, that's not a good practice, because what we're going to end up with is one giant monolithic service.

So let's try to divide and use those:

a. findReplyInDomain()- try to match a reply within a specified linguistic domain.

b. findDomainForText()- does the text belong to sports, greetings, etc.?

c. getMoodForText()- what's the mood (sentiment) of the text, in terms of negative or positive?

d. shouldMoveFromState()- is the conversation ready to move forward to the next level?

And a few more…

The basic assumption is that those services are decoupled and run fast, so none will block the Event Loop. Should we bump into a blocking service, we'll switch to a Worker Verticle or call executeBlocking(), as shown below.
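
Switching a service to a Worker is just a deployment option. A minimal sketch, assuming a hypothetical SlowLookupServer Verticle and the vertx instance created at deployment time:

// Deploy the (hypothetical) SlowLookupServer as a Worker Verticle,
// so its blocking work runs on the worker pool instead of the Event Loop.
vertx.deployVerticle(new SlowLookupServer(), new DeploymentOptions().setWorker(true));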

We're going to assign a Verticle to each of those micro-services.

The flow will be enforced via the EventBus (Event Loop), which is shared across Vert.X.

So, putting it all together, we're going to end up with one Verticle per micro-service, all connected through the EventBus*.

* FW/LB are omitted.

 

Writing a Verticle:

 

As of V3.0, a Verticle is an extension of AbstractVerticle. You need to override start() (where the show runs) and stop() (to clean up resources, if needed).

 

A Verticle is deployed by calling vertx.deployVerticle(). Just after the Verticle is deployed, its start() method is called for you. You can deploy an Object (AKA one that you've already created), or let reflection do the job for you by providing the fully qualified name of the Verticle.

 

You can also undeploy a Verticle.
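
Both deployment flavors, and undeployment, look roughly like this (a minimal sketch; com.example.bot.MoodServer is an illustrative fully qualified name):

// Deploy by fully qualified class name- reflection creates the instance for you.
vertx.deployVerticle("com.example.bot.MoodServer", res -> {
    if (res.succeeded()) {
        String deploymentID = res.result();
        // ...and later on, undeploy it when it's no longer needed.
        vertx.undeploy(deploymentID);
    }
});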

 

How it's all connected together:

 

Deploying the Verticles:

 

// Create a new instance of Vert.X.
Vertx vertx = Vertx.vertx();

// Create a Router that needs to be shared across all Verticle Objects.
Router router = Router.router(vertx);

// This one handles HTTP requests for static content. It's optional.
vertx.deployVerticle(new HTTPServer(port, router));

// Those ones are to serve the conversation engine that drives the reply for the given text.
vertx.deployVerticle(new MoodServer(router));
vertx.deployVerticle(new TextDomainServer(router));
// and a few more Verticles...

// This one handles getReplyForText().
vertx.deployVerticle(new ConversationServer(router));

 

And each Verticle will take the form of:

 

public class ConversationServer extends AbstractVerticle {

    // The Router shared across all Verticles, injected on construction.
    private final Router router;

    public ConversationServer(Router router) {
        this.router = router;
    }

    /**
     * The starting point.
     *
     * @param fut the future to complete once the Verticle is up.
     */
    @Override
    public void start(Future<Void> fut) {
        // Handle any initialization process here.
        startApp(fut);
    }

    /**
     * This is called on shutting down the server.
     *
     * @param stopFuture the future to complete once cleanup is done.
     * @throws Exception
     */
    @Override
    public void stop(Future<Void> stopFuture) throws Exception {
        // Should something be cleaned up?
        stopFuture.complete();
    }

    /**
     * Sets up the EventBus bridge and registers the consumers.
     *
     * @param fut the future to use here.
     */
    private void startApp(Future<Void> fut) {
        // Allow events for the designated addresses in/out of the event bus bridge.
        // Inbound permits requests whose addresses match api_get... / api_set...
        // Outbound is not restricted.
        BridgeOptions opts = new BridgeOptions()
            .addInboundPermitted(new PermittedOptions().setAddressRegex("api_[g,s]+et.+"))
            .addOutboundPermitted(new PermittedOptions().setAddressRegex(".+"));

        // Create the event bus bridge and add it to the router.
        SockJSHandler sockJSHandler = SockJSHandler.create(vertx);
        sockJSHandler.bridge(opts);
        router.route("/eventbus/*").handler(sockJSHandler);

        EventBus eb = vertx.eventBus();

        // Register to listen for messages coming IN to the server from the client or other services.
        eb.consumer("a_path").handler(message -> {
            JsonObject json = (JsonObject) message.body();
            sendReply(json, message);
        });

        // Signal that the Verticle is up and running.
        fut.complete();
    }

    /**
     * Handles the message and sends back the reply.
     *
     * @param json the JSON Object gotten from the client.
     * @param message the message gotten from the EventBus.
     */
    private void sendReply(JsonObject json, Message<Object> message) {
        // Do something useful here, and generate a new JSON Object as a reply.
        JsonObject replyJSON = generateReplyMessage(json);
        message.reply(replyJSON);
    }
}
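
And for completeness, this is roughly how another Verticle (or service) would call the consumer above over the EventBus- a minimal sketch reusing the a_path address; the request text is just an example:

// From some other Verticle: send a request to "a_path" and handle the reply.
JsonObject request = new JsonObject().put("text", "Hi there, how are you?");
vertx.eventBus().send("a_path", request, reply -> {
    if (reply.succeeded()) {
        JsonObject answer = (JsonObject) reply.result().body();
        System.out.println("Bot replied: " + answer.encode());
    }
});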

 

 

 

 

 

 
