Distributed Intelligent Merchant Payment Routing Orchestration Engine With Akka Cluster, Kafka, Go, AI, Drools for Scalability & HA. FinTech Use Case.

The emergence of e-commerce and technology-led initiatives is a key factor fueling digital payment market trends. The credit card industry is massive: credit card purchase volume now exceeds $3 trillion per year. The use of cash and checks as payment methods is falling off a cliff in favor of digital payments, in particular credit and debit cards.

The online payments world has been evolving rapidly in recent years. Local and global Payment Service Providers, also called PSPs (e.g., PayPal, Stripe, Chase Paymentech, Adyen, Alipay, and many more), are becoming more and more commonly used, and new regulations (especially PSD2) do not make it easy to accept payments online with great efficiency.

In the fintech industry, a merchant service is a processing service that allows merchants to accept transactions through credit and debit cards, as well as mobile payments. Merchant services can also encompass payment gateways, gift cards, loyalty programs, online transaction processing, and more. The merchant typically integrates with Payment Service Providers (PSPs): it routes the payment request to a PSP, which in turn routes it to a card network such as Visa or Mastercard.

The diagram below depicts the credit card payment merchant workflow.


[Diagram: credit card merchant payment workflow]

In today's e-payment markets, merchants are eager to integrate with multiple payment service providers, from local to regional to global, in order to avoid technical downtime and better serve their customers. Merchants today need to do a lot more than they did just five years ago. From advancing technologies to cross-channel payments, it is tough not only to deliver on it all but also to do it well. They need to integrate with multiple payment service providers and build intelligent payment routing capabilities into their payment infrastructure. One of the critical business decisions most merchants face today is whether to build this capability in-house or to integrate with a Payment Orchestration Platform SaaS provider. From an economy-of-scale standpoint, merchants are better off integrating with a Payment Orchestration SaaS provider to optimize their intelligent payment routing capabilities.

What Is Intelligent Merchant Payment Routing?

Intelligent payment routing is the ability of a payment engine to dynamically route transactions to the acquiring bank or Payment Service Provider (PSP) endpoint that is most likely to approve a transaction, applying the merchant's business rules intelligently. The rules include retrying declined transactions, routing to the lowest-fee provider, routing by time of day, routing by BIN, routing by the regional location of the issuing bank, routing by amount, and so on. This ability to route transactions dynamically is what distinguishes an intelligent payment routing engine from a traditional payment gateway. The routing rules can be built around a machine learning (AI) model, payment analytics, or a custom business rule engine configuration such as Drools. For example, a Visa card issued by a Canadian bank might be routed to the PSP with the highest historical approval rate for Canadian BINs, and retried through a lower-cost provider if it is declined. The engine routes transactions to the best acquiring bank to ensure maximum payment success rates. In doing so, global merchants can accept payments from customers anywhere in the world, increase their approved transaction rates, and grow their revenue.

In this article, we are going to build a distributed intelligent merchant payment routing orchestration engine at scale using various technology stacks, such as Golang, Kafka, Akka Cluster, and Scala, that can process millions of payment instructions with low latency and high throughput.

Use Case

Lasgidi Pay Inc. is a global merchant payment orchestration SaaS provider. It provides cutting-edge, all-in-one payment integration solutions that help merchant businesses, from startups to enterprises, simplify e-payments and integration with acquirers and Payment Service Providers. It provides a unified payment solution in a global economy. Lasgidi Pay Inc. wants to rebuild its payment infrastructure with intelligent payment routing capability to reduce the rate of failed payment transactions.

Below are the functional and non-functional requirements:

  • Ability to route merchant payments globally to preferred Payment Service Provider (PSP) for the primary market of the targeted Issuing Bank of the cardholder.
  • Ability to route merchant payments to the Payment Service Provider (PSP) that offers the merchant the lowest processing fee.
  • Ability to route merchant payments according to the merchant's preferred settlement and acquiring bank.
  • Ability to route the merchant payment by a set of business rules.
  • A reactive, scalable system that can process transactions with high throughput and low latency.
  • Distributed across multiple nodes in clusters for high availability and scalability.
  • An acceptable latency of 20-35 ms for sending a request from the merchant to our Merchant Gateway endpoint service.

With the functional and non-functional requirements above, we need to build a flexible payment architecture that has intelligent payment routing capabilities and, at the same time, is highly scalable.

In any payment system, routing can be thought of as the nervous system of the payments ecosystem, sending transactions to acquirers and Payment Service Providers (PSPs) for processing to ensure the maximum payment success rate. It is one of those critical behind-the-scenes processes that has a significant impact on scalability, customer experience, and a merchant's bottom line and revenue growth. An intelligent payment routing engine must adopt an event-driven, distributed architecture for scalability and high availability.

To build a concurrent, event-driven, distributed solution at scale, Kafka, Golang, and the Akka framework are a perfect combination. Akka Cluster provides great support for building distributed applications. Akka has been benchmarked at around 50 million messages per second on a single machine, which is mind-blowing. You can read more about the performance benchmark on the Let It Crash blog.

In our use case, each Payment Service Provider will have its own cluster of worker nodes processing transactions for maximum scalability and throughput. Akka Cluster is great for building reliable, distributed payment systems.

Capacity Estimation and Constraints

  • Let's assume 100 million transaction hits per day, with up to 2 million transactions per merchant daily (see the back-of-the-envelope check after this list).
  • A peak of 10,000 transactions per second from merchants via our Merchant API Gateway service.
  • We integrate with many Payment Service Providers (PayPal, Stripe, Alipay, Adyen, etc.) across multiple regions: North America, Europe, the Middle East, Africa, and Asia.
  • 10 worker nodes per cluster per Payment Service Provider or Acquirer to process transactions, with more nodes added as traffic grows.
  • Each Payment Service Provider has a dedicated Kafka topic with more partitions than there are nodes in its cluster, so the load is spread across the topic partitions for high availability.
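
A quick back-of-the-envelope check on these numbers: 100 million transactions per day averages out to roughly 100,000,000 / 86,400 ≈ 1,160 transactions per second, so the 10,000 TPS figure represents peak traffic of roughly 8-9x the daily average. At 2 million transactions per merchant per day, 100 million daily transactions correspond to about 50 high-volume merchants. If a single PSP cluster of 10 worker nodes had to absorb the full 10,000 TPS peak, each node would handle about 1,000 transactions per second, which is why the per-PSP Kafka topics are over-partitioned relative to the node count: it leaves headroom to add consumers without repartitioning.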

High-Level Architecture And System Design

At a high level, there are two possible scenarios for how transactions reach our orchestration engine from the merchant: via the Merchant API Gateway, or via FTP for batch transaction ingestion. For simplicity, we will design for scenario one, the Merchant API Gateway service. We validate and ingest the data into Kafka; our Routing Engine consumes each message and routes it to the dedicated Kafka topic for the appropriate Payment Service Provider's Akka cluster, where a cluster-aware router distributes the workload across multiple worker nodes.

The diagram below shows the high-level architecture of our solution.

[Diagram: high-level architecture of the payment routing orchestration engine]

Let's dive deeper into the details of the sub-systems.

Sub-Systems Design

Merchant Gateway Service: the Merchant API gateway service is the single entry point for all incoming payments from merchants. It is implemented in Golang. The API gateway exposes a RESTful endpoint and fans each request out to a concurrent goroutine worker, which publishes the message to Kafka for onward processing by the Routing Engine. A goroutine is a lightweight thread of execution; each request is handled by a dedicated goroutine for faster processing. The Golang worker pool fans incoming requests out over several concurrent executions, giving us high throughput and low-latency responses, with an average response time of 35 ms per request. It uses ants, a high-performance, open-source goroutine pool library written by Andy Pan.

[Diagram: Merchant Gateway Service with goroutine worker pool]

Below are code snippets from the merchant gateway service implementation.

package api

import (
   "bigdataconcept/fintech/intelligent/payment/routing/gateway/domain"
   "bigdataconcept/fintech/intelligent/payment/routing/gateway/infracstructure"
   "encoding/json"
   "github.com/panjf2000/ants/v2"
   log "github.com/sirupsen/logrus"
   "io/ioutil"
   "net/http"
)

// RequestResponse pairs an incoming merchant payment request with the channel
// on which the pooled worker goroutine delivers its response.
type RequestResponse struct {
   incomingRequest  *domain.MerchantPaymentRequest
   outgoingResponse chan *domain.MerchantPaymentResponse
}

// MerchantGatewayService fans incoming HTTP requests out to a fixed-size
// goroutine worker pool (ants) that publishes each payment to Kafka.
type MerchantGatewayService struct {
   RequestProcessorPool *ants.PoolWithFunc
}

func NewMerchantGatewayService(poolSize int, publisher *infracstructure.KafkaPublisher) *MerchantGatewayService {
   pool, err := ants.NewPoolWithFunc(poolSize, func(payload interface{}) {
      requestResponse, ok := payload.(*RequestResponse)
      if !ok {
         return
      }
      // Publish the payment to Kafka and reply on the response channel.
      if err, reply := infracstructure.ProcessRequest(requestResponse.incomingRequest, publisher); err != nil {
         log.Error(err)
         requestResponse.outgoingResponse <- &domain.MerchantPaymentResponse{ResponseMessage: "Internal Server Error Try again", ResponseCode: "99", TransactionRef: "XXXXXXXXXX"}
      } else {
         requestResponse.outgoingResponse <- reply
      }
   })
   if err != nil {
      log.Fatal("unable to create goroutine pool: ", err)
   }
   return &MerchantGatewayService{RequestProcessorPool: pool}
}

// SendMerchantPaymentRequest is the HTTP handler for incoming merchant payments.
// Each request is handed to a pooled goroutine, and the handler blocks until the
// worker replies on the response channel.
func (merchantGatewayService *MerchantGatewayService) SendMerchantPaymentRequest(w http.ResponseWriter, r *http.Request) {
   defer r.Body.Close()
   requestBody, err := ioutil.ReadAll(r.Body)
   if err != nil {
      log.Error(err)
      http.Error(w, "request error", http.StatusInternalServerError)
      return
   }
   var merchantPayment = &domain.MerchantPaymentRequest{}
   if err := json.Unmarshal(requestBody, merchantPayment); err != nil {
      log.Error(err)
      http.Error(w, "request error", http.StatusBadRequest)
      return
   }
   requestResponse := &RequestResponse{incomingRequest: merchantPayment, outgoingResponse: make(chan *domain.MerchantPaymentResponse)}
   if err := merchantGatewayService.RequestProcessorPool.Invoke(requestResponse); err != nil {
      log.Error(err)
      http.Error(w, "throttle limit error", http.StatusTooManyRequests)
      return
   }
   response := <-requestResponse.outgoingResponse
   responseMsg, err := json.Marshal(response)
   if err != nil {
      log.Error(err)
      http.Error(w, "response error", http.StatusInternalServerError)
      return
   }
   w.Header().Set("Content-Type", "application/json")
   w.Write(responseMsg)
}
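
The KafkaPublisher that infracstructure.ProcessRequest uses is not shown above. Below is a minimal sketch of what such a publisher could look like using the Shopify/sarama client; the broker list, topic name, and the idea of keying messages by payment reference are assumptions for illustration, not the repository's actual implementation.

package infracstructure

import (
   "encoding/json"

   "github.com/Shopify/sarama"
)

// KafkaPublisher wraps a synchronous Kafka producer and the topic the gateway publishes to.
// This is an illustrative sketch, not the repository's actual code.
type KafkaPublisher struct {
   producer sarama.SyncProducer
   topic    string
}

// NewKafkaPublisher connects to the given brokers and returns a publisher for the topic.
func NewKafkaPublisher(brokers []string, topic string) (*KafkaPublisher, error) {
   config := sarama.NewConfig()
   config.Producer.RequiredAcks = sarama.WaitForAll // wait for all in-sync replicas
   config.Producer.Return.Successes = true          // required by SyncProducer
   producer, err := sarama.NewSyncProducer(brokers, config)
   if err != nil {
      return nil, err
   }
   return &KafkaPublisher{producer: producer, topic: topic}, nil
}

// Publish serializes the payload to JSON and sends it to the configured topic,
// keyed so that all events for one payment reference land on the same partition.
func (p *KafkaPublisher) Publish(key string, payload interface{}) error {
   value, err := json.Marshal(payload)
   if err != nil {
      return err
   }
   _, _, err = p.producer.SendMessage(&sarama.ProducerMessage{
      Topic: p.topic,
      Key:   sarama.StringEncoder(key),
      Value: sarama.ByteEncoder(value),
   })
   return err
}

With a publisher of this shape, infracstructure.ProcessRequest would marshal the merchant payment, call Publish, and build the MerchantPaymentResponse that the pooled goroutine sends back to the HTTP handler.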


Payment Routing And Orchestration Engine: dynamic payment transaction routing makes it possible to switch transactions to another provider should a payment processor experience an outage or a high decline rate. Routing also allows for cost savings, because merchants can set rules that dictate routing via the lowest-cost payment service provider.

[Diagram: Payment Routing and Orchestration Engine]

Intelligent routing can be implemented using data insights from an artificial intelligence / machine learning model, or with a business rule framework such as Drools.

With the machine learning approach, we keep track of transaction data and train a model on historical transaction and payment processing data. The model can be exposed via an API that the Routing Engine consumes to route each payment to the Payment Service Provider or Acquirer with the highest payment success rate. For simplicity, we design our routing engine using Drools-style business rules.

The business rule engine provides stateless execution of decision logic. It is implemented with the Grule Golang library; you can read more about Grule in the Grule Golang documentation. The routing engine is driven by configuration. For a given decision request, an incoming merchant payment request from the Kafka consumer is sent to the rules engine, and the matching rule is applied to the request. When the evaluation completes, the rules engine returns the Payment Service Provider channel, which maps to a destination Kafka topic. The outbound payment is published to that topic for onward processing by the payment processor Akka cluster.

Below are snippets from the Grule rule file.

rule RoutePaymentByCardTypeRule1 "When CardType is VISA." salience  1 {
    when
        CardInfo.CardType  == "VISA"
    then
        CardInfo.PSPChannel = "StripeChannelTopic";
         Retract("RoutePaymentByCardTypeRule1");

}
rule RoutePaymentByCardTypeRule2 "When CardType is Master." salience  1 {
    when
        CardInfo.CardType  == "MasterCard"
    then
        CardInfo.PSPChannel = "StripeChannelTopic";
         Retract("RoutePaymentByCardTypeRule2");
}

rule RoutePaymentByIssueBankCountry1 "When Issuing Bank Country is Canada." salience  1 {
    when
        CardInfo.IssueCountry  == "Canada"
    then
        CardInfo.PSPChannel = "StripeChannelTopic";
         Retract("RoutePaymentByIssueBankCountry1");
}

rule RoutePaymentByIssueBankCountry2 "When Issuing Bank Country is Nigeria." salience  10  {
    when
        CardInfo.IssueCountry  == "Nigeria"
    then
        CardInfo.PSPChannel = "StripeChannelTopic";
         Retract("RoutePaymentByIssueBankCountry2");
}
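
The body of NewDroolRoutingRule is not reproduced in this article. The fragment below is a rough sketch of how it might load the .grl file above with the hyperjumptech/grule-rule-engine library, insert the card details as the CardInfo fact the rules read, and return the PSPChannel the fired rule writes back. The CardInfo struct, the field names on domain.MerchantPaymentRequest, and the panic-on-startup error handling are illustrative assumptions rather than the repository's actual code, and recent Grule releases also return an error from some of these calls.

package rule

import (
   "bigdataconcept/fintech/intelligent/payment/routing/domain"

   "github.com/hyperjumptech/grule-rule-engine/ast"
   "github.com/hyperjumptech/grule-rule-engine/builder"
   "github.com/hyperjumptech/grule-rule-engine/engine"
   "github.com/hyperjumptech/grule-rule-engine/pkg"
)

// CardInfo is the fact the rules read from and write the selected PSPChannel into.
type CardInfo struct {
   CardType     string
   IssueCountry string
   PSPChannel   string
}

// DroolRoutingRule evaluates the Grule rule file against each payment (illustrative sketch).
type DroolRoutingRule struct {
   knowledgeBase *ast.KnowledgeBase
}

func NewDroolRoutingRule(ruleFile string) *DroolRoutingRule {
   library := ast.NewKnowledgeLibrary()
   ruleBuilder := builder.NewRuleBuilder(library)
   // Compile the .grl rule file into the knowledge library once, at startup.
   if err := ruleBuilder.BuildRuleFromResource("PaymentRouting", "0.0.1", pkg.NewFileResource(ruleFile)); err != nil {
      panic(err)
   }
   return &DroolRoutingRule{knowledgeBase: library.NewKnowledgeBaseInstance("PaymentRouting", "0.0.1")}
}

// ExecutePaymentRoutingRule inserts the card details as the "CardInfo" fact, fires the
// matching rule, and returns the PSP channel (destination Kafka topic) the rule selected.
func (d *DroolRoutingRule) ExecutePaymentRoutingRule(merchantPayment *domain.MerchantPaymentRequest) (string, error) {
   cardInfo := &CardInfo{
      CardType:     merchantPayment.Card.CardType,     // hypothetical field names on the domain type
      IssueCountry: merchantPayment.Card.IssueCountry,
   }
   dataCtx := ast.NewDataContext()
   if err := dataCtx.Add("CardInfo", cardInfo); err != nil {
      return "", err
   }
   if err := engine.NewGruleEngine().Execute(dataCtx, d.knowledgeBase); err != nil {
      return "", err
   }
   return cardInfo.PSPChannel, nil
}

Because the knowledge base is compiled once at startup, each routing decision is just a fact insertion plus a rule evaluation, which keeps the per-transaction overhead low.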

The rule engine uses a factory pattern to determine which implementation to use at runtime. We can load either the machine learning model or the Drools-style business rule implementation by passing the rule type in the configuration at application startup.

Below is a code snippet of the rule factory pattern, written in Golang.

package rule

import "bigdataconcept/fintech/intelligent/payment/routing/domain"

// RuleType selects which routing-rule implementation is loaded at startup.
type RuleType int

const (
   MachineLearningRule RuleType = 1 << iota
   DroolBusinessRule
)

// RoutingRuleService is the contract every routing-rule implementation must satisfy:
// given a merchant payment request, return the destination PSP channel (Kafka topic).
type RoutingRuleService interface {
   ExecutePaymentRoutingRule(merchantPayment *domain.MerchantPaymentRequest) (string, error)
}

// NewRoutingRuleService is the factory that returns the configured implementation,
// defaulting to the Grule-based business rules.
func NewRoutingRuleService(rule RuleType) RoutingRuleService {
   switch rule {
   case DroolBusinessRule:
      return NewDroolRoutingRule("cardPaymentRoutingRule.grl")
   case MachineLearningRule:
      return NewMachineLearningRoutingRule("MLPModel path")
   default:
      return NewDroolRoutingRule("cardPaymentRoutingRule.grl")
   }
}
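
Putting the factory and the rule file together, a sketch of the routing engine's consume-route-publish loop might look like the following, using a sarama consumer group: it reads incoming merchant payments from an ingress topic, asks the RoutingRuleService for the destination PSP channel, and republishes the payload to that topic for the corresponding Akka cluster to pick up. The broker address, topic names, and consumer group id here are illustrative assumptions, not the repository's configuration.

package main

import (
   "context"
   "encoding/json"
   "log"

   "bigdataconcept/fintech/intelligent/payment/routing/domain"
   "bigdataconcept/fintech/intelligent/payment/routing/rule"

   "github.com/Shopify/sarama"
)

// routingHandler consumes inbound payments, applies the routing rules, and forwards
// each payment to the Kafka topic (PSP channel) the rule engine selects.
type routingHandler struct {
   rules    rule.RoutingRuleService
   producer sarama.SyncProducer
}

func (h *routingHandler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (h *routingHandler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (h *routingHandler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
   for msg := range claim.Messages() {
      var payment domain.MerchantPaymentRequest
      if err := json.Unmarshal(msg.Value, &payment); err != nil {
         log.Println("skipping malformed payment:", err)
         sess.MarkMessage(msg, "")
         continue
      }
      // Ask the rule engine (Grule or ML model) for the destination PSP channel.
      pspTopic, err := h.rules.ExecutePaymentRoutingRule(&payment)
      if err != nil {
         log.Println("routing rule failed:", err)
         continue
      }
      // Forward the original payload to the PSP's dedicated topic for the Akka cluster to process.
      if _, _, err := h.producer.SendMessage(&sarama.ProducerMessage{Topic: pspTopic, Value: sarama.ByteEncoder(msg.Value)}); err != nil {
         log.Println("publish to PSP topic failed:", err)
         continue
      }
      sess.MarkMessage(msg, "")
   }
   return nil
}

func main() {
   config := sarama.NewConfig()
   config.Version = sarama.V2_1_0_0 // consumer groups require the broker version to be set
   config.Producer.Return.Successes = true
   brokers := []string{"localhost:9092"} // assumption: local broker for illustration

   producer, err := sarama.NewSyncProducer(brokers, config)
   if err != nil {
      log.Fatal(err)
   }
   group, err := sarama.NewConsumerGroup(brokers, "payment-routing-engine", config)
   if err != nil {
      log.Fatal(err)
   }
   handler := &routingHandler{rules: rule.NewRoutingRuleService(rule.DroolBusinessRule), producer: producer}
   for {
      // Consume blocks until a rebalance or error; loop to keep the engine running.
      if err := group.Consume(context.Background(), []string{"MerchantPaymentInboundTopic"}, handler); err != nil {
         log.Println("consumer group error:", err)
      }
   }
}

Running several instances of this process with the same consumer group id spreads the ingress partitions across them, which is the same competing-consumers idea used on the Akka processing side later in this article.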


Payment Processor With Akka Cluster: to handle over 100 million transaction hits per day, we need to build our payment processing engine in a reactive architectural style. Reactive architecture patterns allow us to build self-monitoring, self-scaling, self-growing, and self-healing systems that can react to both internal and external conditions without human intervention. We want to process merchant payment transactions using a work distribution strategy for scalability and high availability.

[Diagram: Payment Processor with Akka Cluster]

The processing engine is implemented using an Akka cluster-aware router. Router actors handle the logistics of sending messages to other actors that may be distributed across the cluster. A router actor receives messages, but it does not handle them itself; it forwards each message to a worker actor. These worker actors are often referred to as routees. The router actor is responsible for routing messages to routee actors based on a routing algorithm; in our case, we use a round-robin strategy.

Outbound merchant payment requests are consumed from the Payment Service Provider's dedicated topic partitions. The Kafka consumer is implemented using the Competing Consumers pattern: multiple concurrent consumers process messages, each from its own topic partition. This lets the system process multiple messages concurrently, optimizing throughput, improving scalability and availability, and balancing the workload.

Below is a code snippet of Kafka Subscriber written in Scala.

package com.bigdataconcept.payment.processor

import akka.Done
import akka.actor.{Actor, ActorLogging, ActorRef, Props}
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import com.bigdataconcept.payment.domain.PaymentDomain.MerchantPaymentRequestCommand
import com.google.gson.Gson
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer

import scala.concurrent.{ExecutionContext, Future}

object OutboundMerchantPaymentKafkaSubscriber {

  def props(kafkaTopic: String, kafkaConsumerGroupId: String, kafkaConsumerInstanceGroupId: String, paymentRouter: ActorRef)(implicit mat: ActorMaterializer): Props =
    Props(new OutboundMerchantPaymentKafkaSubscriber(kafkaTopic, kafkaConsumerGroupId, kafkaConsumerInstanceGroupId, paymentRouter))
}

/**
 * @author Oluwaseyi Otun
 *         OutboundMerchantPaymentKafkaSubscriber is a Kafka consumer that feeds the
 *         consumed merchant payment commands to the payment router actor.
 */
class OutboundMerchantPaymentKafkaSubscriber(kafkaTopic: String, kafkaConsumerGroupId: String, kafkaConsumerInstanceGroupId: String, paymentRouter: ActorRef)(implicit val mat: ActorMaterializer) extends Actor with ActorLogging {

  implicit val dispatcher: ExecutionContext = context.system.dispatcher

  override def preStart(): Unit = {
    log.info("Start consuming Merchant Payment Event From Kafka {}", kafkaTopic)
    startMerchantPaymentConsumerEvent()
  }

  def receive: Receive = {
    case _ => // nothing to do: all work happens in the Kafka consumer stream
  }

  def startMerchantPaymentConsumerEvent(): Unit = {
    log.info("Start Consuming Merchant Payment Events from Kafka Topic {} GroupId {}", kafkaTopic, kafkaConsumerGroupId)
    val consumerSettings = ConsumerSettings.create(context.system, new StringDeserializer, new StringDeserializer)
      .withGroupId(kafkaConsumerGroupId)
      .withGroupInstanceId(kafkaConsumerInstanceGroupId)
    Consumer.plainSource(consumerSettings, Subscriptions.topics(kafkaTopic))
      .mapAsync(5)(sendMerchantPaymentRequestToPSPActor)
      .runWith(Sink.ignore)
  }

  def sendMerchantPaymentRequestToPSPActor(event: ConsumerRecord[String, String]): Future[Done] = {
    val payload = event.value()
    log.info("Handle Incoming Merchant Payment Payload Event {}", payload)
    val merchantPaymentCmd = new Gson().fromJson(payload, classOf[MerchantPaymentRequestCommand])
    paymentRouter ! merchantPaymentCmd
    log.info("Merchant payment sent to Payment Service Provider worker actor pool {} Payment Reference {}", merchantPaymentCmd, merchantPaymentCmd.paymentRequest.merchantPaymentReference)
    Future.successful(Done)
  }
}

The router actor, using the round-robin strategy, acts as a load balancer: it spreads messages evenly across all available nodes in the cluster. There are three or more worker nodes per cluster, and we can scale up and down by adding or removing nodes. Each Payment Service Provider has its own dedicated Akka cluster. Akka Cluster lends itself naturally to high-availability scenarios.

Below is the sequence diagram of the Payment Processor workflow.

[Sequence diagram: Payment Processor workflow]

Each Payment Service Provider integration implements the IPaymentServiceProvider interface, which defines the contract for the service provider implementation classes. In our case, the interface defines a single method for sending a payment to the service provider.

The code snippets are written in Scala.

package com.bigdataconcept.payment.service.provider

import com.bigdataconcept.payment.domain.PaymentDomain.MerchantPaymentRequestCommand

trait IPaymentServiceProvider {

  def sendPaymentRequestToPaymentServiceProvider(merchantPaymentRequestCommand: MerchantPaymentRequestCommand) :Unit
}

The service provider implementation class is an Akka actor that implements IPaymentServiceProvider. There is one implementation per Payment Service Provider integration: PayPal, Adyen, Stripe, etc. One of the fundamental principles of Akka is that blocking or synchronous calls inside an actor while processing a message should be avoided. As we already know, blocking or synchronous calls tie up threads and resources, which results in scalability and performance issues. If we need to make a blocking call from within an actor, we wrap it in a Scala Future. By default, Futures and Promises are non-blocking, making use of callbacks instead of typical blocking operations.

Futures revolve around ExecutionContexts, which are responsible for executing computations. An ExecutionContext is similar to an Executor: it is free to execute computations in a new thread, in a pooled thread, or in the current thread. Every call to a PSP API is wrapped in a Scala Future to avoid blocking calls.

Below is a code snippet written in Scala.

def makeAPIToPayPal(merchantPaymentRequestCmd: PaymentDomain.MerchantPaymentRequestCommand): Future[PaymentResponseMessage] = {
  log.info("Sending Payment To Braintree Gateway Payment Reference {}", merchantPaymentRequestCmd.paymentRequest.merchantPaymentReference)
  val merchantPaymentReference = merchantPaymentRequestCmd.paymentRequest.merchantPaymentReference
  // Build the gateway payment request from the merchant's card details.
  val request = new PaymentMethodRequest()
  request.cvv(merchantPaymentRequestCmd.paymentRequest.card.cvc)
    .cardholderName(merchantPaymentRequestCmd.paymentRequest.card.cardHolder.name + " " + merchantPaymentRequestCmd.paymentRequest.card.cardHolder.surname)
    .expirationMonth(merchantPaymentRequestCmd.paymentRequest.card.expiryMonth)
    .expirationYear(merchantPaymentRequestCmd.paymentRequest.card.expiryDate)
    .number(merchantPaymentRequestCmd.paymentRequest.card.cardNumber)
  // Run the blocking gateway call inside a Future so the actor thread is never blocked.
  Future {
    Gateway.getGateWayInstance(config).paymentMethod().create(request)
    PaymentResponseMessage(responseCode = "00", paymentReference = merchantPaymentReference, responseMessage = "")
  }.recover {
    case _: AuthenticationException =>
      PaymentResponseMessage(responseCode = "99", paymentReference = merchantPaymentReference, responseMessage = "Authentication Failed")
    case _: AuthorizationException =>
      PaymentResponseMessage(responseCode = "99", paymentReference = merchantPaymentReference, responseMessage = "Authorization Failed")
    case _ =>
      PaymentResponseMessage(responseCode = "99", paymentReference = merchantPaymentReference, responseMessage = "Failed")
  }
}

With Akka, we can process massive numbers of concurrent requests simultaneously, which gives us high throughput, scalability, and high availability without performance bottlenecks.

The full source code can be downloaded from my GitHub repository below. Feel free to clone the repo, modify it, play around, and enhance the source code.

Takeaways

Intelligent payment routing allows merchants to increase their global authorization rates through cascading and smart retries, lower payment fees by routing each transaction to the best PSP straight away, and improve user experience by decreasing payment failures and manual retries. One of the major benefits of working with multiple providers is the ability to use cross-PSP retries for failing transactions.

Akka provides many of the core non-functional requirements for payments processing "out of the box". Akka systems are, by design, responsive, resilient, elastic, and message-driven. For a system to always be responsive, it must be both resilient and elastic. Akka also makes it possible to build systems that can easily be scaled up and down to handle peaks and valleys in activity. The system can be scaled horizontally by adding partitions to the payment service provider Kafka topics and standing up additional nodes under heavy transaction loads.

Golang is an ideal language for building massively concurrent solutions at scale, thanks to its excellent combination of concurrency primitives, safety, and simplicity.

In the fintech industry today, many organizations have already adopted Akka for processing at scale, e.g., PayPal.

PayPal handles a billion hits a day with a system that might traditionally run on hundreds of VMs, shrunk down to run on 8 VMs. It stays responsive even at 90% CPU, at transaction densities PayPal had never seen before, with jobs that take one-tenth the time, while reducing costs and allowing for much better organizational growth without growing the compute infrastructure accordingly. PayPal developed this system using the actor model based on Akka.

PayPal told their story here: squbs: A New, Reactive Way for PayPal to Build Applications. They open-sourced squbs, and you can find it here: squbs on GitHub.

https://www.lightbend.com/blog/how-reactive-systems-help-paypal-squbs-scale-to-billions-of-transactions-daily

Thank you for reading.

Oluwaseyi Otun is a backend software engineer and data engineer (Akka, Golang, Scala, Java). He is a big data and distributed systems enthusiast with a special interest in software architectural design patterns, microservices, large-scale distributed systems, in-memory and streaming computing, and big data analytics. He loves learning the internals of systems and exploring what happens under the covers. He lives in one of the Atlantic provinces in Canada.

Comments

Nikolay Kudinov

Senior backend developer

4 months ago

Why don't you store a payment in the database immediately upon receiving the request? By doing so, you can acknowledge receipt back to the user right away. This ensures that the payment data is preserved, preventing any potential loss.

Olaseni Alabede

Product Strategy @ Visa

4 years ago

Great Article Seyi!

Chikelue Oji

Senior Software Engineer, AI Enthusiast.

4 years ago

This is an awesome and valuable writeup. As a matter of fact, this touches my current work somehow, however, on a different technology stack. From the perspective of a global merchant, there are parts that we already have in place and some that we are looking at implementing to maximise the number of authorised payments and hence, the reduction of payment friction. However, I recommend that you place a clear and easy to follow readme on the Github repo for anyone wanting to test out your proof-of-concept.

Hafis Bello, MBA, PhD

Digital Payments I Cards & Channels I Waqf I Agency Banking I Financial Inclusion I Non-Interest Finance

4 years ago

Good effort and analysis there, Seyi. Surprisingly, I did not know about lasgidipay and its operations. Our operations have leveraged other merchant payment providers rather than build one internally. As you rightly indicated, the decision to leverage is to ensure we focus operations on our core areas of business strength. You may want to review activities of other payment providers, particularly in the Nigerian electronic space, and indicate possible areas of improvement. Also, infrastructure provision contributes to some of the online payment failures. It is hoped that the new fibre optic project planned for wider areas across the continent will improve that performance. Good job and keep it up

Adewale Azeez

Technical Director at Immaculate High-Tech Systems Ltd

4 years ago

Hi Seyi, this is an awesome write-up, an insight for anyone who wants to embark on a similar mission of developing a payment system. Nigeria, and Africa in general, remains untapped crude oil when it comes to the use of technology, especially software systems. Solutions like the one you defined above are still greatly underused here; most transactions still go through cash. I blame the government for this; in fact, business with governments (at all levels) still gets conducted through cash, and infrastructure is not in place. Painfully, the government and people in the IT industries are losing so many opportunities in terms of revenue. I hope we will wake up one day and do the needful to realize this dream. Well done Seyi, I'm not surprised at your progress and success, please keep it up.
