Event Driven Architecture for Web3 Enterprise dApp Design

From Legacy Systems to Decentralized Networks

Encouraging Knowledge Exchange Between Legacy Web2 Engineers and Web3 Devs to Harmonize Development Cultures

In conventional Web2 enterprise solutions, system architects shape the design of microservices. To keep systems scalable and decoupled, they often implement a separate database per service. This ensures service autonomy, scalability, and easier maintenance, though it introduces challenges in data consistency and coordination.

As fintech leaders like Mastercard, PayPal, and Stripe, along with asset management giants like BlackRock, adopt crypto stablecoins and migrate to Web3, the integration of data sources becomes increasingly layered and distributed. [Sources 1, 2, 3, 4]

As more industry giants join the Web3 movement, dApps are evolving beyond their initial use for startup MVPs. They are now adopting enterprise architectural best practices to meet the complex requirements of large-scale, high-performance businesses.

The challenge arises when integrating off-chain databases and on-chain data sources to function as a unified application (dApp). Syncing data between these different layers can lead to issues, as Web2 systems typically record and log user actions in real-time. However, on-chain modules may experience delays, since blockchain transactions need time for verification, processing, and commitment. This discrepancy can cause synchronization issues between off-chain and on-chain data.

Event-driven architecture (EDA) bridges the gap between Web2 and Web3 by offering a familiar, scalable pattern for both legacy system architects and Web3 developers. As enterprises adopt Web3, EDA becomes a common ground, enabling both groups to design scalable, flexible, and consistent systems that can handle the complexities of modern, decentralized applications.

Scenario

Consider a decentralized application (dApp) for a Decentralized Autonomous Organization (DAO), where new members are enrolled through a voting process involving existing members. This governance is managed by smart contracts on the blockchain, where proposals and votes are recorded on-chain. However, because smart contracts are deployed on a public chain, sensitive data, such as members' "Know Your Client" (KYC) information, should not be stored on-chain. To address this, the dApp may include a Web2 backend layer that manages user credentials, profiles, and custodial wallet operations.

Additionally, smart contracts often involve various transactions, and enterprise system administrators may prefer not to manually search for member-related proposal transactions using third-party chain explorers. A best practice in the industry is to aggregate data from different sources and store it in a self-owned database, following a defined retention policy. In this scenario, the DAO dApp may have a DAO-Service node within the Web2 backend, which stores only the proposal data and references to on-chain transaction hashes, enabling easier filtering and efficient auditing.
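To make the retention idea concrete, the off-chain records in this scenario might look like the TypeScript sketch below. The field names are hypothetical; the point is that only references (the transaction hash and, later, the on-chain proposal ID) are kept alongside the business data, while KYC material stays entirely off-chain.

```typescript
// Hypothetical off-chain records for the DAO dApp scenario.
// Only on-chain references (tx hash, proposal ID) are stored; KYC data never goes on-chain.

interface DaoProposalRecord {
  txHash: string;                    // receipt of the on-chain proposal transaction
  onChainProposalId: string | null;  // filled in later, once the Solidity event is observed
  title: string;
  createdAt: Date;
}

interface PendingMemberProfile {
  userId: string;
  kycDocumentRef: string;            // pointer to private off-chain KYC storage
  onChainProposalId: string | null;  // null until the Audit Trail Service syncs it
}
```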

Fusing Solidity Events with RabbitMQ Events in a dApp

Asynchronous Web3 On-Chain and Off-Chain Data Sync in the Publish/Subscribe Model

From the scenario above, we can assume that a dApp may not be just a frontend integrated with smart contracts. It may have dedicated Web2 backend services as well. From Figure 1 it can be seen that there are three different services, each with a single responsibility within the dApp.

  1. Audit Trail Service: This service manages background tasks for real-time on-chain event monitoring and executes queries against the subgraph event indexer using cron jobs. These processes keep the dApp synchronized with Solidity smart contract events delivered by providers such as the Alchemy RPC blockchain node provider and The Graph's event indexing service. The service collects and processes on-chain events, formats them into Web2 DTOs, and publishes them to the other Web2 subscriber services through an event bus or exchange (a sketch of such a listener follows this list). Although it is possible to have every service listen for on-chain event calls directly, that approach has several limitations and fault-tolerance issues. One is the redundant background task in each service. Another arises when an RPC-based on-chain event call is missed by a service, which can leave the distributed databases out of sync. In contrast, in a Web2 pub/sub model, messages are queued and remain in the queue until the subscriber processes the event, ensuring off-chain and on-chain data synchronization across all services.
  2. DAO Service: While new membership proposals are signed directly by the proposer's wallet from the frontend, the submission process should not block the user experience, as on-chain transactions may take time to process. To handle this asynchronous task, a dedicated service (1. Audit Trail Service) is used. For traceability, the DAO Service can store the on-chain proposal's transaction hash as a receipt. Once the transaction is successfully completed, Solidity smart contract events notify the Audit Trail Service, which then re-transmits the information as a Web2 event to the other Web2 services. This allows the DAO Service to later retrieve relevant details, such as the on-chain proposal ID and voting period, and update its off-chain database using the transaction hash as a reference.
  3. User Management Service (UMS): Similarly, when a pending user profile is created in the off-chain database, the user's on-chain proposal ID for membership cannot be assigned immediately; at the off-chain level, this column remains null. As the Audit Trail Service performs the asynchronous task, it syncs the data using Solidity events. Like the DAO Service, the UMS also receives Web2 event calls from the Audit Trail Service to update the on-chain proposal ID column in the off-chain DB, since both the DAO Service and UMS subscribe to events from the same event bus or exchange (a sketch of both subscribers appears after Figure 1).
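As a sketch of item 1, the listener below subscribes to a Solidity event over a WebSocket RPC endpoint (such as one from Alchemy), reshapes it into a Web2 DTO, and publishes it to the exchange through @golevelup/nestjs-rabbitmq's AmqpConnection. The contract address, event signature, exchange, and routing key are assumptions for illustration, and the snippet assumes ethers v6.

```typescript
import { Injectable, OnModuleInit } from '@nestjs/common';
import { AmqpConnection } from '@golevelup/nestjs-rabbitmq';
import { Contract, WebSocketProvider } from 'ethers'; // assumes ethers v6

// Hypothetical ABI fragment for the DAO governance contract's proposal event.
const DAO_ABI = [
  'event ProposalCreated(uint256 proposalId, address proposer, uint256 voteStart, uint256 voteEnd)',
];

@Injectable()
export class AuditTrailListener implements OnModuleInit {
  constructor(private readonly amqp: AmqpConnection) {}

  onModuleInit() {
    // Assumed WebSocket RPC URL (e.g. an Alchemy endpoint) and contract address.
    const provider = new WebSocketProvider(process.env.RPC_WS_URL!);
    const dao = new Contract(process.env.DAO_ADDRESS!, DAO_ABI, provider);

    // Solidity event -> Web2 DTO -> published to the exchange for Web2 subscribers.
    dao.on('ProposalCreated', async (proposalId, proposer, voteStart, voteEnd, event) => {
      const dto = {
        onChainProposalId: proposalId.toString(),
        proposer,
        votingPeriod: { start: Number(voteStart), end: Number(voteEnd) },
        txHash: event.log.transactionHash, // ethers v6 payload; v5 exposes event.transactionHash
      };
      await this.amqp.publish('dao.events', 'proposal.created', dto);
    });
  }
}
```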

Figure 1 shows that both the DAO Service and UMS are subscribed to a common exchange. When the Audit Trail Service publishes an event, it is first received by the exchange, which then routes the message to one or more queues based on routing rules. In our case, both the DAO Service and UMS receive this event call. [Official Documentation Reference]

Figure 1: On-chain event data is re-transmitted among Web2 microservices in a publisher/subscriber pattern to sync on-chain and off-chain data.
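On the other side of Figure 1, each subscriber binds its own queue to the same exchange, so one published event fans out to both services. A minimal sketch, assuming @golevelup/nestjs-rabbitmq's @RabbitSubscribe decorator; the exchange, routing key, and queue names are illustrative.

```typescript
import { Injectable } from '@nestjs/common';
import { RabbitSubscribe } from '@golevelup/nestjs-rabbitmq';

// DAO Service: fills in the on-chain proposal ID against the stored tx hash.
@Injectable()
export class DaoProposalSubscriber {
  @RabbitSubscribe({
    exchange: 'dao.events',
    routingKey: 'proposal.created',
    queue: 'dao-service.proposal-created', // the DAO Service's own queue
  })
  async handleProposalCreated(dto: { txHash: string; onChainProposalId: string }) {
    // Look up the off-chain proposal by dto.txHash and update its on-chain ID here.
  }
}

// UMS: updates the pending member profile's on-chain proposal ID column.
@Injectable()
export class UmsProposalSubscriber {
  @RabbitSubscribe({
    exchange: 'dao.events',
    routingKey: 'proposal.created',
    queue: 'ums.proposal-created', // the UMS's own queue
  })
  async handleProposalCreated(dto: { onChainProposalId: string; proposer: string }) {
    // Match dto.proposer (or the tx hash) to a pending profile and set the proposal ID here.
  }
}
```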

Synchronous dApp Authorization Check in Web2 Producer/Consumer Model

Web3 doesn't replace Web2; it extends existing internet technology by adding decentralized functionality and ensuring immutability.

Traditional authentication and authorization methods are still needed in many cases, such as with custodial wallets in business environments. These wallets allow employees to perform on-chain transactions on behalf of the organization. As shown in Figure 2, the Web3 Proxy service abstracts Smart Contract functions and holds the organizational wallet secret to manage exceptions and interact with Ownable contracts (ERC-173). For such enterprise use cases, custodial wallets are necessary, making JWT-based authentication a key technology in the Web3 ecosystem for dApps.
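A minimal sketch of such a Web3 Proxy call, assuming ethers v6 and a hypothetical owner-only function on an Ownable (ERC-173) style contract; the function name and environment variables are illustrative. The organizational key never leaves this service.

```typescript
import { Contract, JsonRpcProvider, Wallet } from 'ethers'; // assumes ethers v6

// Hypothetical owner-only function on an Ownable (ERC-173) governance contract.
const ABI = ['function approveMember(address member) external'];

// The custodial (organizational) wallet secret stays inside the Web3 Proxy service;
// authenticated employees trigger calls, but never handle the key themselves.
const provider = new JsonRpcProvider(process.env.RPC_URL!);
const orgWallet = new Wallet(process.env.ORG_WALLET_PRIVATE_KEY!, provider);
const dao = new Contract(process.env.DAO_ADDRESS!, ABI, orgWallet);

export async function approveMemberOnBehalfOfOrg(member: string): Promise<string> {
  const tx = await dao.approveMember(member); // signed with the custodial wallet
  const receipt = await tx.wait();            // wait for on-chain confirmation
  return receipt!.hash;                       // returned to the caller as a receipt
}
```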

Figure 2: Authentication in REST API and Authorization in Message Queue for inter-service RBAC

In Figure 2, a client request is initiated from the frontend as a login request to the User Management Service (UMS); after authentication, the response provides a JWT authorization token. When the user then tries to access other services, this token is passed through a cookie, and each service delegates the authorization check to the UMS using a message pattern (TCP/AMQP). In response, each service receives an acknowledgement for Role-Based Access Control (RBAC).
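A minimal sketch of this delegation, assuming NestJS microservices over RabbitMQ (or TCP): the UMS exposes a message handler that verifies the JWT and returns the user's roles, and any other service sends the token and waits for the acknowledgement. The pattern name, injection token, and payload shape are illustrative.

```typescript
import { Controller, Inject, Injectable } from '@nestjs/common';
import { ClientProxy, MessagePattern, Payload } from '@nestjs/microservices';
import { JwtService } from '@nestjs/jwt';
import { firstValueFrom } from 'rxjs';

// UMS side: verifies the JWT and answers with the caller's roles.
@Controller()
export class AuthMessageController {
  constructor(private readonly jwt: JwtService) {}

  @MessagePattern('auth.verify') // illustrative pattern name
  verify(@Payload() data: { token: string }) {
    try {
      const claims = this.jwt.verify(data.token);
      return { valid: true, roles: claims.roles ?? [] };
    } catch {
      return { valid: false, roles: [] };
    }
  }
}

// Any other service: delegates the RBAC check to the UMS instead of decoding the JWT itself.
@Injectable()
export class RbacClient {
  constructor(@Inject('UMS_CLIENT') private readonly ums: ClientProxy) {} // illustrative token

  async canAccess(token: string, requiredRole: string): Promise<boolean> {
    const ack = await firstValueFrom(
      this.ums.send<{ valid: boolean; roles: string[] }>('auth.verify', { token }),
    );
    return ack.valid && ack.roles.includes(requiredRole);
  }
}
```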

Why Use Events and Messaging Instead of REST API for Inter-Service Communication?

While there is no strict limitation on using REST APIs for inter-service communication, it may not be the most efficient choice for certain tasks.

Honestly, it’s like trying to fit a square peg in a round hole — you're just not using the right tool for the job!
It's the "square hole" girl meme: a popular meme in which a frustrated girl reacts to a shape-sorting toy by pushing every shape through the square hole.

  • One of the core principles of REST is the client-server pattern, which suits request/response exchanges with larger payloads. Event-driven architectures, by contrast, focus on transmitting small pieces of data such as status changes. Since an event-driven architecture already requires a message broker such as RabbitMQ, dedicating a queue to authorization is a reasonable trade-off, as the payload is just a fixed-length JWT token.
  • Additionally, in many server frameworks each POST, PUT, or PATCH request in REST holds a thread open for the duration of the HTTP call, which can lead to inefficiencies and strain on the system's architecture.
  • In contrast, message brokers such as RabbitMQ communicate over long-lived TCP (AMQP) connections. They queue and process messages without locking a thread for each one.
  • Reliability is crucial in system communication. REST APIs are stateless, which can lead to synchronization issues if a REST call fails during inter-service communication, whereas message queues and event exchanges ensure message delivery. These systems automatically retry delivery if the consumer or subscriber does not acknowledge receipt. Even if a service experiences downtime, messages are queued and processed when the service returns, keeping it synchronized with the other services (a minimal sketch of this acknowledgement flow follows this list).
  • REST relies on tightly coupled communication between services, meaning each service must know about others. Event-driven systems or message brokers decouple services, allowing them to function independently and enabling easier changes or additions without disrupting the entire system.
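The delivery guarantees described above ultimately rest on broker-level acknowledgements. A minimal sketch with the amqplib client (queue name and handler are illustrative): the message leaves the queue only after the consumer acks it, and is requeued for retry if processing fails or the service is down.

```typescript
import * as amqp from 'amqplib';

async function consumeReliably(): Promise<void> {
  const connection = await amqp.connect(process.env.AMQP_URL ?? 'amqp://localhost');
  const channel = await connection.createChannel();

  // Durable queue: survives broker restarts; messages wait here while a service is down.
  await channel.assertQueue('dao-service.proposal-created', { durable: true });

  await channel.consume(
    'dao-service.proposal-created',
    async (msg) => {
      if (!msg) return;
      try {
        await handleEvent(JSON.parse(msg.content.toString()));
        channel.ack(msg); // only now is the message removed from the queue
      } catch {
        channel.nack(msg, false, true); // negative-ack and requeue so delivery is retried
      }
    },
    { noAck: false }, // manual acknowledgement mode
  );
}

async function handleEvent(dto: unknown): Promise<void> {
  // Sync the off-chain database here.
}
```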

Relying too heavily on REST APIs for inter-service communication can quickly lead to chaos. Just imagine your developer opening Swagger and seeing something like this:

A visual representation of chaotic inter-microservice communication over REST APIs.

Summary

In the previous sections, we discussed both event and message patterns. The choice of pattern depends on the specific scenario. When multiple nodes rely on a single entity, such as in authentication (Figure 2), and tasks need to be performed synchronously, we recommend using a Message Queue in a single-consumer, multi-producer model. On the other hand, for asynchronous background-task dependencies, an Event Bus or Exchange is more suitable, where a single Publisher triggers events that multiple Subscribers use to sync their databases in a Fire-and-Forget model.

Blockchain networks, which transmit Solidity events through a single RPC endpoint, are a perfect fit for this approach, enabling multiple Web2 subscribers to listen to the same events. This aligns well with the event-driven architecture of Web3 dApps.

Since Web2 and Web3 use different Data Transfer Object (DTO) structures and belong to separate layers (as shown in Figure 1), we propose an "Audit Trail Service" to coordinate between these layers. This service converts Web3 DTOs into Web2 events, enabling seamless communication between Web2 services fed from various sources such as Alchemy/Infura or The Graph.
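As a complement to the live RPC listener, the Audit Trail Service can periodically reconcile against The Graph so that missed events are still converted and re-published. A minimal sketch assuming @nestjs/schedule's @Cron decorator; the subgraph URL, entity name, and query fields are hypothetical and would need to match the deployed subgraph's schema.

```typescript
import { Injectable } from '@nestjs/common';
import { Cron, CronExpression } from '@nestjs/schedule';
import { AmqpConnection } from '@golevelup/nestjs-rabbitmq';

@Injectable()
export class ProposalReconciliationJob {
  constructor(private readonly amqp: AmqpConnection) {}

  // Periodic safety net: catches any event the live RPC listener may have missed.
  @Cron(CronExpression.EVERY_10_MINUTES)
  async reconcile(): Promise<void> {
    const query = `{
      proposalCreateds(first: 100, orderBy: blockNumber, orderDirection: desc) {
        proposalId
        proposer
        transactionHash
      }
    }`;

    // Hypothetical subgraph endpoint; entity and field names depend on the subgraph schema.
    const res = await fetch(process.env.SUBGRAPH_URL!, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ query }),
    });
    const { data } = await res.json();

    for (const p of data?.proposalCreateds ?? []) {
      // Re-publish as the same Web2 event the subscribers already consume.
      await this.amqp.publish('dao.events', 'proposal.created', {
        onChainProposalId: p.proposalId,
        proposer: p.proposer,
        txHash: p.transactionHash,
      });
    }
  }
}
```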


Reviewed by: Md. Aminul Islam, Sampad Sikder, Yamin Raad


Learn More About Solidity Events

Experimental Package: @golevelup/nestjs-rabbitmq

Experiment Implementation: Open-Source Project FinCube23 by Brain Station 23

