When Backend for Frontend and GraphQL Add More Complexity Than Solutions
Rodrigo Estrada
Master of Science in Distributed and Parallel Computing | Data Engineering | Platform Engineering
In the fast-paced realm of application development, innovative solutions like Backend for Frontend (BFF) and GraphQL have emerged to address distinct challenges. Yet, it's essential for IT professionals to carefully assess if the complexity introduced by these solutions is truly warranted for their specific projects.
Root Problems and Intricate Solutions
Consider a single API call from Chile to a server located in Virginia (where major cloud providers like GCP, AWS, or Azure host many of their services). The interaction isn't as simple as a single roundtrip taking 140ms. Before any data is transferred, TCP performs its handshake: a SYN from the client, a SYN-ACK from the server, and a final ACK from the client. Those three messages cost roughly one additional roundtrip, since the client can send its request immediately after the final ACK. Add the HTTP request and response, and a single call over a fresh connection already takes at least two roundtrips, around 280ms, and more once TLS negotiation is involved. With 3-5 such calls issued sequentially, each repeating these stages, the latency compounds well beyond the base 140ms, demonstrating the performance impact of TCP's protocol overhead in HTTP/1 interactions. This starkly contrasts with newer protocols like HTTP/2, where multiple requests and responses can be interleaved over a single connection, mitigating such latency issues.
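As a back-of-the-envelope sketch of how this adds up, the snippet below plugs the roughly 140ms roundtrip into a few simple functions; the call counts and the flat handshake model are illustrative assumptions, not measurements.

```typescript
// Rough latency estimate for sequential HTTP/1.1 calls over fresh TCP
// connections, using the ~140 ms Chile-to-Virginia RTT from the text.
// TLS would add one or two more round trips on top of this.

const RTT_MS = 140;

// One new-connection HTTP/1.1 call: ~1 RTT for the TCP handshake
// (SYN -> SYN-ACK; the final ACK can travel with the request)
// plus ~1 RTT for the HTTP request/response itself.
function http1CallMs(rttMs: number): number {
  return 2 * rttMs;
}

// N sequential calls, each on its own connection, simply stack up.
function sequentialCallsMs(n: number, rttMs: number): number {
  return n * http1CallMs(rttMs);
}

// With HTTP/2 multiplexing, the N requests share one connection and can
// be issued concurrently, so the page pays roughly one handshake plus
// one request/response round trip for the whole batch.
function http2MultiplexedMs(rttMs: number): number {
  return 2 * rttMs;
}

console.log(sequentialCallsMs(4, RTT_MS)); // ~1120 ms
console.log(http2MultiplexedMs(RTT_MS));   // ~280 ms
```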
While the TCP protocol inherently introduces multiple stages and consequent latency in a single API call from distant locations, modern web browsers and frameworks offer some relief. Browsers are capable of making multiple concurrent requests, mitigating the impact of latency to some degree. Additionally, web frameworks, like React, employ asynchronous loading techniques, fetching and rendering page components in chunks rather than waiting for the entire page to load. This asynchronicity in loading and the ability to handle multiple concurrent requests help in offsetting the latency introduced by the TCP handshake and HTTP/1's limitations, enhancing user experience by making web interactions feel more responsive and efficient. This approach, however, doesn't eliminate the underlying latency issues but provides a more optimized way to handle data transfer and rendering in web applications.
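For illustration, here is a minimal sketch of the difference between fetching page data sequentially and concurrently from the client; the endpoint paths and the render function are hypothetical placeholders.

```typescript
// Minimal sketch: issuing independent API calls concurrently instead of
// sequentially. Endpoint paths are hypothetical placeholders.

async function loadPageSequentially(): Promise<void> {
  // Each await blocks on the previous call, so latencies add up.
  const user = await fetch('/api/user').then(r => r.json());
  const orders = await fetch('/api/orders').then(r => r.json());
  const offers = await fetch('/api/offers').then(r => r.json());
  render(user, orders, offers);
}

async function loadPageConcurrently(): Promise<void> {
  // The browser sends the three requests at once; total time is roughly
  // the slowest call, not the sum of all three.
  const [user, orders, offers] = await Promise.all([
    fetch('/api/user').then(r => r.json()),
    fetch('/api/orders').then(r => r.json()),
    fetch('/api/offers').then(r => r.json()),
  ]);
  render(user, orders, offers);
}

// Hypothetical rendering function provided elsewhere in the app.
declare function render(user: unknown, orders: unknown, offers: unknown): void;
```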
Originally, the BFF pattern was crafted in the limited-bandwidth, HTTP/1 era, aiming to tackle three fundamental issues: over-fetching, where clients receive more data than the view actually needs; chatty interfaces, where a single screen requires calls to several endpoints; and the forced interdependency between frontend and backend teams.
The concept of 'Forced Interdependency' between frontend and backend teams, often cited as a bottleneck in development processes, is arguably an artificial problem. If two distinct teams are created, one for frontend and another for backend, the implication is that this division was intentional, rooted in a strategic decision to optimize performance, scalability, and security. The frontend waiting on the backend for new data or formats should not inherently be seen as an issue. It's a part of a well-thought-out system design, aiming to leverage the specialized skills of each team effectively. If such a separation leads to perceived bottlenecks, it might be more indicative of a need to refine the collaboration process or communication channels, rather than a fundamental flaw in the concept of divided responsibilities. After all, the division is meant to streamline the development process, not hinder it.
While mechanisms like OData offered partial remedies, notably response filtering to alleviate the first problem of over-fetching, the multifaceted challenge of excessive endpoint interactions persisted.
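For reference, this is roughly how OData-style response shaping compares to a plain REST call; the service URL and entity fields below are hypothetical, used only to illustrate the query options.

```typescript
// Hypothetical service URL and entity fields, shown to illustrate how
// OData lets the client trim a response with $select and $filter
// instead of receiving the full resource representation.

const base = 'https://example.com/odata/Orders';

// Plain REST: the server decides the shape and size of the payload.
const plainRestUrl = base;

// OData query options: ask only for the fields and rows the view needs.
const odataUrl =
  `${base}?$select=id,customerName,total` +
  `&$filter=${encodeURIComponent("status eq 'open' and total gt 100")}` +
  `&$top=20`;

async function fetchOpenOrders(): Promise<unknown> {
  const res = await fetch(odataUrl);
  return res.json();
}
```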
Microservices and BFF: A Paradoxical Duo
Microservices architecture, synonymous with distribution and modular design, faced a paradox with the advent of BFF. The introduction of BFF, acting as a service aggregator, seemingly contradicted the decentralized ethos of microservices. This centralization inadvertently forged a singular failure point, amalgamating the shortcomings of monolithic structures with the intricate nature of microservices, thereby introducing performance bottlenecks and complicating system management.
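For concreteness, here is a minimal sketch of the kind of aggregation endpoint a BFF typically exposes; Express is assumed, and the downstream service URLs are hypothetical. Note how every view flows through this one service, which is precisely the centralization described above.

```typescript
// A minimal BFF-style aggregation endpoint (Express assumed; downstream
// service URLs are hypothetical; global fetch requires Node 18+).
// Every page load funnels through this one service.

import express from 'express';

const app = express();

app.get('/bff/dashboard', async (_req, res) => {
  try {
    // Fan out to several microservices and stitch one response together.
    const [user, orders, offers] = await Promise.all([
      fetch('http://user-service/internal/user/42').then(r => r.json()),
      fetch('http://order-service/internal/orders?userId=42').then(r => r.json()),
      fetch('http://offer-service/internal/offers?userId=42').then(r => r.json()),
    ]);
    res.json({ user, orders, offers });
  } catch (err) {
    // If any downstream call fails, the whole aggregated view fails:
    // the single point of failure in practice.
    res.status(502).json({ error: 'upstream failure' });
  }
});

app.listen(3000);
```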
One viable solution to the paradox of centralization in a microservices architecture is the concept of federation. This approach allows each microservice to apply its own filters and delegate the responsibility of data aggregation to a selected microservice, thus aligning with the decentralized ethos of microservices. Interestingly, even GraphQL acknowledges and addresses this challenge by supporting federation, providing a workaround to mitigate the centralization issue. This federated approach ensures that while each microservice operates independently, they can collectively contribute to a cohesive data response, effectively balancing autonomy with integration. Furthermore, some API gateways, like Kong, enhance this model by offering GraphQL caching capabilities, optimizing performance and resource utilization, and thus, elegantly bridging the gap between the granular control of microservices and the unified data view expected in a BFF architecture.
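As a rough illustration of the federated approach, the sketch below uses Apollo's @apollo/subgraph package; the Order and User types, the fields, and the findOrdersByUserId helper are illustrative assumptions. The point is that each subgraph contributes its own fields to a shared entity, and the router composes them, so no hand-written aggregator has to know about every service.

```typescript
// Minimal federation sketch with @apollo/subgraph (types, fields, and
// the data accessor are hypothetical). This "orders" subgraph owns
// Order and contributes an orders field to the User entity owned by
// another subgraph.

import gql from 'graphql-tag';
import { buildSubgraphSchema } from '@apollo/subgraph';

const typeDefs = gql`
  extend schema
    @link(url: "https://specs.apollo.dev/federation/v2.3", import: ["@key"])

  type Order @key(fields: "id") {
    id: ID!
    total: Float!
  }

  type User @key(fields: "id") {
    id: ID!
    orders: [Order!]!
  }
`;

const resolvers = {
  User: {
    // Invoked by the router when another subgraph's User needs orders.
    orders: (user: { id: string }) => findOrdersByUserId(user.id),
  },
};

export const schema = buildSubgraphSchema({ typeDefs, resolvers });

// Hypothetical data access helper backed by this service's own store.
declare function findOrdersByUserId(id: string): { id: string; total: number }[];
```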
GraphQL: The Theoretical Promise vs. Practical Realities
GraphQL emerged as a progressive evolution of the BFF, granting frontend teams the flexibility to execute highly tailored queries. Nonetheless, this flexibility is not without its complexities, especially when handling intricate data sets without the robustness and efficiency of traditional SQL databases. Direct database access or leveraging caching solutions like Redis might offer a more streamlined and practical approach in many scenarios.
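For the caching route, a minimal read-through sketch with the node-redis client might look like the following; the key scheme, the 60-second TTL, and the loadProductFromDb helper are illustrative assumptions.

```typescript
// Read-through cache sketch using node-redis. For many read-heavy views
// this is simpler to operate than standing up a GraphQL layer.

import { createClient } from 'redis';

const redis = createClient({ url: 'redis://localhost:6379' });
await redis.connect(); // node-redis v4 requires an explicit connect

export async function getProduct(id: string): Promise<unknown> {
  const key = `product:${id}`;

  // Serve from cache when possible.
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  // Cache miss: read from the primary store, then populate the cache.
  const product = await loadProductFromDb(id);
  await redis.set(key, JSON.stringify(product), { EX: 60 }); // 60 s TTL
  return product;
}

// Hypothetical database accessor.
declare function loadProductFromDb(id: string): Promise<unknown>;
```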
Technological Advancements: The Diminishing Role of BFF?
With the advent of technologies such as HTTP/2 and gRPC, the foundational problems that BFF aimed to solve are becoming less pertinent. Event-driven architectures and asynchronous technologies provide robust alternatives without necessitating over-engineering. Modern web and mobile clients, capable of managing local databases and caches, operate efficiently, questioning the necessity of a centralized service like BFF in today's tech landscape.
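To make the HTTP/2 point concrete, the sketch below multiplexes several requests over a single connection using Node's built-in http2 module; the host and paths are hypothetical.

```typescript
// Several requests share one HTTP/2 connection: one TCP/TLS handshake,
// three concurrent streams. Host and paths are hypothetical.

import { connect } from 'node:http2';

const session = connect('https://api.example.com');

function get(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const stream = session.request({ ':path': path });
    let body = '';
    stream.setEncoding('utf8');
    stream.on('data', chunk => (body += chunk));
    stream.on('end', () => resolve(body));
    stream.on('error', reject);
  });
}

// The three streams are interleaved over the same connection.
const [user, orders, offers] = await Promise.all([
  get('/user/42'),
  get('/orders?userId=42'),
  get('/offers?userId=42'),
]);

session.close();
```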
In the delineated realms of frontend and backend development, the freedom and flexibility for frontend teams to make API calls can be seamlessly integrated without the need for complex solutions like GraphQL. This synergy is achieved when the backend team furnishes a performance-tuned, denormalized data view, tailored for frontend consumption. Such a curated approach not only ensures data integrity and system performance but also empowers frontend developers with the autonomy to interact with data efficiently. It's a strategy that leverages the strengths of advanced SQL read-only replicas or caching systems like Redis, providing a streamlined, efficient, and secure framework for data access and manipulation, aligning both teams towards a cohesive and optimized application architecture.
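A minimal sketch of that arrangement, assuming Express, the pg client, and a hypothetical order_summary_view maintained by the backend team on a read-only replica, could look like this:

```typescript
// Serving a denormalized, backend-curated read model from a read-only
// replica. The connection string and order_summary_view are assumptions
// for illustration.

import express from 'express';
import { Pool } from 'pg';

// Points at a read replica, so heavy frontend reads never touch the primary.
const readReplica = new Pool({
  connectionString: process.env.READ_REPLICA_URL,
});

const app = express();

app.get('/api/orders/summary', async (req, res) => {
  // The view is owned and tuned by the backend team; the frontend simply
  // filters and pages over an already-flattened shape.
  const { rows } = await readReplica.query(
    'SELECT * FROM order_summary_view WHERE customer_id = $1 LIMIT 50',
    [req.query.customerId],
  );
  res.json(rows);
});

app.listen(3000);
```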
Embracing Event-Driven Architecture (EDA), asynchronous operations, and intelligent clients like Firebase and CouchDB revolutionizes how applications handle data changes and queries. In this model, changes are made locally on the client-side, enabling a swift and responsive user experience. The client applications are not just passive receivers of data; they are equipped to perform local data manipulations, queries, and even complex operations. This local-first approach significantly reduces the dependency on constant server communication, alleviating server load and network latency.
The true power of this paradigm shines when it comes to data synchronization. Intelligent clients synchronize the local changes with the server database in an efficient, often seamless manner. EDA facilitates this by ensuring that events (like data changes) are captured and communicated across the system components, triggering necessary actions or updates. This model is not just about pushing data back and forth; it's about smartly handling data interactions, ensuring that data consistency is maintained, conflicts are resolved, and the system's state is always synchronized across the client and server.
Systems like Firebase and CouchDB (PouchDB) exemplify this approach. They provide robust frameworks for managing data synchronization, conflict resolution, and ensuring data integrity across distributed systems. The local changes made by a client are synchronized with the server database, and updates from the server or other clients are received and integrated into the local database, keeping the user's view fresh and consistent. This architecture significantly enhances the application's performance, scalability, and user experience by leveraging local computation and minimizing the reliance on continuous server connectivity.
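A small local-first sketch with PouchDB replicating to a CouchDB server shows the pattern; the remote URL and document shape are hypothetical.

```typescript
// Local-first sketch: writes and queries hit the local database
// immediately; replication with the server happens in the background.

import PouchDB from 'pouchdb';

const local = new PouchDB('todos');
const remote = new PouchDB('https://couch.example.com/todos');

// Writes go to the local store first, so the UI stays responsive offline.
await local.put({
  _id: new Date().toISOString(),
  title: 'Ship the release',
  done: false,
});

// Continuous two-way replication keeps client and server converging;
// conflicts are surfaced by CouchDB's revision model for resolution.
local
  .sync(remote, { live: true, retry: true })
  .on('change', info => console.log('synced batch', info.direction))
  .on('error', err => console.error('sync error', err));
```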
Alternatives to GraphQL and BFF
While GraphQL and Backend for Frontend (BFF) provide robust solutions for data management and API structure, there are several other technologies and architectures offering unique advantages and capabilities. Here's a brief overview of some notable alternatives:
- OData: a standardized REST convention that lets clients shape responses with query options such as $select and $filter.
- REST APIs with filter parameters: plain endpoints that accept field and filter arguments, addressing most over-fetching concerns without a new query language.
- Falcor: Netflix's library that exposes backend data as a single virtual JSON graph the client can query.
- gRPC: a high-performance RPC framework built on HTTP/2 and Protocol Buffers, well suited to service-to-service and mobile communication.
- trpc.io: end-to-end type-safe APIs for TypeScript projects, with no schema or code-generation step.
- Firebase: a managed backend with realtime data synchronization and offline support for web and mobile clients.
- CouchDB + PouchDB: an offline-first pairing in which clients keep a local database that replicates with the server.

These solutions vary in their complexity, flexibility, and use cases, giving developers a range of options to fit their specific needs.
Each of these alternatives brings unique strengths and considerations. The choice among them should be guided by the specific requirements of your project, including factors such as data complexity, real-time needs, scalability, and development resources.
Final Reflection: Deliberate Before Implementing
For IT professionals, the allure of patterns like BFF and tools such as GraphQL is undeniable. However, it's imperative to judiciously consider these solutions within the context of your specific project needs. Weighing the benefits against the potential complexities and costs is crucial. In the architectural decision-making process, simplicity, clear objectives, and alignment with the project's context are paramount in determining the most suitable path forward.