Dynamic Caching (Part 2)

In the last episode, we discussed two types of caching that can be used in a web application:

  1. Static Caching: This is where static assets, such as HTML, JavaScript, and images, are cached because they rarely change.
  2. Dynamic Caching: This deals with caching data that changes less frequently, such as user account information or pricing data, but can still change at any given point.

Previous episode: https://www.dhirubhai.net/pulse/caching-exploring-five-key-layers-modern-web-application-joel-ndoh-en2zf/

Now, let’s do a deep dive into dynamic caching.


Where Caching Occurs in a Web Application

In a web application, caching can be implemented at several key layers. If you haven’t encountered these before, let me break down the five layers of a web application where caching can occur:

  1. Browser: The client-side cache where assets like static HTML files and images are stored to improve loading speed for repeat visits.
  2. CDN (Content Delivery Network): Often used for static data like images, scripts, and style sheets, CDNs cache content close to users to reduce latency.
  3. Load Balancer: Some load balancers and reverse proxies can cache frequently requested responses and serve them directly, without forwarding the request to a backend server.
  4. Server-Side Cache: This is where application-level caches reside, caching responses or parts of responses to reduce the load on the server.
  5. Database Cache: Cache layers that store query results or commonly accessed database entries to reduce load times.

Image Caption: "A diagram showing the 5 layers of caching in a web application: browser, CDN, load balancer, server, and database."
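Several of these layers (browser, CDN, some proxies) are steered by HTTP response headers rather than by application code. As a rough sketch, a handler might emit different Cache-Control directives depending on what it is serving; the paths and max-age values below are illustrative assumptions, not recommendations:

```python
# Sketch: choosing Cache-Control headers per content type.
# Paths and max-age values are illustrative assumptions.

def cache_headers(path: str) -> dict:
    """Return HTTP caching headers for a given request path."""
    static_suffixes = (".css", ".js", ".png", ".jpg", ".svg")
    if path.endswith(static_suffixes):
        # Static assets: let browsers and CDNs cache for a long time.
        return {"Cache-Control": "public, max-age=31536000, immutable"}
    if path.startswith("/api/rates"):
        # Dynamic but slow-changing data: short shared-cache lifetime.
        return {"Cache-Control": "public, max-age=300"}
    # Per-user data: must not be stored by shared caches.
    return {"Cache-Control": "private, no-store"}
```

The `public` directive allows shared caches (CDNs, proxies) to store the response, while `private, no-store` restricts it to, at most, the user's own browser.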

If you want a more detailed explanation of these layers, check out the previous episode [linked above].


Dynamic Caching

Dynamic caching is typically applied to services handling data that can change, but not on every request. For instance, user account information or currency exchange rates are good candidates: they do change, but far less often than they are read.

Two Approaches to Dynamic Caching:

  1. Exclusive Cache:

Definition: Each instance of a service holds its own cache. This can be useful for relatively small amounts of data that don’t need to be shared across different instances of your service.

Example: Imagine caching currency conversion rates. Each running instance of the service would store the conversion rates in its local cache. If a user requests this data, the service would provide the cached rate from its own memory.


An illustration showing multiple service instances, each with its own cache (exclusive caching)

Pros:

  • Fast retrieval with no external service calls.

Cons:

  • Data duplication: every instance holds its own copy of the same data, which wastes memory.
  • Scalability issues: the memory cost grows with the number of instances, and each new instance starts with a cold cache.

Workaround: Use intelligent routing (such as sticky sessions or consistent hashing) to direct each request to the instance that holds the relevant cached data, though this couples routing to cache placement and can itself limit scalability.
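The exclusive approach can be sketched as a plain in-process cache: each running service instance owns a private dictionary of rates. The `fetch_rate` callback and the TTL value are hypothetical stand-ins for a real FX data source.

```python
import time

# Sketch of an exclusive cache: each service instance owns a private
# in-process dict, so nothing is shared between instances.
class RateService:
    def __init__(self, fetch_rate, ttl_seconds: float = 3600.0):
        self._fetch_rate = fetch_rate  # e.g. a call to an external FX API
        self._ttl = ttl_seconds
        self._cache: dict[str, tuple[float, float]] = {}  # pair -> (rate, stored_at)

    def get_rate(self, pair: str) -> float:
        entry = self._cache.get(pair)
        if entry is not None and time.monotonic() - entry[1] < self._ttl:
            return entry[0]                    # cache hit: no external call
        rate = self._fetch_rate(pair)          # cache miss: go to the source
        self._cache[pair] = (rate, time.monotonic())
        return rate
```

If you run two instances of `RateService`, each one fetches and stores its own copy of "USD/EUR", which is exactly the duplication described in the cons above.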


  2. Shared Cache:

Definition: A centralized cache shared by all service instances, typically deployed as a dedicated caching service such as Redis or Memcached.


A diagram showing multiple service instances accessing a shared cache service.

Pros:

  • No data duplication; a single cache source for all instances.
  • Easier to scale and highly efficient for large datasets.

Cons:

  • Slightly higher latency, since every cache read requires a network call to the cache service rather than a local memory lookup.
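The shared approach is often a read-through pattern: check the cache, and on a miss load from the database and populate the cache for everyone. Since a real Redis or Memcached instance needs a running server, the sketch below stands in a plain dict for the shared store; the key naming scheme is an assumption.

```python
# Sketch of a shared, read-through cache. A dict stands in for a real
# cache service such as Redis or Memcached; all "instances" share it.

class SharedCache:
    def __init__(self):
        self._store: dict[str, str] = {}

    def get(self, key):            # analogous to Redis GET
        return self._store.get(key)

    def set(self, key, value):     # analogous to Redis SET
        self._store[key] = value

def get_price(cache: SharedCache, product_id: str, load_from_db) -> str:
    key = f"price:{product_id}"    # hypothetical key naming scheme
    cached = cache.get(key)
    if cached is not None:
        return cached              # hit: served from the shared cache
    value = load_from_db(product_id)
    cache.set(key, value)          # populate so *every* instance benefits
    return value
```

Unlike the exclusive approach, a miss handled by one service instance fills the cache for all of them: the second instance to ask for the same product gets a hit without touching the database.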


Key Factors to Consider Before Caching Data

Before implementing caching, it’s essential to assess the following:

  1. Frequency of Access: The more often a piece of data is accessed, the more it makes sense to cache it.
  2. Size of Cached Data: Cache space is limited, so it's best to cache smaller chunks of frequently accessed data.


A checklist diagram highlighting the key factors for caching decisions: frequency of access and data size

Caching Challenges

  1. Limited Cache Space: Cache space is finite, leading to early evictions of data if the cache fills up quickly. Cost is often a limiting factor here.
  2. Cache Invalidation or Inconsistency: This is when the cached data is no longer in sync with the source of truth. One common solution is to invalidate the cached data upon updates, though this can be complex for caches in public proxies like CDNs, where we don't have full control.

  • Solution: Set a Time-to-Live (TTL) for cached items, which ensures that stale data is automatically removed after a certain period.


A flowchart illustrating cache invalidation using TTL (Time-to-Live).
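The TTL mechanism can be sketched directly: each entry stores its expiry time, and reads treat expired entries as misses. The clock is injected here purely to make the expiry behavior easy to demonstrate; a real implementation would just use the system clock.

```python
import time

# Sketch of TTL-based invalidation: every entry expires automatically,
# so staleness is bounded by the TTL even if we never invalidate explicitly.
class TTLCache:
    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock                 # injectable for testing
        self._store: dict[str, tuple[object, float]] = {}

    def set(self, key, value):
        self._store[key] = (value, self._clock() + self._ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self._clock() >= expires_at:     # stale: treat as a miss
            del self._store[key]
            return default
        return value
```

The trade-off is between freshness and load on the source of truth: a shorter TTL means staler data disappears sooner, but also means more cache misses.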

This deep dive into dynamic caching showcases both the challenges and the benefits of caching dynamic data in a web application. By understanding these different layers and approaches, you can ensure your web application performs efficiently while maintaining scalability and flexibility.
