Cache | System Design Part - 6

In the previous part of the SYSTEM DESIGN SERIES, we went through HIGH-LEVEL DESIGN, which gave us a clear picture of how a web request travels from the client to the database layer. In this part we will focus on the cache, which helps reduce the response time of web requests.

A cache is in-memory storage that holds the results of frequently accessed data so that the response time for subsequent requests is drastically reduced.

Every time a web request is made, a couple of database calls are typically needed to fetch the data. Frequent database calls hurt response time and user experience, so to mitigate this problem we introduce a cache in the web tier.

Cache Tier

The cache tier is a temporary data storage layer that can respond much faster than the database. Some of the benefits of having a cache tier are:

  • Better system performance.
  • It can be scaled independently.
  • It reduces the workload on the database.


Every time we get a request from the client, the web server first looks into the cache to check whether the response is there. If it is, the data is returned to the client directly from the cache.

If the cache does not have a response for that particular request, the server queries the database, stores the response in the cache, and then sends it back to the client.

This mechanism is called a read-through cache.
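The read-through flow above can be sketched in a few lines of Python. This is a minimal illustration, not any specific framework's API: `query_database` and the dict-based `cache` are hypothetical stand-ins for a real database call and cache store.

```python
# Hypothetical in-memory cache and database stand-ins.
cache = {}

def query_database(key):
    # Placeholder for a real (slow) database lookup.
    return f"row-for-{key}"

def get(key):
    # 1. Look in the cache first.
    if key in cache:
        return cache[key]          # cache hit: return immediately
    # 2. On a miss, fall back to the database...
    value = query_database(key)
    # 3. ...store the result in the cache for future requests...
    cache[key] = value
    # 4. ...and return it to the client.
    return value

print(get("user:42"))  # first call misses and queries the database
print(get("user:42"))  # second call is served from the cache
```

The first call pays the database cost and populates the cache; every subsequent call for the same key is served from memory.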

Things to keep in mind before using a cache:

  • A cache is best used when data is read frequently and written infrequently. Since cache storage is volatile, all data that must persist should be kept in the database.
  • Expiration policy: It is always advisable to set an expiration time when storing data in the cache. It should not be too low [which leads to frequent database calls] nor too high [which leads to stale data].
  • Consistency: The data stored in the cache should always be in sync with the database. There is a high chance of inconsistency because data-modifying operations on the cache and the database do not happen as a single atomic operation.
  • When multiple data centers are involved, maintaining consistency between the cache and the database is challenging; an in-depth understanding can be gained from the article "Scaling Memcache at Facebook".
  • Eviction policy: When the cache is full and a new entry arrives, an existing record must be removed; this process is called cache eviction. Some widely used eviction policies are FIFO [First In First Out], LRU [Least Recently Used], and LFU [Least Frequently Used].
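As a minimal sketch of one of these policies, an LRU cache can be built on Python's `OrderedDict`, which remembers insertion order. The class name and the capacity of 2 below are illustrative choices, not from the article:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: once capacity is reached,
    the least recently used entry is evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)   # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes the most recently used entry
cache.put("c", 3)      # over capacity: "b" is evicted
print(cache.get("b"))  # → None, "b" was evicted
print(cache.get("a"))  # → 1, "a" survived because it was used recently
```

Production caches such as Redis and Memcached implement these policies internally; the point of the sketch is only to show why recently used entries survive eviction while stale ones do not.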

More articles by Sai Srikanth Avadhanula
