Caching
Prachi Gupta
Engineering @ Adidas | Java, Microservices, SpringBoot, Rest API, Kubernetes
Load balancing helps you scale horizontally across an ever-increasing number of servers, but caching will enable you to make vastly better use of the resources you already have.
Caching is a technique for temporarily storing data so it doesn't have to be fetched from its source more than once. Cached data is meant to be available almost instantly, providing lightning-fast performance.
By caching data, you can improve your application’s performance while reducing network calls, database strain, and bandwidth usage. These benefits make it a great pattern to implement.
Caching is everywhere. Server-side, client-side, browser caching, and proxy/CDN caching are all opportunities for you to take advantage of the concept.
Caches in different layers
1. Client-side
2. DNS
3. Web Server
4. Application
5. Database
6. Content Distribution Network (CDN)
Types of Caching Strategies
Your caching strategy depends on how your application reads and writes data. Is your application write-heavy, or is data written once and read frequently? Is the data that's returned always unique? Different data access patterns will influence how you configure a cache. Common caching types include cache-aside, read-through/write-through, and write-behind/write-back.
1. Write-Through cache
In write-through caching, the application writes data to the cache, and the cache then writes it to the database. The cache sits in line with the database, and the application treats the cache as its main data store; the cache is responsible for persisting writes to the database.
Write-through is slower overall because every write must reach both the cache and the database, but subsequent reads of just-written data are fast. Users are generally more tolerant of latency when updating data than when reading it, and data in the cache is never stale.
Pros: Complete data consistency, robust to system disruptions.
Cons: every write incurs the latency of both a cache write and a database write, and data that is written but rarely read still occupies cache space.
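A minimal write-through sketch in Java. The class and method names are illustrative, and a `HashMap` stands in for the real backing database:

```java
import java.util.HashMap;
import java.util.Map;

// Write-through: the application writes to the cache, and the cache
// synchronously persists the same value to the database before returning.
class WriteThroughCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, String> database; // stand-in for a real data store

    WriteThroughCache(Map<String, String> database) {
        this.database = database;
    }

    // The write completes only after both the cache and the database are
    // updated, so the cache is never stale relative to the database.
    void put(String key, String value) {
        cache.put(key, value);
        database.put(key, value); // synchronous write to the backing store
    }

    String get(String key) {
        return cache.get(key); // just-written data is served from the cache
    }
}
```

Because the database write happens inside `put`, a successful write guarantees cache and database agree, which is where the consistency benefit comes from.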
2. Read-Through cache
Read-through caching is a strategy where data is read from the cache, and if the data is not found, it is automatically loaded from the data source and added to the cache.
The app doesn't interact with the database directly; the cache does. In other words, the cache is responsible for reading the data from the database on a miss.
And this is what makes it different from the cache-aside pattern.
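A minimal read-through sketch in Java. The cache is constructed with a loader function (a stand-in for a database query), so the application never touches the database itself; names here are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Read-through: the application only talks to the cache; on a miss,
// the cache itself loads the value from the database and stores it.
class ReadThroughCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> loader; // e.g. a database lookup

    ReadThroughCache(Function<String, String> loader) {
        this.loader = loader;
    }

    String get(String key) {
        // On a miss, computeIfAbsent invokes the loader and caches the result;
        // the caller cannot tell a hit from a miss.
        return cache.computeIfAbsent(key, loader);
    }
}
```

Contrast with cache-aside: here the loading logic lives inside the cache, not in the application code.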
3. Write-Behind / Write-Back cache
In write-behind, the application does the following:
1. Writes data to the cache, which acknowledges the write immediately.
2. The cache asynchronously writes the data to the database later, after a delay or in batches.
Write-behind caches, sometimes known as write-back caches, are best for write-heavy workloads, and they improve write performance because the application doesn't need to wait for the write to complete before moving to the next task.
Cons: if the cache fails before the queued writes are flushed, data is lost; the database temporarily lags behind the cache, and the implementation is more complex.
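A minimal write-behind sketch in Java. For determinism, pending writes are drained by an explicit `flush()` call; a real implementation would flush on a timer or a background thread. Names are illustrative and a `HashMap` stands in for the database:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Write-behind: writes update the cache immediately and are queued;
// the queue is drained to the database later, not on the write path.
class WriteBehindCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Queue<Map.Entry<String, String>> pending = new ArrayDeque<>();
    private final Map<String, String> database;

    WriteBehindCache(Map<String, String> database) {
        this.database = database;
    }

    void put(String key, String value) {
        cache.put(key, value);              // fast: only the cache is touched
        pending.add(Map.entry(key, value)); // database write is deferred
    }

    String get(String key) {
        return cache.get(key);
    }

    // Drain queued writes to the database in one batch.
    void flush() {
        while (!pending.isEmpty()) {
            Map.Entry<String, String> e = pending.poll();
            database.put(e.getKey(), e.getValue());
        }
    }
}
```

The failure mode is visible in the sketch: anything still sitting in `pending` when the cache dies never reaches the database.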
4. Cache-Aside Pattern
The application is responsible for reading from and writing to the database, and the cache doesn't interact with the database at all. The cache is "kept aside" as a faster and more scalable in-memory data store.
Memcached is generally used in this manner.
Subsequent reads of data added to the cache are fast. Cache-aside is also referred to as lazy loading: only requested data is cached, which avoids filling the cache with data that isn't requested.
Cons: each cache miss results in three trips (check the cache, read the database, populate the cache), causing a noticeable delay; and data can become stale if the database is updated directly, so invalidation or a TTL is needed.
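A minimal cache-aside sketch in Java, showing the check-miss-populate flow living in the application code rather than in the cache. `ProfileService` and its methods are illustrative names, with a `HashMap` standing in for the database:

```java
import java.util.HashMap;
import java.util.Map;

// Cache-aside: the application itself checks the cache first and, on a miss,
// reads from the database and populates the cache (lazy loading).
class ProfileService {
    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, String> database;

    ProfileService(Map<String, String> database) {
        this.database = database;
    }

    String getProfile(String userId) {
        String value = cache.get(userId);  // 1. check the cache
        if (value == null) {
            value = database.get(userId);  // 2. miss: read the database
            if (value != null) {
                cache.put(userId, value);  // 3. populate the cache for next time
            }
        }
        return value;
    }

    void updateProfile(String userId, String value) {
        database.put(userId, value); // the write goes to the database...
        cache.remove(userId);        // ...and the stale cache entry is evicted
    }
}
```

Invalidating on write (rather than updating the cache) is one common way to avoid serving stale data; the next read repopulates the entry lazily.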