In the previous article, we saw what an in-memory cache is and how it can be used in .NET Core web APIs. Let's have a look at distributed caching in this blog post.
What is a Distributed Cache?
As discussed in the previous article, an in-memory cache is a simple cache implementation that uses the web server's memory as the cache store. This means that if the server is restarted, crashes, or a new deployment is done, the cached items are gone.
Also, a web application generally has multiple web servers as part of a web farm, so the only technique to make an in-memory cache work is to use sticky sessions. Sticky sessions means that every request from a given session always goes to the same specific server behind the load balancer.
A distributed cache, as the name suggests, does not use the web server's memory as the cache store. Instead, other nodes are used for storing the cached data. The most common examples include:
- a SQL Server database
- Azure Redis Cache
- NCache
Why use a Distributed Cache?
Distributed caches ensure that the cached data can be accessed from any of the web servers.
Sticky sessions are not required anymore; HTTP requests can go to any server in the web farm. Each web server has some sort of connection to the cache server, and as long as this connection is available, any of the web servers can read the cached data.
Also, cached data survives application deployments and web server crashes (or restarts), as the cache store is on a different node.
IDistributedCache – Quick Introduction
IDistributedCache is the central interface in .NET Core's distributed cache implementations. This interface defines the basic methods that any distributed cache implementation should provide:
- Get, GetAsync: to get an item from the cache. It expects a string key as an input parameter and returns a byte array (byte[]) if the object is found in the cache, or null otherwise.
- Set, SetAsync: to add a new item to the cache. It takes the item (as a byte array) and a string key as input parameters.
- Refresh, RefreshAsync: to refresh an item in the cache based on its string key, resetting its sliding expiration timeout (without retrieving the value).
- Remove, RemoveAsync: to remove an item from the cache based on the string key provided.
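As a quick sketch of how these methods fit together, the service below caches a computed string via IDistributedCache, converting it to and from a byte array manually. The GreetingService class and the key format are illustrative names, not part of the framework.

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class GreetingService
{
    private readonly IDistributedCache _cache;

    public GreetingService(IDistributedCache cache) => _cache = cache;

    public async Task<string> GetGreetingAsync(string name)
    {
        string key = $"greeting:{name}";

        // GetAsync returns the cached byte[] or null on a cache miss
        byte[] cached = await _cache.GetAsync(key);
        if (cached != null)
            return Encoding.UTF8.GetString(cached);

        string greeting = $"Hello, {name}!";

        // SetAsync stores a byte[] under the string key
        await _cache.SetAsync(key, Encoding.UTF8.GetBytes(greeting),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
            });

        return greeting;
    }
}
```

Note that the framework also provides GetString/SetString (and their async variants) as extension methods in Microsoft.Extensions.Caching.Distributed, which handle the string-to-bytes conversion for you.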
Framework Provided Implementations
There are four implementations provided by the framework:
- Distributed SQL Server Cache
- Distributed Redis Cache
- Distributed NCache Cache
- Distributed Memory Cache
There can be custom implementations (or third party implementations too) as long as they implement IDistributedCache.
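To illustrate, a hypothetical custom implementation could look like the sketch below. It backs the cache with an in-process dictionary and deliberately ignores the expiration options, so treat it as a learning aid rather than production code.

```csharp
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// A deliberately naive IDistributedCache implementation. A real one
// would talk to an external store and honor expiration options;
// this sketch ignores them for brevity.
public class DictionaryDistributedCache : IDistributedCache
{
    private readonly ConcurrentDictionary<string, byte[]> _store = new();

    public byte[] Get(string key) =>
        _store.TryGetValue(key, out var value) ? value : null;

    public Task<byte[]> GetAsync(string key, CancellationToken token = default) =>
        Task.FromResult(Get(key));

    public void Set(string key, byte[] value, DistributedCacheEntryOptions options) =>
        _store[key] = value;

    public Task SetAsync(string key, byte[] value, DistributedCacheEntryOptions options,
        CancellationToken token = default)
    {
        Set(key, value, options);
        return Task.CompletedTask;
    }

    public void Refresh(string key) { /* no sliding expiration to reset here */ }

    public Task RefreshAsync(string key, CancellationToken token = default) =>
        Task.CompletedTask;

    public void Remove(string key) => _store.TryRemove(key, out _);

    public Task RemoveAsync(string key, CancellationToken token = default)
    {
        Remove(key);
        return Task.CompletedTask;
    }
}
```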
Wait, MemoryDistributedCache? Really??
The first three implementations are quite obvious; the cache stores used are SQL Server, Redis, and NCache respectively. But I was surprised to learn that there is a distributed memory cache implementation, and I wondered what it actually does. As per the documentation, it is not an actual distributed cache. It uses the web server's memory as the cache store, just like the IMemoryCache we saw in the previous article.
Why use MemoryDistributedCache?
I think it might be a useful tool for getting started with distributed caching. When development of a new application begins, we can set up the distributed memory cache and use the IDistributedCache interface to interact with the cache store.
Later in the cycle, the application can switch to any other, real distributed cache just by changing some startup configuration. The actual functional classes do not need to change, as they already depend on IDistributedCache.
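For example, swapping implementations is essentially a one-line change in the service registration. The sketch below assumes the usual framework packages; the Redis connection string, instance name, and table name are placeholder values.

```csharp
// In Startup.ConfigureServices (or Program.cs in newer templates):

// During early development: in-memory "distributed" cache
services.AddDistributedMemoryCache();

// Later, swap to a real distributed cache, e.g. Redis
// (requires the Microsoft.Extensions.Caching.StackExchangeRedis package):
// services.AddStackExchangeRedisCache(options =>
// {
//     options.Configuration = "localhost:6379";
//     options.InstanceName = "MyApp:";
// });

// ...or SQL Server (Microsoft.Extensions.Caching.SqlServer package):
// services.AddDistributedSqlServerCache(options =>
// {
//     options.ConnectionString = "<connection string>";
//     options.SchemaName = "dbo";
//     options.TableName = "AppCache";
// });
```

Classes that inject IDistributedCache are unaffected by whichever registration is active.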
Below is a simple code example that shows how to use the memory distributed cache.
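Here is a minimal, self-contained sketch, assuming the Microsoft.Extensions.Caching.Memory package; it constructs MemoryDistributedCache directly so it can run as a console snippet, whereas in a web app you would inject IDistributedCache instead.

```csharp
using System;
using System.Text;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Options;

// MemoryDistributedCache implements IDistributedCache on top of
// the web server's memory; it is not actually distributed.
IDistributedCache cache = new MemoryDistributedCache(
    Options.Create(new MemoryDistributedCacheOptions()));

// IDistributedCache works with byte arrays, so strings must be encoded
cache.Set("greeting", Encoding.UTF8.GetBytes("Hello, cache!"),
    new DistributedCacheEntryOptions
    {
        SlidingExpiration = TimeSpan.FromMinutes(5)
    });

byte[] cached = cache.Get("greeting");
Console.WriteLine(cached is null ? "(miss)" : Encoding.UTF8.GetString(cached));
```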
I hope you find this information useful. Let me know your thoughts.
You can download the working code example from the link given below.