Creating a Redis cluster in Kubernetes


Redis is an open-source, in-memory data structure store. It is commonly used as a database, cache, and message broker, and it supports a variety of data structures such as strings, hashes, lists, and sets. In this article, we will explore an approach to creating a highly available and fault-tolerant Redis cluster in Kubernetes.

High availability: if one of the Redis pods goes down, other pods should be available to process commands, and downtime should be minimised. We will use a master-follower setup for this.

Fault tolerance: in a master-follower setup, the master is the only instance that can perform WRITE operations, so if the master goes down, WRITEs to the cluster suffer. To tackle this problem, we will use proxies and sentinels.

Sentinels: sentinels are components that watch all Redis instances (both master and followers) and keep track of which instance is the master at any given point in time. Before any WRITE operation, one must first ask the sentinels which Redis instance is currently the master and then send the WRITE to that instance. In addition, sentinels start the failover mechanism for the cluster without any human intervention (for more info, see https://redis.io/topics/sentinel).
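For example, any sentinel can be asked over its default port (26379) which address currently belongs to the master of a given master name; mymaster is the name used later in this article, and the reply shown is only illustrative:

redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
# 1) "10.42.0.15"   <- illustrative address of the current master
# 2) "6379"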

Proxies: applications that use Redis usually expect a single hostname and port to which they can send Redis commands. Since, in our approach, only the sentinels know which Redis instance is currently the master, we use a proxy that accepts commands from your application, asks the sentinels for the current master, and forwards the commands to that address. Your application code therefore requires no changes.

Refer to this GitHub repo for the yamls: https://github.com/shishirkh/redis-ha

Architecture of the Redis cluster:

  • Three Redis instances run as separate deployments, and each has its own ClusterIP service. All communication between the instances goes through these services, so that if a pod goes down (and its IP address changes) communication is not affected. Each instance therefore announces its service name rather than its pod IP:
--replica-announce-ip <service-name>
  • One of them is assigned as the master initially; the other two are started as its followers by pointing them at the master's service (a sketch of the resulting redis-server flags follows this list):
--slaveof <master-service-name> 6379
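To make the two flags above concrete, here is roughly what the redis-server command line could look like for one of the follower instances; the service names are placeholders, and the real flags and values come from the yamls in the repo:

# Illustrative follower start-up - flags and names mirror the bullets above.
redis-server --port 6379 \
  --dir /data \
  --replica-announce-ip svc-redis-instance-2 \
  --slaveof svc-redis-instance-1 6379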

Creating the Redis cluster:

  • Each of the Redis instances needs a persistent volume mounted at "/data" so that its data survives if the pod goes down. Apply the three manifests below; a quick replication check follows the commands.
kubectl apply -f redis/redis-instance-1.yml

kubectl apply -f redis/redis-instance-2.yml

kubectl apply -f redis/redis-instance-3.yml
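Once the three deployments are running, you can confirm that replication is established from any follower (the deployment names match the ones used later in this article; the expected output is illustrative):

kubectl exec -it deployment/deployment-redis-instance-2 -- redis-cli info replication
# Expect role:slave and master_link_status:up on the followers,
# and role:master with connected_slaves:2 on the master.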



Architecture of the sentinel cluster:

  • The architecture of the sentinel cluster is similar to that of the Redis cluster.
  • Three sentinel instances run as separate deployments, and each has its own ClusterIP service. All communication between the instances goes through these services, so that if a pod goes down (and its IP address changes) communication is not affected.

About the sentinel configuration:

To initialise, a sentinel needs certain information, most importantly the Redis cluster's master address. At startup, the sentinels communicate with the master to discover the follower instances of the Redis cluster as well as the other sentinels present. The information about the master and followers is required in case a failover has to be carried out; the information about the other sentinels is required so that voting among them can take place.

A special value called the quorum is also required. The quorum is the minimum number of sentinels that must agree that the master is unreachable before it can be declared down; each sentinel has one vote. The quorum should be set to a majority of the sentinels, i.e. (n+1)/2 for an odd number n of sentinels. Here n is 3, so the quorum is 2.

There are several other configuration options; you can explore all of them here: https://download.redis.io/redis-stable/sentinel.conf

An important discussion

Whenever a failover happens, the sentinels need to remember which instance is the new master. They write this information into their configuration file so that, even if their containers are restarted, they still know who the current master is. To achieve this, a few changes are needed, keeping in mind the two cases that can occur.

- When the sentinel starts for the first time: the configuration file does not exist yet and has to be created on a persistent volume. Can we simply mount the configuration file from a ConfigMap? No, because files mounted from a ConfigMap are read-only inside the pod. How, then, can we place the configuration file on a persistent volume and keep it editable? We can copy a template of the configuration file onto the persistent volume after the volume is mounted into the pod.
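You can verify the ConfigMap limitation yourself: any attempt to modify a ConfigMap-mounted file from inside the pod fails. The pod name is a placeholder, the mount path follows the template path used in the script below, and the exact error message depends on the shell:

kubectl exec -it <sentinel-pod> -- sh -c 'echo test >> /sentinel-conf-template/sentinel.conf'
# sh: can't create /sentinel-conf-template/sentinel.conf: Read-only file system

The copy step itself, taken from the entrypoint script, looks like this: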

echo "$FILE does not exist."
echo "Copying to destination..."
cp /sentinel-conf-template/sentinel.conf /sentinel-conf-file/sentinel.conf
echo "...file copied. Making file writable..."
chown redis:redis /sentinel-conf-file/sentinel.conf
chmod +x /sentinel-conf-file/sentinel.conf
sed -i "s/\$REDIS_MASTER/$REDIS_MASTER/g" /sentinel-conf-file/sentinel.conf
sed -i "s/\$BIND_ADDR/$BIND_ADDR/g" /sentinel-conf-file/sentinel.conf
echo "...Done"

- When the sentinel starts after having been initialised once: the configuration file already exists and must not be overwritten with new content (because it holds the information about the current master). For this case we simply check whether the file exists; if it does, we only make sure it is writable and leave its contents untouched.

FILE=/sentinel-conf-file/sentinel.conf
if test -f "$FILE"; then
    echo "$FILE exists. Making files writable..."
    chown redis:redis /sentinel-conf-file/sentinel.conf
    chmod +x /sentinel-conf-file/sentinel.conf
    echo "...files are writable now. Done"
fi

Combining both cases, we get the sentinel entrypoint script:

#!/bin/sh


echo "Give permissions to the redis-sentinel configuration file directory..."
chmod -R 0777 $SENTINEL_CONFIG_FILE_DIR
echo "...permissions assigned."


FILE=/sentinel-conf-file/sentinel.conf
if test -f "$FILE"; then
    echo "$FILE exists. Making files writable..."
    chown redis:redis /sentinel-conf-file/sentinel.conf
    chmod +x /sentinel-conf-file/sentinel.conf
    echo "...files are writable now. Done"
else 
    echo "$FILE does not exist."
    echo "Copying to destination..."
    cp /sentinel-conf-template/sentinel.conf /sentinel-conf-file/sentinel.conf
    echo "...file copied. Making file writable..."
    chown redis:redis /sentinel-conf-file/sentinel.conf
    chmod +x /sentinel-conf-file/sentinel.conf
    echo "...files are writable now. Replacing placeholders with values..."
    sed -i "s/\$SENTINEL_QUORUM/$SENTINEL_QUORUM/g" /sentinel-conf-file/sentinel.conf
    sed -i "s/\$SENTINEL_DOWN_AFTER/$SENTINEL_DOWN_AFTER/g" /sentinel-conf-file/sentinel.conf
    sed -i "s/\$SENTINEL_FAILOVER/$SENTINEL_FAILOVER/g" /sentinel-conf-file/sentinel.conf
    sed -i "s/\$REDIS_MASTER/$REDIS_MASTER/g" /sentinel-conf-file/sentinel.conf
    sed -i "s/\$BIND_ADDR/$BIND_ADDR/g" /sentinel-conf-file/sentinel.conf
    sed -i "s/\$SENTINEL_SVC_IP/$SENTINEL_SVC_IP/g" /sentinel-conf-file/sentinel.conf
    sed -i "s/\$SENTINEL_SVC_PORT/$SENTINEL_SVC_PORT/g" /sentinel-conf-file/sentinel.conf
    sed -i "s/\$SENTINEL_RESOLVE_HOSTNAMES/$SENTINEL_RESOLVE_HOSTNAMES/g" /sentinel-conf-file/sentinel.conf
    sed -i "s/\$SENTINEL_ANNOUNCE_HOSTNAMES/$SENTINEL_ANNOUNCE_HOSTNAMES/g" /sentinel-conf-file/sentinel.conf
    echo "...Done"
fi


echo "Starting redis sentinel process now."
redis-server /sentinel-conf-file/sentinel.conf  --sentinel

The sentinel.conf template looks like this:

bind $BIND_ADDR
port 26379
dir /tmp
sentinel monitor mymaster $REDIS_MASTER 6379 $SENTINEL_QUORUM
sentinel down-after-milliseconds mymaster $SENTINEL_DOWN_AFTER
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster $SENTINEL_FAILOVER
sentinel announce-ip $SENTINEL_SVC_IP
sentinel announce-port $SENTINEL_SVC_PORT
sentinel resolve-hostnames $SENTINEL_RESOLVE_HOSTNAMES
sentinel announce-hostnames $SENTINEL_ANNOUNCE_HOSTNAMES
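The placeholders in this template are substituted by the entrypoint script from environment variables defined on the sentinel deployment. The exact values live in the repo's yamls; an illustrative set, assuming the naming used in this article, could look like this:

# Illustrative values only - the real ones are set in the sentinel deployment yamls.
SENTINEL_QUORUM=2                 # majority of 3 sentinels
SENTINEL_DOWN_AFTER=5000          # ms before a sentinel considers the master down
SENTINEL_FAILOVER=10000           # failover timeout in ms
REDIS_MASTER=svc-redis-instance-1 # ClusterIP service of the initial master
BIND_ADDR=0.0.0.0
SENTINEL_SVC_IP=svc-redis-sentinel-instance-1   # this sentinel's own service
SENTINEL_SVC_PORT=26379
SENTINEL_RESOLVE_HOSTNAMES=yes    # needed because addresses are announced by name
SENTINEL_ANNOUNCE_HOSTNAMES=yes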

To build the sentinel image:

cd sentinel/docker
docker build -t <image name> .

Creating the sentinel cluster:

  • Each of the sentinel instances needs a persistent volume mounted at "/sentinel-conf-file/" so that its configuration (including the current master) survives if the pod goes down. A quick way to verify this is shown after the commands below.
kubectl apply -f redis/redis-sentinel-1.yml
kubectl apply -f redis/redis-sentinel-2.yml
kubectl apply -f redis/redis-sentinel-3.yml
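After the sentinels come up, each one should have rewritten its copy of sentinel.conf on the persistent volume with the state it discovered; you can peek at it like this (the auto-generated directive names vary slightly between Redis versions):

kubectl exec -it deployment/deployment-redis-sentinel-instance-1 -- \
  grep sentinel /sentinel-conf-file/sentinel.conf
# Besides the "sentinel monitor mymaster ..." line, expect auto-generated
# "sentinel known-replica ..." and "sentinel known-sentinel ..." entries.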

Testing out the sentinel cluster:

  • Identify the master in the Redis cluster using these commands.
kubectl exec -it deployment/deployment-redis-instance-1 -- redis-cli -c info replication | grep role

kubectl exec -it deployment/deployment-redis-instance-2 -- redis-cli -c info replication | grep role

kubectl exec -it deployment/deployment-redis-instance-3 -- redis-cli -c info replication | grep role

  • Verify that each of the sentinels also identifies the same master.
kubectl exec -it deployment/deployment-redis-sentinel-instance-1 -- redis-cli -p 26379 -c SENTINEL get-master-addr-by-name mymaster

kubectl exec -it deployment/deployment-redis-sentinel-instance-2 -- redis-cli -p 26379 -c SENTINEL get-master-addr-by-name mymaster

kubectl exec -it deployment/deployment-redis-sentinel-instance-3 -- redis-cli -p 26379 -c SENTINEL get-master-addr-by-name mymaster
  • If they all match, the sentinel cluster is working correctly. (A failover test is sketched below.)
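To go further, you can simulate a master failure and watch the sentinels promote a follower. The sketch below assumes deployment-redis-instance-1 currently holds the master role; allow the configured down-after and failover timeouts to elapse before querying again:

# Temporarily take the current master down (assumes instance 1 is the master).
kubectl scale deployment deployment-redis-instance-1 --replicas=0

# After a short wait, any sentinel should report a different master address.
kubectl exec -it deployment/deployment-redis-sentinel-instance-2 -- \
  redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster

# Bring the old master back; the sentinels will reconfigure it as a follower.
kubectl scale deployment deployment-redis-instance-1 --replicas=1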

Architecture of the proxy cluster:

  • The proxy asks any sentinel which Redis instance is currently the master and then forwards all Redis operations to that master.
  • Since the proxy can talk to any sentinel, we use a headless service that has all the sentinels as its endpoints.
  • Every proxy instance behaves identically, so to keep the proxy layer highly available we run multiple instances of the proxy and create a headless service with the proxies as endpoints. This is the service (svc-redis-sentinel-proxy below) that the application uses for all Redis operations. A quick way to inspect the sentinel headless service is shown after this list.
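Because the sentinel service is headless, a DNS lookup for it returns one record per sentinel pod instead of a single virtual IP. The service name below is an assumption; substitute whatever name the repo's yamls define:

# Run a throwaway pod to resolve the (assumed) headless sentinel service.
kubectl run -it --rm dns-check --image=busybox --restart=Never -- \
  nslookup svc-redis-sentinel-headless
# Expect one A record per sentinel pod.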

Creating the proxy cluster:

kubectl apply -f proxy/proxy.yml

Testing out the proxy:

kubectl exec -it deployment/deployment-redis-instance-1 -- redis-cli -h svc-redis-sentinel-proxy -p 6379 -c info replication
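Since the proxy always forwards to whichever instance is currently the master, WRITEs sent through it should succeed even after a failover; a quick sanity check using the same service name as above (the key name is arbitrary):

# Write and read back a key through the proxy service.
kubectl exec -it deployment/deployment-redis-instance-1 -- \
  redis-cli -h svc-redis-sentinel-proxy -p 6379 SET ha-test "it works"
kubectl exec -it deployment/deployment-redis-instance-1 -- \
  redis-cli -h svc-redis-sentinel-proxy -p 6379 GET ha-test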

Finally, with the Redis instances, sentinels, and proxies in place, our highly available and fault-tolerant Redis cluster is ready.


I hope you learnt something. Keep exploring.
