Each Master can have multiple Slaves. When a Master goes offline, Redis Cluster elects a new Master from among its Slaves to take over; when the old Master comes back online, it becomes a Slave of the new Master.
StatefulSet
Service & Deployment
For stateful services such as Redis and MySQL, StatefulSet is the preferred approach, and it is the one this article focuses on.
PS: the design model behind StatefulSet covers two kinds of state.
Topology state: the instances of an application are not fully equivalent peers; they must be started in a certain order, e.g. master node A must start before slave node B. If Pods A and B are deleted, they must be recreated in exactly that order, and each new Pod must keep the same network identity as the old one, so that existing clients can reach the new Pod in the same way as before.
Storage state: each instance of the application is bound to its own storage. From the instance's point of view, the data Pod A reads now and the data it reads ten minutes later should be the same, even if Pod A was recreated in the meantime; think of the multiple storage instances of a database.
Masters and Slaves alike run as replicas of the StatefulSet, persist their data through PV/PVC, and a Service is exposed to accept client requests.
Because Pods in K8S can drift between nodes, we need shared storage so that a Pod can reach the same data volume no matter which node it lands on. Here I use NFS as the shared storage; it can be swapped for something else later.
yum -y install nfs-utils rpcbind

vim /etc/exports
/usr/local/kubernetes/redis/pv1 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv2 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv3 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv4 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv5 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv6 0.0.0.0/0(rw,all_squash)

mkdir -p /usr/local/kubernetes/redis/pv{1..6}
chmod 777 /usr/local/kubernetes/redis/pv{1..6}
Later these client entries can be replaced with domain names or wildcards.
Start the services:

systemctl enable nfs
systemctl enable rpcbind
systemctl start nfs
systemctl start rpcbind
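To confirm that the directories are actually being exported, you can query the NFS server itself; a quick sanity check (run on the NFS server):

exportfs -rav              # re-export everything in /etc/exports and list the result
showmount -e localhost     # list the exports the server advertises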
Create six PVs for the PVCs to claim later.
vim pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
spec:
  capacity:
    storage: 200M            # volume size: 200M
  accessModes:
    - ReadWriteMany          # readable and writable by multiple clients
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv3
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv3"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv4
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv4"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv5
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv5"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv6
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv6"
Field notes:
apiVersion: API version
kind: this YAML creates a PV
metadata: metadata
spec.capacity: the capacity of the volume
spec.accessModes: access mode (read/write mode)
spec.nfs: this PV is backed by NFS
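Since the six manifests differ only in their index, pv.yaml can also be generated with a small shell loop; a sketch, assuming you substitute your real NFS server address for the placeholder:

for i in {1..6}; do
cat <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv$i
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv$i"
EOF
echo "---"
done > pv.yaml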
Create the PVs:
kubectl create -f pv.yaml
kubectl get pv    # check the created PVs
Because the Redis configuration may change, we externalize the configuration file through a ConfigMap; that way a configuration change no longer forces us to rebuild the Docker image.
appendonly yes                                   # enable Redis AOF persistence
cluster-enabled yes                              # enable cluster mode
cluster-config-file /var/lib/redis/nodes.conf    # explained below
cluster-node-timeout 5000                        # node timeout in milliseconds
dir /var/lib/redis                               # directory for the AOF persistence files
port 6379                                        # listening port
cluster-config-file: sets the path of the file in which the node configuration is saved. If the file does not exist, each node generates a new ID for itself at startup and stores it in this file; the instance then keeps using the same ID, giving it a unique name within the cluster. Every node records other nodes by this ID rather than by IP or port, because in K8S IP addresses are not fixed, whereas this unique identifier remains constant for the node's whole lifetime. This file is where the node IDs are stored.
Create a ConfigMap named redis-conf:
kubectl create configmap redis-conf --from-file=redis.conf
Check:
[root@rke ~]# kubectl get cm
NAME         DATA   AGE
redis-conf   1      22h
[root@rke ~]# kubectl describe cm redis-conf
Name:         redis-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

Events:  <none>
A headless service is the basis on which a StatefulSet gets stable network identities, so we need to create it up front. Prepare headless-service.yml as follows:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster
Create it:
kubectl create -f headless-service.yml
Check:
[root@k8s-node1 redis]# kubectl get svc redis-service
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis-service   ClusterIP   None         <none>        6379/TCP   53s
As shown, the service is named redis-service and its CLUSTER-IP is None, which marks it as a headless service.
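To see which Pods the headless service actually fronts, you can list its endpoints; once the StatefulSet created below is running, this should show one <PodIP>:6379 entry per ready Redis Pod:

kubectl get endpoints redis-service    # one address per ready Pod matched by the selector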
Now for the core of this article: creating the redis.yml file.
[root@rke ~]# cat /home/docker/redis/redis.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis-app
spec:
  serviceName: "redis-service"
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: "redis"
        command:
        - "redis-server"                 # Redis startup command
        args:
        - "/etc/redis/redis.conf"        # arguments to redis-server; each list item is one argument
        - "--protected-mode"             # allow external access
        - "no"
        # equivalent to: redis-server /etc/redis/redis.conf --protected-mode no
        resources:                       # resources
          requests:                      # requested resources
            cpu: "100m"                  # m means one-thousandth, so 100m = 0.1 CPU
            memory: "100Mi"              # 100Mi of memory
        ports:
        - name: redis
          containerPort: 6379
          protocol: "TCP"
        - name: cluster
          containerPort: 16379
          protocol: "TCP"
        volumeMounts:
        - name: "redis-conf"             # mount the file generated from the ConfigMap
          mountPath: "/etc/redis"        # where to mount it
        - name: "redis-data"             # mount point for the persistent volume
          mountPath: "/var/lib/redis"
      volumes:
      - name: "redis-conf"               # reference the ConfigMap volume
        configMap:
          name: "redis-conf"
          items:
          - key: "redis.conf"            # key in the ConfigMap (the file passed to --from-file)
            path: "redis.conf"
  volumeClaimTemplates:                  # PVC template declaration
  - metadata:
      name: redis-data
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 200M
PodAntiAffinity expresses anti-affinity: it determines which Pods a given Pod must not be co-located with in the same topology domain. It can be used to spread the Pods of one service across different hosts or topology domains, improving the service's stability. matchExpressions specifies that a Redis Pod should preferably not be scheduled onto a node that already carries a Pod labeled app: redis; in other words, nodes already running Redis should, as far as possible, not receive another Redis Pod.
Also, following StatefulSet rules, the hostnames of our six Redis Pods are assigned in sequence as $(statefulset name)-$(ordinal), as shown below:
[root@rke ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE
redis-app-0   1/1     Running   0          40m   10.42.2.17   192.168.1.21    <none>
redis-app-1   1/1     Running   0          40m   10.42.0.15   192.168.1.114   <none>
redis-app-2   1/1     Running   0          40m   10.42.1.13   192.168.1.20    <none>
redis-app-3   1/1     Running   0          40m   10.42.2.18   192.168.1.21    <none>
redis-app-4   1/1     Running   0          40m   10.42.0.16   192.168.1.114   <none>
redis-app-5   1/1     Running   0          40m   10.42.1.14   192.168.1.20    <none>
As shown, the Pods are created one by one in {0..N-1} order. Note that redis-app-1 does not start until redis-app-0 has reached the Running state.
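If you ever recreate the StatefulSet, this ordered startup is easy to observe live; a small sketch using the app: redis label from the manifest above:

kubectl get pods -l app=redis -w    # watch: each Pod appears only after its predecessor is Running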
At the same time, each Pod gets its own DNS name inside the cluster, in the format $(podname).$(service name).$(namespace).svc.cluster.local, i.e.:

redis-app-0.redis-service.default.svc.cluster.local
redis-app-1.redis-service.default.svc.cluster.local
... and so on ...
Within the K8S cluster, the Pods can use these names to talk to each other. We can verify the names with nslookup from a busybox image:
[root@k8s-node1 ~]# kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup redis-app-1.redis-service.default.svc.cluster.local
Server:    10.43.0.10
Address:   10.43.0.10:53

Name:      redis-app-1.redis-service.default.svc.cluster.local
Address:   10.42.0.15

*** Can't find redis-app-1.redis-service.default.svc.cluster.local: No answer

/ # nslookup redis-app-0.redis-service.default.svc.cluster.local
Server:    10.43.0.10
Address:   10.43.0.10:53

Name:      redis-app-0.redis-service.default.svc.cluster.local
Address:   10.42.2.17
As you can see, redis-app-0's IP is 10.42.2.17. If a Redis Pod migrates or restarts (you can test this by manually deleting one), its IP changes, but the Pod's DNS name, SRV records and A records do not.
We can also see that the PVs created earlier have all been bound successfully:
[root@k8s-node1 ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
nfs-pv1   200M       RWX            Retain           Bound    default/redis-data-redis-app-2                           1h
nfs-pv2   200M       RWX            Retain           Bound    default/redis-data-redis-app-3                           1h
nfs-pv3   200M       RWX            Retain           Bound    default/redis-data-redis-app-4                           1h
nfs-pv4   200M       RWX            Retain           Bound    default/redis-data-redis-app-5                           1h
nfs-pv5   200M       RWX            Retain           Bound    default/redis-data-redis-app-0                           1h
nfs-pv6   200M       RWX            Retain           Bound    default/redis-data-redis-app-1                           1h
With the six Redis Pods created, we still need to initialize the cluster using the common redis-trib tool.
Create a CentOS container
The Redis cluster can only be initialized after all of its nodes are up, and baking that initialization logic into the StatefulSet would be complicated and inefficient. Credit to the original project author for the idea used here: create one extra container in K8S dedicated to managing and controlling certain services inside the cluster.
So we start a dedicated CentOS container in which we can install redis-trib and then initialize the Redis cluster. Run:
kubectl run -i --tty centos --image=centos --restart=Never /bin/bash
Once it is up, enter the centos container. The original project installs the basic software environment with:
cat >> /etc/yum.repos.d/epel.repo <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
EOF
Initialize the Redis cluster
First, install redis-trib (the Redis cluster command-line tool):
yum -y install redis-trib.noarch bind-utils-9.9.4-72.el7.x86_64
Then create the cluster, one slave per master:
redis-trib create --replicas 1 \
  `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-1.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-2.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-3.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-4.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-5.redis-service.default.svc.cluster.local`:6379

# create: create a new cluster
# --replicas 1: give each master one slave, for 3 masters and 3 slaves in total
# the remaining arguments are the addresses of the Redis instances
As above, dig +short redis-app-0.redis-service.default.svc.cluster.local translates a Pod's DNS name into an IP; this is needed because redis-trib does not support creating a cluster from domain names.
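For example, resolving a single Pod name from inside the centos container (the IP returned will be whatever your cluster assigned):

dig +short redis-app-0.redis-service.default.svc.cluster.local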
When it finishes gathering the nodes, redis-trib prints a proposed configuration for review; if it looks right, type yes and redis-trib applies the configuration to the cluster:
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
10.42.2.17:6379
10.42.0.15:6379
10.42.1.13:6379
Adding replica 10.42.2.18:6379 to 10.42.2.17:6379
Adding replica 10.42.0.16:6379 to 10.42.0.15:6379
Adding replica 10.42.1.14:6379 to 10.42.1.13:6379
M: 4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379
   slots:0-5460 (5461 slots) master
M: 505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379
   slots:5461-10922 (5462 slots) master
M: 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379
   slots:10923-16383 (5461 slots) master
S: 366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379
   replicates 4676f8913cdcd1e256db432531c80591ae6c5fc3
S: cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379
   replicates 505f3e126882c0c5115885e54f9b361bc7e74b97
S: e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379
   replicates 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f
Can I set the above configuration? (type 'yes' to accept):
After you type yes, cluster creation begins:
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 10.42.2.17:6379)
M: 4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379@16379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379
   slots: (0 slots) slave
   replicates 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f
S: 366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379
   slots: (0 slots) slave
   replicates 4676f8913cdcd1e256db432531c80591ae6c5fc3
M: 505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379
   slots: (0 slots) slave
   replicates 505f3e126882c0c5115885e54f9b361bc7e74b97
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The last line says that all 16384 slots in the cluster are served by at least one master, so the cluster is operating normally.
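As an extra sanity check, redis-trib can re-verify the slot layout against any node; a sketch reusing the same dig trick:

redis-trib check `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379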
At this point our Redis cluster is fully created. Connect to any Redis Pod to verify:
[root@k8s-node1 ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# /usr/local/bin/redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:186
cluster_stats_messages_pong_sent:199
cluster_stats_messages_sent:385
cluster_stats_messages_ping_received:194
cluster_stats_messages_pong_received:186
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:385
127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379@16379 master - 0 1550555011000 3 connected 10923-16383
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 slave 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 0 1550555011512 6 connected
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550555010507 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550555011000 2 connected 5461-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550555011713 5 connected
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550555010000 1 connected 0-5460
We can also look at the Redis data persisted on the NFS server:
[root@rke ~]# tree /usr/local/kubernetes/redis/
/usr/local/kubernetes/redis/
├── pv1
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv2
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv3
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv4
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv5
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
└── pv6
    ├── appendonly.aof
    ├── dump.rdb
    └── nodes.conf

6 directories, 18 files
Earlier we created the headless service that backs the StatefulSet, but it has no cluster IP and therefore cannot serve outside access. So we create one more Service, dedicated to providing access and load balancing for the Redis cluster:
apiVersion: v1
kind: Service
metadata:
  name: redis-access-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    protocol: "TCP"
    port: 6379
    targetPort: 6379
  selector:
    app: redis
    appCluster: redis-cluster
As above, this Service is named redis-access-service, exposes port 6379 inside the K8S cluster, and load-balances across the Pods labeled both app: redis and appCluster: redis-cluster.
After creating it, check:
[root@rke ~]# kubectl get svc redis-access-service -o wide
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE   SELECTOR
redis-access-service   ClusterIP   10.43.40.62   <none>        6379/TCP   47m   app=redis,appCluster=redis-cluster
As shown, every application in the K8S cluster can now reach the Redis cluster via 10.43.40.62:6379. For convenient testing we could also give the Service a NodePort mapping onto the physical machines; that is left for later testing.
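A quick connectivity test through the new Service, e.g. from the centos container, assuming redis-cli has been installed there (say via yum -y install redis):

redis-cli -c -h redis-access-service.default.svc.cluster.local -p 6379 ping    # expect PONG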
With the Redis cluster running on K8S, the thing we care most about is whether its native high-availability mechanism still works. We can pick any Master Pod to test the master-slave failover, e.g. redis-app-2:
[root@rke ~]# kubectl get pods redis-app-2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE
redis-app-2   1/1     Running   0          1h    10.42.1.13   192.168.1.20   <none>
Enter redis-app-2 and check:
[root@rke ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# redis-cli
127.0.0.1:6379> role
1) "master"
2) (integer) 9478
3) 1) 1) "10.42.1.14"
      2) "6379"
      3) "9478"
As shown, it is a master whose slave is 10.42.1.14, i.e. redis-app-5.
Next, manually delete redis-app-2:
[root@rke ~]# kubectl delete pods redis-app-2
pod "redis-app-2" deleted
[root@rke ~]# kubectl get pods redis-app-2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE
redis-app-2   1/1     Running   0          19s   10.42.1.15   192.168.1.20   <none>
As shown, the IP has changed to 10.42.1.15. Enter redis-app-2 again and check:
[root@rke ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# redis-cli
127.0.0.1:6379> ROLE
1) "slave"
2) "10.42.1.14"
3) (integer) 6379
4) "connected"
5) (integer) 9688
As shown, redis-app-2 has become a slave, subordinate to its former slave 10.42.1.14, i.e. redis-app-5.
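We can confirm the promotion from the other side too; a sketch, with the Pod ordinal taken from the listings above:

kubectl exec redis-app-5 -- redis-cli role    # the first element of the reply should now be "master"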
Our cluster currently has six nodes, three masters and three slaves. Let's add two more Pods to grow it to four masters and four slaves.
cat >> /etc/exports <<'EOF'
/usr/local/kubernetes/redis/pv7 192.168.0.0/16(rw,all_squash)
/usr/local/kubernetes/redis/pv8 192.168.0.0/16(rw,all_squash)
EOF
systemctl restart nfs rpcbind

[root@rke ~]# mkdir /usr/local/kubernetes/redis/pv{7..8}
[root@rke ~]# chmod 777 /usr/local/kubernetes/redis/*
[root@rke redis]# cat pv_add.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv7
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.253
    path: "/usr/local/kubernetes/redis/pv7"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv8
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.253
    path: "/usr/local/kubernetes/redis/pv8"
Create the PVs and check them:
[root@rke redis]# kubectl create -f pv_add.yml
persistentvolume/nfs-pv7 created
persistentvolume/nfs-pv8 created
[root@rke redis]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS   REASON   AGE
nfs-pv1   200M       RWX            Retain           Bound       default/redis-data-redis-app-1                           2h
nfs-pv2   200M       RWX            Retain           Bound       default/redis-data-redis-app-2                           2h
nfs-pv3   200M       RWX            Retain           Bound       default/redis-data-redis-app-4                           2h
nfs-pv4   200M       RWX            Retain           Bound       default/redis-data-redis-app-5                           2h
nfs-pv5   200M       RWX            Retain           Bound       default/redis-data-redis-app-0                           2h
nfs-pv6   200M       RWX            Retain           Bound       default/redis-data-redis-app-3                           2h
nfs-pv7   200M       RWX            Retain           Available                                                            7s
nfs-pv8   200M       RWX            Retain           Available                                                            7s
Change the replicas: field in the Redis yml file to 8, then apply the upgrade:
[root@rke redis]# kubectl apply -f redis.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
statefulset.apps/redis-app configured
[root@rke redis]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
redis-app-0   1/1     Running   0          2h
redis-app-1   1/1     Running   0          2h
redis-app-2   1/1     Running   0          19m
redis-app-3   1/1     Running   0          2h
redis-app-4   1/1     Running   0          2h
redis-app-5   1/1     Running   0          2h
redis-app-6   1/1     Running   0          57s
redis-app-7   1/1     Running   0          30s
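Alternatively, instead of editing the file, the StatefulSet can be scaled in place:

kubectl scale statefulset redis-app --replicas=8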
[root@rke redis]# kubectl exec -it centos /bin/bash
[root@centos /]# redis-trib add-node \
  `dig +short redis-app-6.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379
[root@centos /]# redis-trib add-node \
  `dig +short redis-app-7.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379
add-node takes the address of the new node first, followed by the address of any node already in the cluster.
[root@rke redis]# kubectl exec -it redis-app-0 bash
root@redis-app-0:/data# redis-cli
127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.15:6379@16379 slave e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 0 1550564776000 7 connected
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 master - 0 1550564776000 7 connected 10923-16383
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550564777051 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550564776851 2 connected 5461-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550564775000 5 connected
e4697a7ba460ae2979692116b95fbe1f2c8be018 10.42.0.20:6379@16379 master - 0 1550564776549 0 connected
246c79682e6cc78b4c2c28d0e7166baf47ecb265 10.42.2.23:6379@16379 master - 0 1550564776548 8 connected
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550564775000 1 connected 0-5460
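Note that both new nodes joined as masters holding no slots. If you would rather have a new node join directly as a replica, redis-trib also supports a --slave option; a hedged sketch (same dig trick, not run here):

redis-trib add-node --slave \
  `dig +short redis-app-7.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379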
The new masters hold no slots yet, so move some hash slots over to them with reshard:

redis-trib.rb reshard `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379
# you will be prompted for:
#   how many hash slots to move
#   the ID of the new master node to move them to
#   the source nodes ('all' takes slots from every existing master)
127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.15:6379@16379 slave e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 0 1550566162000 7 connected
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 master - 0 1550566162909 7 connected 11377-16383
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550566161600 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550566161902 2 connected 5917-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550566162506 5 connected
246c79682e6cc78b4c2c28d0e7166baf47ecb265 10.42.2.23:6379@16379 master - 0 1550566161600 8 connected 0-453 5461-5916 10923-11376
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550566162000 1 connected 454-5460
For more on Redis cluster operations, see: http://redisdoc.com/topic/cluster-tutorial.html#id10