Tutorial: Deploying a RabbitMQ Cluster on Kubernetes
This article walks through deploying a RabbitMQ cluster on Kubernetes, covering every step from creating the namespace to configuring NFS-backed storage.
References:
- RabbitMQ cluster deployment
- NFS StorageClass creation
Deployment Steps
1. Create the Namespace
kubectl create ns rabbitmq
2. Create RBAC Permissions
Create a file named rabbitmq-rbac.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rmq-cluster
  namespace: rabbitmq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rmq-cluster
  namespace: rabbitmq
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rmq-cluster
  namespace: rabbitmq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rmq-cluster
subjects:
- kind: ServiceAccount
  name: rmq-cluster
  namespace: rabbitmq
Apply the configuration:
kubectl apply -f rabbitmq-rbac.yaml
3. Create the Services
Create a file named rabbitmq-service.yaml with the following content. The first, headless Service (clusterIP: None) gives each pod a stable DNS record for peer discovery; the second exposes the AMQP and management ports outside the cluster:
kind: Service
apiVersion: v1
metadata:
  labels:
    app: rmq-cluster
  name: rmq-cluster
  namespace: rabbitmq
spec:
  clusterIP: None
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
  selector:
    app: rmq-cluster
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: rmq-cluster
    type: LoadBalancer
  name: rmq-cluster-balancer
  namespace: rabbitmq
spec:
  ports:
  - name: http
    port: 15672
    protocol: TCP
    targetPort: 15672
  - name: amqp
    port: 5672
    protocol: TCP
    targetPort: 5672
  selector:
    app: rmq-cluster
  type: NodePort
Apply the configuration:
kubectl apply -f rabbitmq-service.yaml
4. Create the Cluster Secret
Create a file named rabbitmq-secret.yaml with the following content. Replace ERLANG_COOKIE, RABBITMQ_USER, and RABBITMQ_PASS with your own values; all nodes must share the same Erlang cookie to join one cluster:
kind: Secret
apiVersion: v1
metadata:
  name: rmq-cluster-secret
  namespace: rabbitmq
stringData:
  cookie: ERLANG_COOKIE
  password: RABBITMQ_PASS
  url: amqp://RABBITMQ_USER:RABBITMQ_PASS@rmq-cluster-balancer
  username: RABBITMQ_USER
type: Opaque
Apply the configuration:
kubectl apply -f rabbitmq-secret.yaml
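The `stringData` field above accepts plaintext; Kubernetes base64-encodes each value into the Secret's `data` field on write. A minimal Python sketch of that encoding, using the placeholder value from the manifest:

```python
import base64

# Kubernetes stores Secret values base64-encoded under `data`;
# `stringData` is a write-only convenience that accepts plaintext.
def encode_secret_value(plaintext: str) -> str:
    return base64.b64encode(plaintext.encode("utf-8")).decode("ascii")

# The placeholder password as it would appear under `data.password`
print(encode_secret_value("RABBITMQ_PASS"))  # UkFCQklUTVFfUEFTUw==
```

After applying, `kubectl get secret rmq-cluster-secret -n rabbitmq -o yaml` shows the same encoded values.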
5. Create the ConfigMap
Create a file named rabbitmq-configmap.yaml with the following content. The enabled_plugins file turns on the management UI and the Kubernetes peer-discovery plugin; rabbitmq.conf configures cluster formation over the headless Service:
kind: ConfigMap
apiVersion: v1
metadata:
  name: rmq-cluster-config
  namespace: rabbitmq
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  enabled_plugins: |
    [rabbitmq_management,rabbitmq_peer_discovery_k8s].
  rabbitmq.conf: |
    loopback_users.guest = false
    default_user = RABBITMQ_USER
    default_pass = RABBITMQ_PASS
    ## Clustering
    cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
    cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.k8s.address_type = hostname
    cluster_formation.k8s.hostname_suffix = .rmq-cluster.rabbitmq.svc.cluster.local
    cluster_formation.node_cleanup.interval = 10
    cluster_formation.node_cleanup.only_log_warning = true
    cluster_partition_handling = autoheal
    queue_master_locator = min-masters
Apply the configuration:
kubectl apply -f rabbitmq-configmap.yaml
6. Create the StatefulSet
Create a file named rabbitmq-cluster-sts.yaml with the following content:
kind: StatefulSet
apiVersion: apps/v1
metadata:
  labels:
    app: rmq-cluster
  name: rmq-cluster
  namespace: rabbitmq
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rmq-cluster
  serviceName: rmq-cluster
  template:
    metadata:
      labels:
        app: rmq-cluster
    spec:
      containers:
      - args:
        - -c
        - cp -v /etc/rabbitmq/rabbitmq.conf ${RABBITMQ_CONFIG_FILE}; exec docker-entrypoint.sh rabbitmq-server
        command:
        - sh
        env:
        - name: RABBITMQ_DEFAULT_USER
          valueFrom:
            secretKeyRef:
              key: username
              name: rmq-cluster-secret
        - name: RABBITMQ_DEFAULT_PASS
          valueFrom:
            secretKeyRef:
              key: password
              name: rmq-cluster-secret
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              key: cookie
              name: rmq-cluster-secret
        - name: K8S_SERVICE_NAME
          value: rmq-cluster
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: RABBITMQ_USE_LONGNAME
          value: "true"
        - name: RABBITMQ_NODENAME
          value: rabbit@$(POD_NAME).rmq-cluster.$(POD_NAMESPACE).svc.cluster.local
        - name: RABBITMQ_CONFIG_FILE
          value: /var/lib/rabbitmq/rabbitmq.conf
        image: registry.cn-beijing.aliyuncs.com/dotbalo/rabbitmq:3.7-management
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 30
          timeoutSeconds: 10
        name: rabbitmq
        ports:
        - containerPort: 15672
          name: http
          protocol: TCP
        - containerPort: 5672
          name: amqp
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 10
          timeoutSeconds: 10
        volumeMounts:
        - mountPath: /etc/rabbitmq
          name: config-volume
          readOnly: false
        - mountPath: /var/lib/rabbitmq
          name: rabbitmq-storage
          readOnly: false
      serviceAccountName: rmq-cluster
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          items:
          - key: rabbitmq.conf
            path: rabbitmq.conf
          - key: enabled_plugins
            path: enabled_plugins
          name: rmq-cluster-config
        name: config-volume
  volumeClaimTemplates:
  - metadata:
      name: rabbitmq-storage
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: "nfs-storage"
      resources:
        requests:
          storage: 4Gi
Apply the configuration:
kubectl apply -f rabbitmq-cluster-sts.yaml
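Peer discovery depends on predictable DNS names: a StatefulSet names its pods `<name>-<ordinal>`, and the headless Service gives each pod a record `<pod>.<service>.<namespace>.svc.cluster.local`, which matches the `hostname_suffix` in the ConfigMap. An illustrative helper (not part of the deployment) showing the node names the manifests above produce:

```python
# Illustrative helper: derive RabbitMQ node names from the StatefulSet
# pod naming convention and the headless Service DNS records.
def rabbit_node_names(sts_name: str, service: str, namespace: str, replicas: int) -> list:
    suffix = f"{service}.{namespace}.svc.cluster.local"
    return [f"rabbit@{sts_name}-{i}.{suffix}" for i in range(replicas)]

for node in rabbit_node_names("rmq-cluster", "rmq-cluster", "rabbitmq", 3):
    print(node)  # e.g. rabbit@rmq-cluster-0.rmq-cluster.rabbitmq.svc.cluster.local
```

These are the same names `rabbitmqctl cluster_status` should report once all three pods are ready.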
7. Configure the NFS StorageClass
For detailed NFS configuration steps, see: NFS StorageClass configuration guide
7.1 Create the Namespace
kubectl create namespace nfs
7.2 Configure RBAC Permissions
Create a file named rbac.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Apply the RBAC configuration:
kubectl apply -f rbac.yaml
7.3 Create the NFS Provisioner
Create a file named nfs-provisioner.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: nfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: nfs-storage
        - name: NFS_SERVER
          value: <NFS_SERVER_IP> # replace with your actual NFS server IP
        - name: NFS_PATH
          value: <NFS_PATH> # replace with your actual NFS export path
      volumes:
      - name: nfs-client-root
        nfs:
          server: <NFS_SERVER_IP>
          path: <NFS_PATH>
Apply the NFS provisioner configuration:
kubectl apply -f nfs-provisioner.yaml
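For each volume it provisions, nfs-client-provisioner creates a subdirectory on the share, conventionally named `${namespace}-${pvcName}-${pvName}` (the exact layout may differ between provisioner versions). A hypothetical sketch of the resulting path, using placeholder values for the share and PV name:

```python
# Hypothetical sketch: where a provisioned volume lands on the NFS share,
# assuming the ${namespace}-${pvcName}-${pvName} naming convention.
def provisioned_path(nfs_path: str, namespace: str, pvc: str, pv: str) -> str:
    return f"{nfs_path}/{namespace}-{pvc}-{pv}"

# PVC created by the StatefulSet's volumeClaimTemplates for pod 0;
# "/data/storage" and "pvc-0001" are placeholders.
print(provisioned_path("/data/storage", "rabbitmq",
                       "rabbitmq-storage-rmq-cluster-0", "pvc-0001"))
```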
7.4 Create the StorageClass
Create a file named nfs-StorageClass.yaml with the following content. The provisioner field must match the PROVISIONER_NAME set in the Deployment above:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: nfs-storage
parameters:
  archiveOnDelete: "false"
Apply the StorageClass configuration:
kubectl apply -f nfs-StorageClass.yaml
7.5 Set the Default StorageClass
List the StorageClasses in the cluster:
kubectl get storageclass
Mark nfs-storage as the default StorageClass:
kubectl patch storageclass nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
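The `-p` argument is a JSON patch payload; building it programmatically avoids shell-quoting mistakes. A small sketch producing the same document passed to kubectl above:

```python
import json

# Build the same patch payload passed to `kubectl patch -p`:
# an annotation marking the StorageClass as the cluster default.
patch = {
    "metadata": {
        "annotations": {
            "storageclass.kubernetes.io/is-default-class": "true"
        }
    }
}
print(json.dumps(patch, separators=(",", ":")))
```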
7.6 Test the Configuration
Create a test PVC (save it as test-pvc.yaml):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-storage"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
Apply the test PVC:
kubectl apply -f test-pvc.yaml
Check the PVC status:
kubectl get pvc
kubectl describe pvc test-claim
If the PVC binds successfully, the configuration works.
7.7 Troubleshooting
If the PVC stays Pending with the following event:
waiting for a volume to be created, either by external provisioner "nfs-storage" or manually created by system administrator
check the NFS provisioner pod logs:
kubectl logs -n nfs <nfs-client-provisioner-pod-name>
If the logs show this error:
unexpected error getting claim reference: selfLink was empty, can't make reference
there are two ways to fix it.
Reference: CSDN article
Method 1:
1. Locate the kube-apiserver.yaml manifest: find / -name kube-apiserver.yaml
2. Edit the file and add the following entry under spec.containers.args: - --feature-gates=RemoveSelfLink=false
3. Save and exit; the kube-apiserver static pod restarts and the change takes effect.
Method 2:
On Kubernetes 1.26 and later (where the RemoveSelfLink feature gate no longer exists, so Method 1 does not apply), install the provisioner with Helm:
# Add the Helm repository
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
# Create a namespace (optional)
kubectl create ns nfs-sc-default
# Install the NFS provisioner
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set storageClass.name=nfs-sc-default \
  --set nfs.server=192.168.1.102 \
  --set nfs.path=/data/storage \
  --set storageClass.defaultClass=true -n nfs-sc-default
With either method, the NFS StorageClass is configured and the selfLink error resolved.
Summary
Following the steps above, you now have a working RabbitMQ cluster running on Kubernetes.