26. Resource Limits: ResourceQuota
ResourceQuota applies resource limits at the namespace level.
1. Why it is needed
With many projects sharing a cluster, large numbers of unused resources (Pods and so on) accumulate over time. A ResourceQuota caps resource usage: once the configured quota is reached, new objects can no longer be created.
2. ResourceQuota configuration explained
Edit the YAML file:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-test
  labels:
    app: resourcequota
spec:
  hard:
    pods: 50
    requests.cpu: 0.5
    requests.memory: 512Mi
    limits.cpu: 5
    limits.memory: 16Gi
    configmaps: 20
    requests.storage: 40Gi
    persistentvolumeclaims: 20
    replicationcontrollers: 20
    secrets: 20
    services: 50
    services.loadbalancers: "2"
    services.nodeports: "10" # at most 10 Services of type NodePort
General understanding:
- requests defines the minimum amount of resources a container needs to run: the lowest request at which it can start.
- limits defines the upper bound of resources the container may consume.
With a ResourceQuota you can cap memory or CPU for an entire namespace:
- requests.cpu: the maximum total of all CPU requests in this namespace.
- requests.memory: the maximum total of all memory requests in this namespace.
- limits.cpu: the maximum total of all CPU limits in this namespace.
- limits.memory: the maximum total of all memory limits in this namespace.
Resource units:
CPU: 1 core = 1000 millicores (m); 100m equals 0.1 core; the smallest unit is 1m.
Memory: M is base-1000 (100M = 100 * 1000 * 1000 bytes); Mi is base-1024.
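To make requests/limits and the units concrete, here is a minimal sketch (pod name and values are illustrative, not part of the walkthrough) of a container spec whose resources count against such a quota:
apiVersion: v1
kind: Pod
metadata:
  name: unit-demo          # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m          # 100 millicores = 0.1 core
        memory: 128Mi      # Mi is base-1024: 128 * 1024 * 1024 bytes
      limits:
        cpu: 500m          # may burst up to half a core
        memory: 256Mi
Each such pod adds 100m to the namespace's used requests.cpu and 128Mi to its used requests.memory. Note that when a quota constrains requests.* or limits.*, the API server rejects pods that do not set those values explicitly.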
3. Demonstration
A. Create a namespace
[root@k8s-master01 res]#kubectl create ns rq-test
namespace/rq-test created
B. Edit the YAML file
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-test
  labels:
    app: resourcequota
spec:
  hard:
    pods: 2
    #requests.cpu: 0.5
    #requests.memory: 512Mi
    #limits.cpu: 5
    #limits.memory: 16Gi
    configmaps: 2
    #requests.storage: 40Gi
    #persistentvolumeclaims: 20
    #replicationcontrollers: 20
    #secrets: 20
    #services: 50
    #services.loadbalancers: "2"
    #services.nodeports: "10"
# Everything else is commented out to keep the test simple and readable
C. Create the ResourceQuota in the new namespace
[root@k8s-master01 res]#kubectl create -f resourcequota.yaml -n rq-test
resourcequota/resource-test created
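Note that instead of passing -n on every command, the quota can be pinned to its namespace in the manifest itself; a minimal sketch:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-test
  namespace: rq-test      # pin the object to the namespace
spec:
  hard:
    pods: 2
    configmaps: 2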
D. View the ResourceQuota
[root@k8s-master01 res]#kubectl get resourcequotas -n rq-test
NAME            AGE     REQUEST                      LIMIT
resource-test   5m23s   configmaps: 1/2, pods: 0/2
[root@k8s-master01 res]#kubectl get resourcequotas -n rq-test -oyaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    creationTimestamp: "2023-12-19T09:01:49Z"
    labels:
      app: resourcequota
    name: resource-test
    namespace: rq-test
    resourceVersion: "19185867"
    uid: 37c54ff7-4b09-49e1-b46c-1f747bdf347b
  spec:
    hard:
      configmaps: "2"
      pods: "2"
  status:
    hard:
      configmaps: "2"
      pods: "2"
    used:
      configmaps: "1"
      pods: "0"
kind: List
metadata:
  resourceVersion: ""
# pods: 0/2 means at most 2 pods, currently 0
# configmaps: 1/2 means at most 2 ConfigMaps, currently 1
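kubectl describe shows the same used/hard numbers as a table, which is often quicker to read; output should look roughly like this:
kubectl describe resourcequota resource-test -n rq-test
Name:       resource-test
Namespace:  rq-test
Resource    Used  Hard
--------    ----  ----
configmaps  1     2
pods        0     2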
E. Create ConfigMaps
# Check the existing ConfigMaps
[root@k8s-master01 res]#kubectl get cm -n rq-test
NAME               DATA   AGE
kube-root-ca.crt   1      18m
# One more ConfigMap can still be created
[root@k8s-master01 res]#kubectl create cm test-cm --from-file=resourcequota.yaml -n rq-test
configmap/test-cm created
# configmaps has now reached 2/2
[root@k8s-master01 res]#kubectl get resourcequotas -n rq-test
NAME            AGE     REQUEST                      LIMIT
resource-test   9m34s   configmaps: 2/2, pods: 0/2
[root@k8s-master01 res]#kubectl get resourcequotas -n rq-test -oyaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    creationTimestamp: "2023-12-19T09:01:49Z"
    labels:
      app: resourcequota
    name: resource-test
    namespace: rq-test
    resourceVersion: "19187326"
    uid: 37c54ff7-4b09-49e1-b46c-1f747bdf347b
  spec:
    hard:
      configmaps: "2"
      pods: "2"
  status:
    hard:
      configmaps: "2"
      pods: "2"
    used:
      configmaps: "2"
      pods: "0"
kind: List
metadata:
  resourceVersion: ""
# Creating one more ConfigMap fails
[root@k8s-master01 res]#kubectl create cm test-cm2 --from-file=resourcequota.yaml -n rq-test
error: failed to create configmap: configmaps "test-cm2" is forbidden: exceeded quota: resource-test, requested: configmaps=1, used: configmaps=2, limited: configmaps=2
# 2 used, 2 allowed: nothing more can be created
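Quota usage is released as soon as an object is deleted, so freeing a slot makes room for a new object; a quick sketch (not part of the walkthrough above, test-cm2 is reused as an example name):
kubectl delete cm test-cm -n rq-test          # frees one configmaps slot
kubectl create cm test-cm2 --from-file=resourcequota.yaml -n rq-test   # now succeeds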
F. Create Pods
[root@k8s-master01 res]#kubectl create deployment rescouretestdp --image=nginx --replicas=3 -n rq-test
deployment.apps/rescouretestdp created
# Note that only 2 pods were created
[root@k8s-master01 res]#kubectl get po -n rq-test
NAME                             READY   STATUS    RESTARTS   AGE
rescouretestdp-d8cdf6fcf-lcf6c   1/1     Running   0          35s
rescouretestdp-d8cdf6fcf-x74hb   1/1     Running   0          35s
# The Deployment also shows that only 2 of 3 pods were created
[root@k8s-master01 res]#kubectl get deploy -n rq-test
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
rescouretestdp   2/3     2            2           2m16s
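The missing replica is also visible on the ReplicaSet that the Deployment manages; expected output along these lines:
kubectl get rs -n rq-test
NAME                       DESIRED   CURRENT   READY   AGE
rescouretestdp-d8cdf6fcf   3         2         2       2m16s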
G. Update the ResourceQuota
# Change the pod limit to 3
spec:
  hard:
    pods: 3
[root@k8s-master01 res]#kubectl replace -f resourcequota.yaml -n rq-test
[root@k8s-master01 res]#kubectl get -f resourcequota.yaml -n rq-test
NAME            AGE   REQUEST                      LIMIT
resource-test   64m   configmaps: 2/2, pods: 2/3
How do you find out why the third pod was not created?
Production environments are complex and a ResourceQuota is easy to overlook, so knowing how to trace the problem is key.
# 1. Check the Deployment first
[root@k8s-master01 res]#kubectl describe deploy rescouretestdp -n rq-test
......
Normal ScalingReplicaSet 4m55s deployment-controller Scaled up replica set rescouretestdp-d8cdf6fcf to 3
# The Deployment created a ReplicaSet (rescouretestdp-d8cdf6fcf) to manage its pods
# 2. If the Deployment shows no error, check the ReplicaSet it created for errors
[root@k8s-master01 res]#kubectl describe rs rescouretestdp-d8cdf6fcf -n rq-test
......
Warning FailedCreate 2m45s (x9 over 8m30s) replicaset-controller (combined from similar events): Error creating: pods "rescouretestdp-d8cdf6fcf-qx87h" is forbidden: exceeded quota: resource-test, requested: pods=1, used: pods=2, limited: pods=2
# The ReplicaSet event shows why: 1 pod requested, 2 used, 2 allowed, so the pod could not be created
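After the quota was raised to pods: 3 in step G, no manual action is needed: the replicaset-controller periodically retries the failed creation, so the third pod appears on its own once a retry succeeds. A quick verification (illustrative):
kubectl get po -n rq-test                            # all 3 replicas should reach Running
kubectl get resourcequota resource-test -n rq-test   # REQUEST column now shows pods: 3/3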