2024年云南省职业院校技能大赛-云计算应用
任务
一、私有云
一)私有云服务搭建(10)
IP 地址设置,主机名设置,磁盘分区,文件系统挂载,web、ftp、dns、ntp 等常用linux服务器安装与配置。
配置OpenStack云平台搭建所需的基本变量,安装部署数据库、Keystone、Glance、Nova、Neutron、Dashboard、Cinder、Swift、Heat、Ceph、Ceilometer、Manila、Zun 等 OpenStack 相关组件,完成私有云平台的搭建部署。
1.集群主机环境配置(1分)
控制节点主机名为controller,设置计算节点主机名为compute;
hosts文件将IP地址映射为主机名。
配置yum源。
使用提供的用户名密码,登录提供的OpenStack私有云平台,在当前租户下,使用CentOS7.9镜像,创建两台云主机,云主机类型使用4vCPU/12G/100G_50G类型。当前租户下默认存在一张网卡,自行创建第二张网卡并连接至controller和compute节点(第二张网卡的网段为10.10.X.0/24,X为工位号,不需要创建路由)。自行检查安全组策略,以确保网络正常通信与ssh连接,然后按以下要求配置服务器:
①设置控制节点主机名为controller,设置计算节点主机名为compute;
②修改hosts文件将IP地址映射为主机名;
③使用提供的http服务地址,分别设置controller节点和compute节点的yum源文件http.repo;
④配置controller节点可以无密钥访问compute节点。
1)controller
# 关闭防火墙与SELinux
systemctl stop firewalld
setenforce 0
# 修改主机名
hostnamectl set-hostname controller
bash
# 添加域名解析
cat >> /etc/hosts << EOF
192.168.100.100 controller
192.168.100.120 compute
EOF
# 获取CentOS镜像(从赛题提供的位置拷贝)
cp -rfv /etc/keystone/CentOS-7-x86_64-DVD-2009.iso .
# 获取iaas镜像(chinaskills_cloud_iaas_v2.0.1.iso,同样从赛题提供的位置获取)
# 挂载并提取centos镜像内容
mkdir /opt/centos
mount CentOS-7-x86_64-DVD-2009.iso /mnt
cp -rvf /mnt/* /opt/centos
umount /mnt
# 获取iaas镜像内容
mkdir /opt/iaas
mount chinaskills_cloud_iaas_v2.0.1.iso /mnt
cp -rvf /mnt/* /opt/iaas
umount /mnt
# 备份原有yum源并配置本地镜像源
mkdir -p /media
mv /etc/yum.repos.d/* /media
vi /etc/yum.repos.d/local.repo
[centos]
name="centos"
baseurl=file:///opt/centos/
gpgcheck=0
enabled=1
[iaas]
name="iaas"
baseurl=file:///opt/iaas/iaas-repo/
gpgcheck=0
enabled=1
yum clean all && yum repolist
# 安装ftp服务,配置yum共享
yum -y install vsftpd
echo "anon_root=/opt" >> /etc/vsftpd/vsftpd.conf
systemctl restart vsftpd
systemctl enable vsftpd
# 配置免密登录
ssh-keygen
ssh-copy-id compute
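# (可选)验证免密登录是否生效,应直接返回compute的主机名而不再提示输入密码
ssh compute hostname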
2)compute
# 关闭防火墙与SELinux
systemctl stop firewalld
setenforce 0
# 修改主机名
hostnamectl set-hostname compute
bash
# 添加域名解析
cat >> /etc/hosts << EOF
192.168.100.100 controller
192.168.100.120 compute
EOF
# 备份原有yum源并配置ftp镜像源
mkdir -p /media
mv /etc/yum.repos.d/* /media
vi /etc/yum.repos.d/ftp.repo
[centos]
name="centos"
baseurl=ftp://controller/centos/
gpgcheck=0
enabled=1
[iaas]
name="iaas"
baseurl=ftp://controller/iaas/iaas-repo/
gpgcheck=0
enabled=1
yum clean all && yum repolist
2.私有云服务搭建(1分)
在控制节点和计算节点上分别安装openstack-iaas软件包。
在控制节点和计算节点上分别安装openstack-iaas软件包,根据表2配置两个节点脚本文件中的基本变量(配置脚本文件为/etc/openstack/openrc.sh)。
1)controller
# 下载软件包
yum -y install openstack-iaas
# 修改openrc.sh脚本
vi /etc/openstack/openrc.sh
# 开始部署
iaas-pre-host.sh
bash
2)compute
# 磁盘分区
fdisk /dev/vdb
n
p
+20G
n
p
+20G
n
p
+5G
w
# 下载软件包
yum -y install openstack-iaas
# 修改openrc.sh脚本
vi /etc/openstack/openrc.sh
# 开始部署
iaas-pre-host.sh
bash
3.数据库及消息服务安装与使用(1分)
在控制节点上安装Mariadb、RabbitMQ等服务,并进行相关操作。
在controller节点上使用iaas-install-mysql.sh 脚本安装Mariadb、Memcached、RabbitMQ等服务。安装服务完毕后,修改/etc/my.cnf文件,完成下列要求:
①设置数据库支持大小写;
②设置数据库缓存innodb表的索引,数据,插入数据时的缓冲为4G;
③设置数据库的log buffer为64MB;
④设置数据库的redo log大小为256MB;
⑤设置数据库的redo log文件组为2。
⑥修改Memcached的相关配置,将内存占用大小设置为512MB,调整最大连接数参数为2048;
⑦调整Memcached的数据摘要算法(hash)为md5;
1)controller
# 使用iaas-install-mysql.sh 脚本安装Mariadb、Memcached、RabbitMQ等服务
iaas-install-mysql.sh
# 修改/etc/my.cnf文件
cp /etc/my.cnf /etc/my.cnf.bak
vi /etc/my.cnf
[mysqld]
# 设置数据库支持大小写
lower_case_table_names=1
# 设置数据库缓存innodb表的索引,数据,插入数据时的缓冲为4G
innodb_buffer_pool_size=4G
# 设置数据库的log buffer为64MB
innodb_log_buffer_size=64M
# 设置数据库的redo log大小为256MB
innodb_log_file_size=256M
# 设置数据库的redo log文件组为2
innodb_log_files_in_group=2
# 修改memcached文件
cp /etc/sysconfig/memcached /etc/sysconfig/memcached.bak
vi /etc/sysconfig/memcached
# 调整最大连接数参数为2048
MAXCONN="2048"
# 将内存占用大小设置为512MB
CACHESIZE="512"
# 调整 Memcached 的数据摘要算法(hash)为 md5
hash_algorithm=md5
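# (可选)修改完成后重启服务使配置生效(假设服务名为mariadb与memcached,以实际环境为准)
systemctl restart mariadb
systemctl restart memcached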
4.Keystone服务安装与使用(1分)
在控制节点上安装Keystone服务并创建用户。
在controller节点上使用iaas-install-keystone.sh脚本安装Keystone服务。
然后创建OpenStack域210Demo,其中包含Engineering与Production项目,在域210Demo中创建组Devops,其中需包含以下用户:
①Robert用户是Engineering项目的用户(member)与管理员(admin),email地址为:Robert@lab.example.com。
②George用户是Engineering项目的用户(member),email地址为:George@lab.example.com。
③William用户是Production项目的用户(member)与管理员(admin),email地址为:William@lab.example.com。
④John用户是Production项目的用户(member),email地址为:John@lab.example.com 。
1)controller
# 安装keystone组件
iaas-install-keystone.sh
# 模拟管理员
source /etc/keystone/admin-openrc.sh
# 创建域、项目、组
openstack domain create 210Demo
openstack group create Devops --domain 210Demo
openstack project create Engineering --domain 210Demo
openstack project create Production --domain 210Demo
# 创建用户
openstack user create Robert --domain 210Demo --project Engineering --email Robert@lab.example.com --password 000000
openstack user create George --domain 210Demo --project Engineering --email George@lab.example.com --password 000000
openstack user create William --domain 210Demo --project Production --email William@lab.example.com --password 000000
openstack user create John --domain 210Demo --project Production --email John@lab.example.com --password 000000
# 将用户添加到组中
openstack group add user Devops Robert George William John
# 创建角色
openstack role create member
openstack role create admin
# 为用户添加角色
openstack role add --project Engineering --user Robert member
openstack role add --project Engineering --user Robert admin
openstack role add --project Engineering --user George member
openstack role add --project Production --user William member
openstack role add --project Production --user William admin
openstack role add --project Production --user John member
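# (可选)验证用户、组及角色分配结果,以下命令仅供参考
openstack user list --domain 210Demo
openstack role assignment list --names | grep -E "Robert|George|William|John"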
5.Glance安装与使用(1分)
在控制节点上安装Glance 服务。上传镜像至平台,并设置镜像启动的要求参数。
在controller节点上使用iaas-install-glance.sh脚本安装glance服务。然后将cirros-0.5.2-x86_64-disk.img镜像上传到OpenStack平台中,镜像名称为cirros-0.5.2。完成后提交控制节点的用户名、密码和IP地址到答题框。
1)controller
# 安装glance
iaas-install-glance.sh
# 获取镜像
# cp -rfv /opt/iaas/images/cirros-0.3.4-x86_64-disk.img .
# 上传镜像(因为没有cirros-0.5.2镜像,示例用cirros-0.3.4代替)
glance image-create --name cirros-0.3.4 --disk-format qcow2 --container-format bare --progress < cirros-0.3.4-x86_64-disk.img
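# (可选)查看镜像是否上传成功
openstack image list
openstack image show cirros-0.3.4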
6.Nova安装(1分)
在控制节点和计算节点上分别安装Nova服务。安装完成后,完成Nova相关配置。
在controller节点和compute节点上分别使用iaas-install-placement.sh脚本、iaas-install-nova-controller.sh脚本、iaas-install-nova-compute.sh脚本安装Nova服务。在OpenStack中,修改相关配置文件,将调度器规则修改为缓存调度器,缓存主机信息,提升调度时间。配置完成后提交controller节点的用户名、密码和IP地址到答题框。
1)controller
# 安装nova
iaas-install-placement.sh
iaas-install-nova-controller.sh
# 修改调度器规则采用缓存调度器,缓存主机信息,提升调度时间
vi /etc/nova/nova.conf
# 在[scheduler]段下修改driver参数
driver=caching_scheduler
# 重启
systemctl restart openstack-nova*
2)compute(先运行)
# 安装nova
iaas-install-nova-compute.sh
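# (可选)在controller节点验证计算服务状态,compute节点的nova-compute应为up
openstack compute service list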
7.Neutron安装(1分)
在控制和计算节点上正确安装Neutron服务。
使用提供的脚本iaas-install-neutron-controller.sh和iaas-install-neutron-compute.sh,在controller和compute节点上安装neutron服务。完成后提交控制节点的用户名、密码和IP地址到答题框。
1)controller
# 安装neutron
iaas-install-neutron-controller.sh
2)compute
# 安装neutron
iaas-install-neutron-compute.sh
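# (可选)在controller节点验证neutron代理状态
openstack network agent list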
8.Dashboard安装(1分)
在控制节点上安装Dashboard服务。安装完成后,将Dashboard中的 Django数据修改为存储在文件中。
在controller节点上使用iaas-install-dashboard.sh脚本安装Dashboard服务。安装完成后,修改相关配置文件,完成下列两个操作:
①使得登录Dashboard平台的时候不需要输入域名;
②将Dashboard中的Django数据修改为存储在文件中。
1)controller
# 安装dashboard
iaas-install-dashboard.sh
# 使得登录Dashboard平台的时候不需要输入域名
echo "OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False" >> /etc/openstack-dashboard/local_settings
# 将Dashboard中的Django数据修改为存储在文件中
echo "SESSION_ENGINE = 'django.contrib.sessions.backends.file'" >> /etc/openstack-dashboard/local_settings
9.Swift安装(1分)
在控制节点和计算节点上分别安装Swift服务。安装完成后,将cirros镜像进行分片存储。
在控制节点和计算节点上分别使用iaas-install-swift-controller.sh和iaas-install-swift-compute.sh脚本安装Swift服务。安装完成后,使用命令创建一个名叫examcontainer的容器,将cirros-0.3.4-x86_64-disk.img镜像上传到examcontainer容器中,并设置分段存放,每一段大小为10M。完成后提交控制节点的用户名、密码和IP地址到答题框。
1)controller
# 安装swift
iaas-install-swift-controller.sh
# 创建examcontainer容器
swift post examcontainer
# 将cirros-0.3.4-x86_64-disk.img镜像上传到examcontainer容器中,设置分段存放,每一段大小为10M
swift upload examcontainer -S 10M cirros-0.3.4-x86_64-disk.img
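# (可选)验证分段上传。使用-S分段上传时,分段对象默认存放在examcontainer_segments容器中,以下命令仅供参考
swift stat examcontainer
swift list examcontainer_segments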
2)compute
# 安装swift
iaas-install-swift-compute.sh
10.Cinder安装与创建硬盘(1分)
注意:实际比赛无此项内容
在控制节点和计算节点分别安装Cinder服务,请在计算节点,对块存储进行扩容操作。
在控制节点和计算节点分别使用iaas-install-cinder-controller.sh、iaas-install-cinder-compute.sh脚本安装Cinder服务,请在计算节点,对块存储进行扩容操作,即在计算节点再分出一个5G的分区,加入到cinder块存储的后端存储中去。完成后提交计算节点的用户名、密码和IP地址到答题框。
1)controller
# 安装cinder
iaas-install-cinder-controller.sh
2)compute
# 安装cinder
iaas-install-cinder-compute.sh
# 再分出一个5G的分区
fdisk /dev/vdb
n
p
+5G
w
# 加入到cinder块存储的后端存储中
pvcreate /dev/vdb4
vgextend cinder-volumes /dev/vdb4
vgs
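# (可选)在controller节点查看cinder存储服务状态
openstack volume service list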
安装完成后,使用浏览器访问OpenStack云平台,访问地址为:http://192.168.100.110/dashboard,domain为demo,用户名为admin,密码为000000。
二)私有云服务运维(25)
OpenStack 云平台运维、OpenStack 相关组件运维。
应用系统迁移上云,云平台排错与系统优化
1.使用Heat模板创建用户(4分)
编写Heat模板create_user.yaml,创建名为heat-user的用户。
使用自己搭建的OpenStack私有云平台,使用heat编写模板(heat_template_version: 2016-04-08)创建名为"chinaskills"的domain,在此domain下创建名为beijing_group的租户,在此租户下创建名为cloud的用户,将此文件命名及保存在/root/user_create.yml,完成后提交controller节点的用户名、密码和IP地址到答题框。(竞赛系统会执行yaml文件,请确保执行的环境)
# 编写user_create.yml
vi /root/user_create.yml
heat_template_version: 2016-04-08
resources:
  chinaskills:
    type: OS::Keystone::Domain
    properties:
      name: chinaskills
      enabled: true
  keystoneProject:
    type: OS::Keystone::Project
    properties:
      domain: {get_resource: chinaskills}
      name: beijing_group
  keystoneUser:
    type: OS::Keystone::User
    properties:
      default_project: {get_resource: keystoneProject}
      name: cloud
      domain: {get_resource: chinaskills}
# 测试(竞赛系统会执行该yaml文件)
openstack stack create -t /root/user_create.yml heat-user
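# (可选)验证domain、project与user是否创建成功
openstack stack list
openstack domain list
openstack project list --domain chinaskills
openstack user list --domain chinaskills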
2.使用Heat模板创建网络(4分)
在自行搭建的OpenStack私有云平台上,编写Heat模板文件,完成网络的创建。
在自行搭建的OpenStack私有云平台上,在/root目录下编写Heat模板create_net.yaml,创建名为Heat-Network网络,选择不共享;创建子网名为Heat-Subnet,子网网段设置为10.20.2.0/24,开启DHCP服务,地址池为10.20.2.20-10.20.2.100。完成后提交控制节点的用户名、密码和IP地址到答题框。(在提交信息前请准备好yaml模板执行的环境)
# 编写create_net.yaml
vi create_net.yaml
heat_template_version: 2014-10-16
description: Generated template
resources:
  network_1:
    type: OS::Neutron::Net
    properties:
      admin_state_up: true
      name: Heat-Network
      shared: false
  subnet_1:
    type: OS::Neutron::Subnet
    properties:
      allocation_pools:
        - end: 10.20.2.100
          start: 10.20.2.20
      cidr: 10.20.2.0/24
      enable_dhcp: true
      host_routes: []
      ip_version: 4
      name: Heat-Subnet
      network_id:
        get_resource: network_1
# 测试
openstack stack create -t create_net.yaml test
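# (可选)验证网络与子网创建结果
openstack stack list
openstack network show Heat-Network
openstack subnet show Heat-Subnet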
3.Linux系统调优(4分)
修改系统配置文件,要求将回写磁盘的时间临时调整为60秒。
Linux系统内存中会存在脏数据,一般系统默认脏数据30秒后会回写磁盘,修改系统配置文件,要求将回写磁盘的时间临时调整为60秒。完成后提交controller节点的用户名、密码和IP地址到答题框。
# 修改回写磁盘的时间
vi /etc/sysctl.conf
vm.dirty_expire_centisecs = 6000
# 刷新信息
sysctl -p
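# (可选)若仅需临时生效,也可不修改配置文件,直接下发内核参数,仅供参考
sysctl -w vm.dirty_expire_centisecs=6000
cat /proc/sys/vm/dirty_expire_centisecs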
4.Glance调优(4分)
在OpenStack平台中,修改相关配置文件,将子进程数量相应的配置修改成2。
在OpenStack平台中,glance-api处理请求的子进程数量默认是0,只有一个主进程,请修改相关配置文件,将子进程数量相应的配置修改成2,这样的话有一个主进程加2个子进程来并发处理请求。完成后提交控制节点的用户名、密码和IP地址到答题框。
# 将子进程数量相应的配置修改成2
vi /etc/glance/glance-api.conf
workers = 2
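# 修改后重启glance-api服务使配置生效(服务名以实际环境为准)
systemctl restart openstack-glance-api
# (可选)确认子进程数量
ps aux | grep glance-api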
5.Ceph部署(4分)
注意:实际比赛此项内容已替换为其他题目
使用提供的ceph.tar.gz软件包,安装ceph服务并完成初始化操作。
使用提供的ceph-14.2.22.tar.gz软件包,在OpenStack平台上创建三台CentOS7.9系统的云主机,使用这三个节点安装ceph服务并完成初始化操作,第一个节点为mon/osd节点,第二、三个节点为osd节点,部署完ceph后,创建vms、images、volumes三个pool。完成后提交第一个节点的用户名、密码和IP地址到答题框。
准备:在OpenStack平台中,使用提供的CentOS7.9镜像创建3个云主机,flavor使用2vCPU/4G/40G硬盘+临时磁盘20G类型
1)ceph-node1
# 修改主机名
hostnamectl set-hostname ceph-node1
bash
# 查看是否有20 GB的空闲硬盘
lsblk
# 修改域名解析
vi /etc/hosts
172.128.11.15 ceph-node1
172.128.11.26 ceph-node2
172.128.11.64 ceph-node3
# 上传ceph包,解压
tar -zxvf ceph-14.2.22.tar.gz -C /opt
# 移除原yum源
mv /etc/yum.repos.d/* /media/
# 配置yum源
vi /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=file:///opt/ceph
gpgcheck=0
enabled=1
# 下载ceph-deploy工具
yum -y install ceph-deploy
# 创建目录
mkdir /etc/ceph
# 进入创建的目录
cd /etc/ceph
# 创建一个集群
ceph-deploy new ceph-node1
# 使用ceph-deploy工具在所有节点上安装Ceph二进制软件包
ceph-deploy install ceph-node1 ceph-node2 ceph-node3 --no-adjust-repos
# 验证是否安装成功
ceph -v
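# 以下为参考步骤:初始化monitor并在三个节点上创建OSD(假设各节点的空闲盘均为/dev/vdb,请按lsblk实际结果调整盘符)
ceph-deploy mon create-initial
ceph-deploy osd create --data /dev/vdb ceph-node1
ceph-deploy osd create --data /dev/vdb ceph-node2
ceph-deploy osd create --data /dev/vdb ceph-node3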
# 安装mgr
ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3
# 检查集群的状态,这个时候Ceph集群并不处于健康状态
ceph -s
# 禁用不安全模式
ceph config set mon auth_allow_insecure_global_id_reclaim false
# 查看集群状态,集群是HEALTH_OK状态
ceph -s
# 开放权限给其他节点,进行灾备处理
ceph-deploy admin ceph-node{1,2,3}
chmod +r /etc/ceph/ceph.client.admin.keyring
# 创建vms、images、volumes三个pool
ceph osd pool create vms 8
ceph osd pool create images 8
ceph osd pool create volumes 8
# 列表Ceph存储池
ceph osd lspools
2)ceph-node2
# 修改主机名
hostnamectl set-hostname ceph-node2
bash
# 验证是否安装成功
ceph -v
3)ceph-node3
# 修改主机名
hostnamectl set-hostname ceph-node3
bash
# 验证是否安装成功
ceph -v
6.Glance对接Ceph存储(5分)
注意:实际比赛无此项内容
修改OpenStack平台中Glance服务的配置文件,将Glance后端存储改为Ceph存储。
在自己搭建的OpenStack平台中修改glance服务的相关配置文件,将glance后端存储改为ceph存储。也就是所有的镜像会上传至ceph的images pool中。通过命令使用cirros-0.3.4-x86_64-disk.img镜像文件上传至云平台中,镜像命名为cirros。完成后提交controller节点的用户名、密码和IP地址到答题框。
# Glance后端存储改为Ceph存储
vi /etc/glance/glance-api.conf
# 以下参数均在[glance_store]段下修改
# 存储的类型格式
stores = rbd
# 默认存储改为rbd
default_store = rbd
# 存储的大小
# rbd_store_chunk_size = 8
# 是ceph存储池的名称
rbd_store_pool = images
# 是ceph的用户名
rbd_store_user = glance
# 是ceph配置文件的路径
rbd_store_ceph_conf = /etc/ceph/ceph.conf
# 重启服务
systemctl restart openstack-glance*
# 上传镜像文件至openstack平台
openstack image create --disk-format qcow2 --min-disk 10 --min-ram 1024 --file cirros-0.3.4-x86_64-disk.img cirros
三)私有云运维开发(15)
使用自动化运维工具 Ansible 完成系统的自动化部署与管理。
基于 OpenStack APIs 与SDK,开发私有云运维程序
1.OpenStack Python运维开发:实现镜像管理(7分)
编写Python代码,实现OpenStack镜像增删查改。
在controller节点的/root目录下创建create_image.py文件,编写python代码对接OpenStack API,完成镜像的上传与查询。
①创建镜像:要求在OpenStack私有云平台中上传镜像cirros-0.3.4-x86_64-disk.img,名字为pvm_image,disk_format为qcow2,container_format为bare。
②查询镜像:查询pvm_image的详细信息,并控制台输出。
# 编写Python代码,实现OpenStack镜像的创建、上传与查询
vi create_image.py
# encoding:utf-8
import requests,json,time
def get_auth_token(controller_ip,domain,name,password):
try:
url = f"http://{controller_ip}:5000/v3/auth/tokens"
body = {
"auth": {
"identity": {
"methods": ['password'],
"password": {
"user": {
"domain": {"name": domain},
"name": name,
"password": password,
}
}
},
"scope": {
"project": {
"domain": {"name": domain},
"name": name
}
}
}
}
headers = {
"Content-Type": "application/json"
}
token = requests.post(url,headers=headers,data=json.dumps(body)).headers['X-Subject-Token']
headers = {
"X-Auth-Token": token
}
print(f"token值为:{token}")
return headers
except Exception as e:
print(f"token获取失败,{e}")
class image_manager:
def __init__(self,handers:dict,resUrl):
self.headers = handers
self.resUrl = resUrl
def create_image(self,image_name,disk_format,container_format):
body = {
"name": image_name,
"disk_format": disk_format,
"container_format": container_format,
}
req = requests.post(self.resUrl,headers=self.headers,data=json.dumps(body)).text
print(f"创建镜像的信息为:{req}")
return req
def get_image_id(self,name):
req = json.loads(requests.get(self.resUrl,headers=self.headers).text)
for image in req['images']:
if image['name'] == name:
return image['id']
return "NONE"
def upload_image(self,id,file_path:str):
url = self.resUrl + "/" + id + "/file"
self.headers["Content-Type"] = "application/octet-stream"
req = requests.put(url,headers=self.headers,data=open(file_path,'rb').read())
if req.status_code == 204:
print("上传镜像成功",req.status_code)
else:
print("上传镜像失败",req.status_code)
print(f"镜像上传信息:{req}")
return req
def get_image(self,id):
url = self.resUrl + "/" + id
        req = json.loads(requests.get(url,headers=self.headers).text)
print(f"获取到的镜像信息为:{req}")
return req
def delete_image(self,id):
url = self.resUrl + "/" + id
req = requests.delete(url,headers=self.headers)
print(f"删除信息:{req}")
return req
if __name__ == "__main__":
controller_ip = "10.26.16.133"
domain = "demo"
name = "admin"
password = "000000"
headers = get_auth_token(controller_ip, domain, name, password)
image_m = image_manager(headers,f"http://{controller_ip}:9292/v2/images")
#create
    create_image = image_m.create_image("pvm_image","qcow2","bare")
#get id
    get_id = image_m.get_image_id("pvm_image")
    print(f"pvm_image镜像ID为:{get_id}")
#upload
upload_image = image_m.upload_image(get_id,"cirros-0.3.4-x86_64-disk.img")
#get image
get_image = image_m.get_image(get_id)
with open("image_demo.json","w")as outfile:
json.dump(get_image,outfile,indent=4)
2.OpenStack用户管理服务接口开发(8分)
使用已建好的OpenStack Python运维开发环境,在/root目录下创建user_manager.py脚本,编写Python代码,端口为5043,IP地址为0.0.0.0,开发出OpenStack用户管理的接口,需要实现的接口如下:
①GET /user/<name>,自行调用查询接口,查询指定名称<name>的用户;返回信息以json格式输出到控制台。
②POST /user/create,自行调用创建接口,创建名为chinaskill的用户,密码为123456,返回信息以json格式输出到控制台。
③DELETE /user/delete/<name>,自行调用删除接口,删除指定名称的用户,若删除成功,返回信息输出到控制台。
# 编写api_user_manager.py
vi api_user_manager.py
# encoding:utf-8
import requests, json, time
import logging
# -----------logger-----------
# get logger
logger = logging.getLogger(__name__)
# level
logger.setLevel(logging.DEBUG)
# format
format = logging.Formatter('%(asctime)s %(message)s')
# to console
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(format)
logger.addHandler(stream_handler)
# -----------logger-----------
def get_auth_token(controller_ip, domain, user, password):
'''
:param controller_ip: openstack master ip address
:param domain: current user's domain
:param user: user name
:param password: user password
:return: keystoen auth Token for current user.
'''
try:
url = "http://controller:5000/v3/auth/tokens"
body = {
"auth": {
"identity": {
"methods": [
"password"
],
"password": {
"user": {
"domain": {
"name": domain
},
"name": user,
"password": password
}
}
},
"scope": {
"project": {
"domain": {
"name": domain
},
"name": user
}
}
}
}
headers = {
"Content-Type": "application/json",
}
print(body)
Token = requests.post(url, data=json.dumps(body), headers=headers).headers['X-Subject-Token']
headers = {
"X-Auth-Token": Token
}
logger.debug(f"获取Token值:{str(Token)}")
return headers
except Exception as e:
logger.error(f"获取Token值失败,请检查访问云主机控制节点IP是否正确?输出错误信息如下:{str(e)}")
exit(0)
# 用户管理
# https://docs.openstack.org/api-ref/identity/v3/index.html#users
class user_manager:
def __init__(self, handers: dict, resUrl: str):
self.headers = handers
self.resUrl = resUrl
def create_users(self, user_name, password: str, desc: str):
"""
create a user with name and password and description.
"""
body = {
"user": {
"name": user_name,
"password": password,
"description": desc,
}
}
status_code = requests.post(self.resUrl, data=json.dumps(body), headers=self.headers).text
logger.debug(f"返回状态:{str(status_code)}")
return status_code
def get_users(self):
"""
get user
"""
status_code = requests.get(self.resUrl, headers=self.headers).text
logger.debug(f"返回状态:{str(status_code)}")
return status_code
def get_user_id(self, user_name):
"""
get user id by name.
"""
result = json.loads(requests.get(self.resUrl, headers=self.headers).text)
user_name = user_name
for item in result['users']:
if item['name'] == user_name:
return item['id']
return "NONE"
def get_user(self, id: str):
"""
get a flavor by id.
"""
api_url = self.resUrl + "/" + id
result = json.loads(requests.get(api_url, headers=self.headers).text)
logger.debug(f"返回信息:{str(result)}")
return result
def delete_user(self, name: str):
"""
delete a user by id.
"""
id = self.get_user_id(name)
api_url = self.resUrl + "/" + id
response = requests.delete(api_url, headers=self.headers)
if response.status_code == 204:
return {"User itemDeletedSuccess": response.status_code}
result = json.loads(response.text)
logger.debug(f"返回信息:{str(result)}")
return result
def update_User_password(self, id: str, original_password: str, new_password: str):
"""
update a flavor desc by id.
"""
self.headers['Content-Type'] = "application/json"
body = {
"user": {
"password": new_password,
"original_password": original_password
}
}
api_url = self.resUrl + "/" + id + "/password"
response = requests.post(api_url, data=json.dumps(body), headers=self.headers)
# Normal response codes: 204 without return text
if response.status_code == 204:
return {"item Update Password Success": response.status_code}
result = json.loads(response.text)
logger.debug(f"返回信息:{str(result)}")
return result
if __name__ == '__main__':
# 1. openstack allinone (controller ) credentials
# host ip address
# controller_ip = "10.24.2.22"
controller_ip = "controller"
# controller_ip = "10.24.2.22"
# domain name
domain = "demo"
# user name
user = "admin"
# user password
password = "000000"
headers = get_auth_token(controller_ip, domain, user, password)
print("headers:", headers)
# get all user
user_m = user_manager(headers, "http://controller:5000/v3/users")
# 1 查询所有
users = user_m.get_users()
print("查询所有users:", users)
# 编写user_manager.py
vi user_manager.py
#encoding:utf-8
import argparse
import api_user_manager
import json
import csv
import yaml
controller_ip = "controller"
domain = "demo"
user = "admin"
password = "000000"
headers = api_user_manager.get_auth_token(controller_ip, domain, user, password)
print("headers:", headers)
user_m = api_user_manager.user_manager(headers, "http://controller:5000/v3/users")
print("-----------begin-----------------")
def define_args(parser):
"""
定义程序支持的args
:return:
"""
# parser = argparse.ArgumentParser()
#增加控制命令(postion 位置参数,必须)
parser.add_argument('command',
help='Resource command name',
type=str)
# parser.add_argument('delete',
# help='delete a resource',
# type=str)
#可选参数(可有可无)
parser.add_argument('-n', '--name', # 可选参数,删除的名称
help='The Name of the resource', # 输入-h展示
type=str)
parser.add_argument('-i', '--input', # 可选参数,删除的名称
help='The input json format text ', # 输入-h展示
type=str)
parser.add_argument('-o', '--output', # 可选参数,删除的名称
help='The output file path ', # 输入-h展示
type=str)
def parse_args(parser):
args = parser.parse_args()
if args.command:
if args.command == "create":
print("create some thing")
create_user(args)
elif args.command == "getall":
print("getall some thing")
getall_users(args)
elif args.command == "get":
print("get some thing")
get_user(args)
elif args.command == "delete":
print("delete some thing")
delete_user(args)
else:
print("Note support command name!")
def create_user(args):
print('Provided command value is %r.' % args.command)
print('Provided input value is %r.' % args.input)
print('Provided output value is %r.' % args.output)
output_file = args.output
# user_name, password: str, desc: str):
user_dict = json.loads(args.input)
result = user_m.create_users(user_dict["name"],user_dict["password"],user_dict["description"])
# 写出json文件
print("--------write to json---------:", result)
print(result)
def delete_user(args):
print('Provided command value is %r.' % args.command)
print('Provided input value is %r.' % args.input)
print('Provided output value is %r.' % args.output)
result = user_m.delete_user(args.name)
print(result)
def getall_users(args):
print('Provided command value is %r.' % args.command)
print('Provided input value is %r.' % args.input)
print('Provided output value is %r.' % args.output)
print(type(args.input))
result = user_m.get_users()
output_file = args.output
# 写出json文件
print("--------result---------")
print(result)
configuration = json.loads(result)
# 写出yaml (dict)
with open(output_file, 'w') as yaml_file:
yaml.dump(configuration, yaml_file)
print(result)
def get_user(args):
print('Provided command value is %r.' % args.command)
print('Provided input value is %r.' % args.input)
print('Provided output value is %r.' % args.output)
id = user_m.get_user_id(args.name)
result = user_m.get_user(id)
output_file = args.output
# 写出json文件
with open(output_file, 'w') as jsonfile:
json.dump(result, jsonfile, indent=4)
print(result)
if __name__ == '__main__':
import sys
print(sys.argv)
parser = argparse.ArgumentParser()
define_args(parser)
parse_args(parser)
# 创建1个用户
python3 user_manager.py create --input '{ "name": "user01", "password": "000000", "description": "description" } '
# 查询给定具体名称的用户查询
python3 user_manager.py get --name user01 -o user.json
# 查询目前admin账号下所有的用户
python3 user_manager.py getall -o openstack_all_user.yaml
# 删除指定的名称的用户
python3 user_manager.py delete --name user01
二、容器云
一)容器云服务搭建(10)
Docker CE 的安装与配置,Docker Compose的安装、配置与使用,私有仓库搭建、配置、管理和使用。
Kubernetes 容器云平台的配置与搭建
1.部署Kubernetes容器云平台(2分)
使用OpenStack私有云平台创建两台云主机,云主机类型使用4vCPU/12G/100G类型,分别作为Kubernetes集群的Master节点和node节点,然后完成Kubernetes集群部署。
1)master
# 修改主机名
hostnamectl set-hostname master
bash
# 挂载镜像
mount -o loop chinaskills_cloud_paas_v2.0.1.iso /mnt/
# 拷贝镜像内容
cp -rfv /mnt/* /opt/
# 取消挂载
umount /mnt/
# 安装kubeeasy
cp /opt/kubeeasy /usr/bin/kubeeasy
# 安装依赖包
kubeeasy install depend \
--host 10.24.2.10,10.24.2.11 \
--user root \
--password Abc@1234 \
--offline-file /opt/dependencies/base-rpms.tar.gz
# 检查SSH连通性
kubeeasy check ssh \
--host 10.24.2.10,10.24.2.11 \
--user root \
--password Abc@1234
# 免密钥配置
kubeeasy create ssh-keygen \
--master 10.24.2.10 \
--worker 10.24.2.11 \
--user root --password Abc@1234
# 在master节点部署Kubernetes集群
kubeeasy install kubernetes \
--master 10.24.2.10 \
--worker 10.24.2.11 \
--user root \
--password Abc@1234 \
--version 1.22.1 \
--offline-file /opt/kubernetes.tar.gz
# 查看集群状态
kubectl cluster-info
# 查看节点负载情况
kubectl top nodes --use-protocol-buffers
# 若集群部署失败或出现故障,可重置集群重新部署
# kubeeasy reset
2)node
# 修改主机名
hostnamectl set-hostname node
bash
2.部署Harbor镜像仓库(2分)
在Kubernetes集群中完成Harbor镜像仓库部署。
1)master
# 安装Harbor
kubeeasy add --registry harbor
# 查看harbor状态
systemctl status harbor
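# (可选)进一步确认Harbor组件运行情况,以下命令仅供参考
kubectl get pods -A | grep -i harbor
helm list -A | grep -i harbor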
3.部署Istio服务网格(3分)
在Kubernetes集群中完成Istio服务网格组件部署。
1)master
# 安装Istio
kubeeasy add --istio istio
# 查看pods
kubectl -n istio-system get pods
# 查看版本
istioctl version
4.部署kubeVirt 虚拟化组件(3分)
在Kubernetes集群中完成kubeVirt虚拟化组件部署。
1)master
# 安装kubeVirt
kubeeasy add --virt kubevirt
# 查看pods
kubectl -n kubevirt get pods
# 验证
kubectl -n kubevirt get deployment
二)容器云服务运维(25)
使用容器实现系统打包,微服务系统搭建, 消息中间件系统搭建,负载均衡应用,数据库访问与管理,容器编排,访问控制。实现容器持续集成工具安装,典型工具链搭建,项目持续集成环境部署。
Kubernetes 平台容器集群的运维,包括Pod、Deployment、Service、Ingress、Istio 服务网格、网络、存储卷及安全。
基于 Kubernetes 的 KubeVirt 服务,实现虚拟机实例创建、迁移、管理与运维
1.容器化部署Node-Exporter(2分)
编写Dockerfile文件构建exporter镜像,要求基于centos完成Node-Exporter服务的安装与配置,并设置服务开机自启。
编写Dockerfile构建monitor-exporter:v1.0镜像,具体要求如下:(需要用到的软件包:Monitor.tar.gz)
①基础镜像:centos:centos7.9.2009;
②使用二进制包node_exporter-0.18.1.linux-amd64.tar.gz安装node-exporter服务;
③声明端口:9100;
④设置服务开机自启。
# 上传压缩包,解压
tar -zxvf Monitor.tar.gz
# 导入CentOS基础镜像
docker load -i Monitor/CentOS_7.9.2009.tar
# 编写Dockerfile
vi Dockerfile-exporter
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
WORKDIR /opt
ADD node_exporter-0.18.1.linux-amd64.tar.gz .
RUN mkdir /usr/local/node_exporter-0.18.1
RUN cp -rf node_exporter-0.18.1.linux-amd64/* /usr/local/node_exporter-0.18.1
EXPOSE 9100
CMD ["/usr/local/node_exporter-0.18.1/node_exporter"]
# 构建镜像
docker build -t monitor-exporter:v1.0 -f Dockerfile-exporter .
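# (可选)运行容器验证node-exporter能否正常提供指标(若宿主机9100端口被占用可调整映射端口,仅供测试)
docker run -d --name test-exporter -p 9100:9100 monitor-exporter:v1.0
curl -s http://localhost:9100/metrics | head -n 5
docker rm -f test-exporter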
2.容器化部署Alertmanager(2分)
编写Dockerfile文件构建alert镜像,要求基于centos:latest完成Alertmanager服务的安装与配置,并设置服务开机自启。
编写Dockerfile构建monitor-alert:v1.0镜像,具体要求如下:(需要用到的软件包:Monitor.tar.gz)
①基础镜像:centos:centos7.9.2009;
②使用二进制包alertmanager-0.19.0.linux-amd64.tar.gz安装Alertmanager服务;
③声明端口:9093、9094;
④设置服务开机自启。
# 编写Dockerfile
vi Dockerfile-alert
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
WORKDIR /opt
ADD alertmanager-0.19.0.linux-amd64.tar.gz .
RUN mkdir /usr/local/alertmanager-0.19.0
RUN cp -rf alertmanager-0.19.0.linux-amd64/* /usr/local/alertmanager-0.19.0
EXPOSE 9093
EXPOSE 9094
CMD ["/usr/local/alertmanager-0.19.0/alertmanager", "--config.file=/opt/alertmanager-0.19.0.linux-amd64/alertmanager.yml"]
# 构建镜像
docker build -t monitor-alert:v1.0 -f Dockerfile-alert .
3.容器化部署Grafana(2分)
编写Dockerfile文件构建grafana镜像,要求基于centos完成Grafana服务的安装与配置,并设置服务开机自启。
编写Dockerfile构建monitor-grafana:v1.0镜像,具体要求如下:(需要用到的软件包:Monitor.tar.gz)
①基础镜像:centos:centos7.9.2009;
②使用二进制包grafana-6.4.1.linux-amd64.tar.gz安装grafana服务;
③声明端口:3000;
④设置grafana服务开机自启。
# 编写Dockerfile
vi Dockerfile-grafana
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
WORKDIR /opt
ADD grafana-6.4.1.linux-amd64.tar.gz .
RUN mkdir /usr/local/grafana-6.4.1
RUN cp -rfv grafana-6.4.1/* /usr/local/grafana-6.4.1
EXPOSE 3000
CMD ["/usr/local/grafana-6.4.1/bin/service"]
# 构建镜像
docker build -t monitor-grafana:v1.0 -f Dockerfile-grafana .
4.容器化部署Prometheus(2分)
编写Dockerfile文件构建prometheus镜像,要求基于centos完成Prometheus服务的安装与配置,并设置服务开机自启。
编写Dockerfile构建monitor-prometheus:v1.0镜像,具体要求如下:(需要用到的软件包:Monitor.tar.gz)
①基础镜像:centos:centos7.9.2009;
②使用二进制包prometheus-2.13.0.linux-amd64.tar.gz安装prometheus服务;
③编辑/data/prometheus/prometheus.yml文件,创建3个任务模板:prometheus、node和alertmanager,并将该文件拷贝到/data/prometheus/目录下;
④声明端口:9090;
⑤设置服务开机自启。
# 编辑prometheus.yml
vi prometheus.yml
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'node'
static_configs:
- targets: ['<node-ip>:9100']
- job_name: 'alertmanager'
static_configs:
- targets: ['localhost:9093']
# 编写Dockerfile
vi Dockerfile-prometheus
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
WORKDIR /opt
ADD prometheus-2.13.0.linux-amd64.tar.gz .
RUN mkdir /usr/local/prometheus-2.13.0
RUN cp -rf prometheus-2.13.0.linux-amd64/* /usr/local/prometheus-2.13.0
RUN mkdir -p /data/prometheus/
COPY prometheus.yml /data/prometheus/
EXPOSE 9090
CMD ["/usr/local/prometheus-2.13.0/prometheus", "--config.file=/data/prometheus/prometheus.yml"]
# 构建镜像
docker build -t monitor-prometheus:v1.0 -f Dockerfile-prometheus .
5.编排部署监控系统(2分)
编写docker-compose.yaml文件,使用镜像exporter、alert、grafana和prometheus完成监控系统的编排部署。
编写docker-compose.yaml文件,具体要求如下:
①容器1名称:monitor-node;镜像:monitor-exporter:v1.0;端口映射:9100:9100;
②容器2名称:monitor-alertmanager;镜像:monitor-alert:v1.0;端口映射:9093:9093、9094:9094;
③容器3名称:monitor-grafana;镜像:monitor-grafana:v1.0;端口映射:3000:3000;
④容器4名称:monitor-prometheus;镜像:monitor-prometheus:v1.0;端口映射:9090:9090。
# 编写docker-compose
vi docker-compose.yaml
---
version: "3.2"
services:
monitor-node:
container_name: monitor-node
image: monitor-exporter:v1.0
ports:
- "9100:9100"
restart: always
monitor-alertmanager:
container_name: monitor-alertmanager
image: monitor-alert:v1.0
ports:
- "9093:9093"
- "9094:9094"
restart: always
monitor-grafana:
container_name: monitor-grafana
image: monitor-grafana:v1.0
ports:
- "3000:3000"
restart: always
monitor-prometheus:
container_name: monitor-prometheus
image: monitor-prometheus:v1.0
ports:
- "9090:9090"
restart: always
...
# 部署容器
docker-compose up -d
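# (可选)查看容器状态并简单验证服务端口,以下命令仅供参考
docker-compose ps
curl -s http://localhost:9100/metrics | head -n 5
curl -s http://localhost:9090/-/healthy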
6.部署GitLab(3分)
将GitLab部署到Kubernetes集群中,设置GitLab服务root用户的密码,使用Service暴露服务,并将提供的项目包导入到GitLab中。
在Kubernetes集群中新建命名空间gitlab-ci,将GitLab部署到该命名空间下,Deployment和Service名称均为gitlab,以NodePort方式将80端口对外暴露为30880,设置GitLab服务root用户的密码为admin@123,将项目包demo-2048.tar.gz导入到GitLab中并命名为demo-2048。(需要用到的软件包:CICD-Runners-demo2048.tar.gz)
# 上传解压文件
tar -zxvf Gitlab-CI.tar.gz
# 导入镜像
ctr -n k8s.io image import gitlab-ci/images/images.tar
docker load < gitlab-ci/images/images.tar
# 新建空间
kubectl create ns gitlab-ci
# 进入命名空间
cd gitlab-ci
# 部署GitLab
vi gitlab-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: gitlab
namespace: gitlab-ci
labels:
name: gitlab
spec:
selector:
matchLabels:
name: gitlab
template:
metadata:
name: gitlab
labels:
name: gitlab
spec:
containers:
- name: gitlab
image: gitlab/gitlab-ce:latest
imagePullPolicy: IfNotPresent
env:
- name: GITLAB_ROOT_PASSWORD
value: Abc@1234
- name: GITLAB_ROOT_EMAIL
value: 123456@qq.com
ports:
- name: http
containerPort: 80
volumeMounts:
- name: gitlab-config
mountPath: /etc/gitlab
- name: gitlab-logs
mountPath: /var/log/gitlab
- name: gitlab-data
mountPath: /var/opt/gitlab
volumes:
- name: gitlab-config
hostPath:
path: /home/gitlab/conf
- name: gitlab-logs
hostPath:
path: /home/gitlab/logs
- name: gitlab-data
hostPath:
path: /home/gitlab/data
# 创建Service以NodePort方式暴露端口
vi gitlab-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: gitlab
namespace: gitlab-ci
labels:
name: gitlab
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: http
nodePort: 30880
selector:
name: gitlab
# 执行yaml创建
kubectl apply -f gitlab-deploy.yaml
kubectl apply -f gitlab-svc.yaml
# 查看pod
kubectl -n gitlab-ci get pods
# 查看GitLab Pod的IP地址
kubectl -n gitlab-ci get pods -owide
# 在集群中自定义hosts添加gitlab Pod的解析
kubectl edit configmap coredns -n kube-system
........
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
## 添加以下字段
hosts {
10.244.1.43 gitlab-7b54df755-6ljtp
fallthrough
}
prometheus :9153
##删除以下三行字段
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
........
# 执行
kubectl -n kube-system rollout restart deploy coredns
# 进入gitlab pod中
kubectl exec -ti -n gitlab-ci gitlab-7b54df755-6ljtp -- bash
# 编辑/etc/gitlab/gitlab.rb,在首行添加(IP为Pod的IP地址)
external_url 'http://10.244.1.43:80'
# 重新加载配置使其生效
gitlab-ctl reconfigure
# 退出
exit
# 查看service
kubectl -n gitlab-ci get svc
# 通过http://10.24.2.14:30880访问GitLab,用户名123456@qq.com,密码Abc@1234,
# 点击“Create a project”按钮
# 点击“Create blank project”创建项目demo-2048,可见等级选择“Public”
# 点击“Create project”,进入项目
# 将源代码推送到项目中
# 进入demo-2048
cd /root/gitlab-ci/demo-2048
# 配置用户名
git config --global user.name "administrator"
# 配置邮箱
git config --global user.email "admin@example.com"
# 移除原远程仓库
git remote remove origin
# 添加新的远程仓库
git remote add origin http://10.24.2.14:30880/root/demo-2048.git
# 将当前目录添加入缓存区
git add .
# 上传文件,并添加描述
git commit -m "initial commit"
# 将缓存区文件上传至远程仓库
git push -u origin drone
# 刷新页面
7.部署GitLab Runner(3分)
将GitLab Runner部署到Kubernetes集群中,为GitLab Runner创建持久化构建缓存目录以加速构建速度,并将其注册到GitLab中。
将GitLab Runner部署到gitlab-ci命名空间下,Release名称为gitlab-runner,为GitLab Runner创建持久化构建缓存目录/home/gitlab-runner/ci-build-cache以加速构建速度,并将其注册到GitLab中。(需要用到的软件包:CICD-Runners-demo2048.tar.gz)
# 登录GitLab管理界面(http://10.24.2.14:30880/admin),然后点击左侧菜单栏中的CI/CD下的Runners
# 点击右侧按钮
# 记录下参数Registration token的值,后续注册Runners时会用到该参数。
# 进入gitlab-ci
cd /root/gitlab-ci/
# 首先创建一个名为gitlab-ci的serviceAccount
cat runner-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: gitlab-ci
namespace: gitlab-ci
# 创建一个名为gitlab-ci的Role
cat runner-role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: gitlab-ci
namespace: gitlab-ci
rules:
- apiGroups: [""]
resources: ["*"]
verbs: ["*"]
# 创建RoleBinding,将Role绑定到serviceAccount gitlab-ci
cat runner-rb.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: gitlab-ci
namespace: gitlab-ci
subjects:
- kind: ServiceAccount
name: gitlab-ci
namespace: gitlab-ci
roleRef:
kind: Role
name: gitlab-ci
apiGroup: rbac.authorization.k8s.io
# 应用资源清单
kubectl apply -f runner-sa.yaml
kubectl apply -f runner-role.yaml
kubectl apply -f runner-rb.yaml
# 获取sa信息
kubectl -n gitlab-ci get sa
# 给default用户赋权
vi default.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: default
labels:
k8s-app: gitlab-default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: default
namespace: gitlab-ci
# 应用资源清单
kubectl apply -f default.yaml
# 解压文件
tar -zxvf gitlab-runner-0.43.0.tgz
# 修改values.yaml文件
vi gitlab-runner/values.yaml
...
## Use the following Kubernetes Service Account name if RBAC is disabled in this Helm chart (see rbac.create)
##
# serviceAccountName: default
serviceAccountName: gitlab-ci #添加,注意缩进格式
...
## The GitLab Server URL (with protocol) that want to register the runner against
## ref: https://docs.gitlab.com/runner/commands/index.html#gitlab-runner-register
##
# gitlabUrl: http://gitlab.your-domain.com/
gitlabUrl: http://10.24.2.14:30880/ #添加,缩进顶格
...
## The Registration Token for adding new Runners to the GitLab Server. This must
## be retrieved from your GitLab Instance.
## ref: https://docs.gitlab.com/ce/ci/runners/index.html
##
# runnerRegistrationToken: ""
runnerRegistrationToken: "riU8c4D2SNkKAv8GS9q_" #添加,缩进顶格
...
config: |
[[runners]]
[runners.kubernetes]
namespace = "{{.Release.Namespace}}"
image = "ubuntu:16.04"
privileged = true #添加,注意缩进格式
# 创建一个PV作为构建缓存的后端存储
cat gitlab-runner/templates/pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: ci-build-cache-pv
namespace: gitlab-ci
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/opt/ci-build-cache"
# 创建一个PVC用于挂载到Pod中使用
cat gitlab-runner/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ci-build-cache-pvc
namespace: gitlab-ci
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
# 编辑values.yaml文件,添加构建缓存信息配置:
vi gitlab-runner/values.yaml
## configure build cache
cibuild:
cache:
pvcName: ci-build-cache-pvc
mountPath: /home/gitlab-runner/ci-build-cache
# 修改configmap.yaml,在注册runner前向config.toml追加构建缓存PVC的挂载配置
vi gitlab-runner/templates/configmap.yaml
cat >>/home/gitlab-runner/.gitlab-runner/config.toml <<EOF
[[runners.kubernetes.volumes.pvc]]
name = "{{.Values.cibuild.cache.pvcName}}"
mount_path = "{{.Values.cibuild.cache.mountPath}}"
EOF
# Start the runner
exec /entrypoint run --user=gitlab-runner \
--working-directory=/home/gitlab-runner
# 部署GitLab Runner
helm -n gitlab-ci install gitlab-runner gitlab-runner
# 列出所有helm图
helm -n gitlab-ci list
# 查看pods信息
kubectl -n gitlab-ci get pods
8.部署GitLab Agent(3分)
将Kubernetes集群添加到GitLab项目中指定名称和命名空间。(需要用到的软件包:CICD-Runners-demo2048.tar.gz)
将Kubernetes集群添加到demo-2048项目中,并命名为kubernetes-agent,项目命名空间选择gitlab-ci。
# 在GitLab Admin界面下,依次点击“Settings”→“Network”,展开“Outbound requests”,勾选“Allow requests to the local network from webhooks and integrations”,并保存。
# 进入demo-2048项目,新建配置文件(.gitlab/agents/<agent-name>/config.yaml),此处为.gitlab/agents/kubernetes-agent/config.yaml
# config.yaml文件格式如下
gitops:
manifest_projects:
- id: gitlab-org/cluster-integration/gitlab-agent
default_namespace: my-ns
paths:
# Read all YAML files from this directory.
- glob: '/team1/app1/*.yaml'
# Read all .yaml files from team2/apps and all subdirectories.
- glob: '/team2/apps/**/*.yaml'
# If 'paths' is not specified or is an empty list, the configuration below is used.
- glob: '/**/*.{yaml,yml,json}'
reconcile_timeout: 3600s
dry_run_strategy: none
prune: true
prune_timeout: 3600s
prune_propagation_policy: foreground
inventory_policy: must_match
# 依次点击左侧菜单栏“Operate”→“Kubernetes clusters”
# 点击“Connect a cluster”,并选择配置文件kubernetes-agent,
# 点击“Register”
# 通过如下命令安装agent,将config.token和config.kasAddress的值修改为上一步页面显示的值
helm upgrade --install kubernetes-agent gitlab-agent-1.1.0.tgz --namespace gitlab-ci --create-namespace --set image.tag=v16.2.0 --set config.token=vTPAASMpwTW-tEQ3NHYc3y5YKCHCFep466q52dgaRCstXyXDzg --set config.kasAddress=ws://10.244.0.23/-/kubernetes-agent/
# 列出所有helm图
helm -n gitlab-ci list
# 查看pod信息
kubectl get pod -n gitlab-ci
# 点击“Close”并刷新界面
# 可以看到,Kubernetes集群已连接成功。
# 在GitLab中开启Container Registry,进入demo-2048项目,依次点击“Settings”→“CI/CD”
# 展开“Variables”,配置镜像仓库相关的参数。
# 添加REGISTRY变量,其值为Harbor仓库地址
# 添加完成
# 然后继续添加变量REGISTRY_IMAGE(demo)、REGISTRY_USER(admin)、REGISTRY_PASSWORD(Harbor12345)、REGISTRY_PROJECT(demo)和HOST(10.24.2.14),添加完成后保存变量
# 修改harbor仓库的helm配置
vi /opt/harbor/values.yaml
…
#将127.0.0.1改为master节点实际IP地址
externalURL: http://10.26.7.197:80
…
# 修改完成后,更新harbor仓库:
helm -n harbor upgrade harbor /opt/harbor
# 登录Harbor仓库新建一个公开项目demo
# 将镜像tomcat:8.5.64-jdk8推送到该项目中:
ctr -n k8s.io images tag docker.io/library/tomcat:8.5.64-jdk8 10.24.2.14/library/tomcat:8.5.64-jdk8
ctr -n k8s.io images push 10.24.2.14/library/tomcat:8.5.64-jdk8 --plain-http=true --user admin:Harbor12345
# 修改containerd配置文件
vi /etc/containerd/config.toml
……
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.com"]
endpoint = ["http://harbor.com"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.24.2.14"]
endpoint = ["http://10.24.2.14"]
# 重新加载服务配置
systemctl daemon-reload
# 重启服务
systemctl restart containerd
9.构建CI/CD(3分)
编写流水线脚本触发自动构建,要求基于GitLab项目完成代码的编译、镜像的构建与推送,并自动发布应用到Kubernetes集群中。
编写流水线脚本.gitlab-ci.yml触发自动构建,具体要求如下:(需要用到的软件包:CICD-Runners-demo2048.tar.gz)
①基于镜像maven:3.6-jdk-8构建项目的drone分支;
②构建镜像的名称:demo:latest;
③将镜像推送到Harbor仓库demo项目中;
④将demo-2048应用自动发布到Kubernetes集群gitlab-ci命名空间下。
# 编写.gitlab-ci.yml:
vi .gitlab-ci.yml
stages:
- build
- release
- review
variables:
MAVEN_OPTS: "-Dmaven.repo.local=/opt/cache/.m2/repository"
maven_build:
image: maven:3.6-jdk-8
stage: build
only:
- drone
script:
- cp -r /opt/repository /opt/cache/.m2/
- mvn clean install -DskipTests=true
- cd target && jar -xf 2048.war
- cp -rfv 2048 /home/gitlab-runner/ci-build-cache
image_build:
image: demo:latest
stage: release
variables:
DOCKER_DRIVER: overlay
DOCKER_HOST: tcp://localhost:2375
#CI_DEBUG_TRACE: "true"
services:
- name: demo:latest
command: ["--insecure-registry=0.0.0.0/0"]
script:
- cp -rfv /home/gitlab-runner/ci-build-cache/2048 .
- sed -i "s/10.24.2.3/$REGISTRY/g" ./Dockerfiles/Dockerfile
- docker build -t "${REGISTRY_IMAGE}:latest" -f ./Dockerfiles/Dockerfile .
- docker tag "${REGISTRY_IMAGE}:latest" "${REGISTRY}/${REGISTRY_PROJECT}/${REGISTRY_IMAGE}:latest"
- docker login -u "${REGISTRY_USER}" -p "${REGISTRY_PASSWORD}" "${REGISTRY}"
- docker push "${REGISTRY}/${REGISTRY_PROJECT}/${REGISTRY_IMAGE}:latest"
deploy_review:
image: kubectl:1.22
stage: review
only:
- drone
script:
- sed -i "s/REGISTRY/$REGISTRY/g" template/demo-2048.yaml
- kubectl apply -f template/
# 流水线脚本编写完成后会自动触发构建,进入demo-2048项目,依次点击“build”→“Pipelines”,可以看到GitLab CI开始执行构建任务了
# 点击“running”可查看构建详情
# 点击流水线的任一阶段可查看构建详情,
# 此时Runner Pod所在的namespace下面也会出现1个新的Pod:
kubectl -n gitlab-ci get pods
# 这个新Pod就是用来执行具体的Job任务的。
# 构建完成后
# 查看新发布的Pod:
kubectl -n gitlab-ci get pods
# 登录Harbor仓库,进入demo项目
# 可以看到镜像已构建并上传成功。
# 查看Service
kubectl -n gitlab-ci get svc
10.服务网格:创建VirtualService(3分)
将Bookinfo应用部署到default命名空间下,为Bookinfo应用创建一个名为reviews的VirtualService,要求来自名为Jason的用户的所有流量将被路由到reviews服务的v2版本。(需要用到的软件包:ServiceMesh.tar.gz)
# 上传解压
tar -xf ServiceMesh.tar.gz
# 导入镜像
docker load -i ServiceMesh/images/image.tar
# 初始化Kubernetes集群
init-cluster
# 查看集群状态
kubectl cluster-info
# 进入ServiceMesh目录
cd ServiceMesh/
# 执行bookinfo
kubectl apply -f bookinfo/bookinfo.yaml
# 查看pods状态
kubectl get pods
# 使用Istio Gateway来实现外部访问
cat bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- "*"
gateways:
- bookinfo-gateway
http:
- match:
- uri:
exact: /productpage
- uri:
prefix: /static
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
route:
- destination:
host: productpage
port:
number: 9080
# 为应用程序定义Ingress网关
kubectl apply -f bookinfo-gateway.yaml
# 确认网关创建完成
kubectl get gateway
# 查看Ingress Gateway
kubectl get svc -n istio-system
# 可以看到Gateway 80端口对应的NodePort端口是22092,在浏览器上通过http://master_IP:22092/productpage访问Bookinfo应用
# 编写目标规则配置文件
cat destination-rule-all.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: productpage
spec:
host: productpage
subsets:
- name: v1
labels:
version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v3
labels:
version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: ratings
spec:
host: ratings
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v2-mysql
labels:
version: v2-mysql
- name: v2-mysql-vm
labels:
version: v2-mysql-vm
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: details
spec:
host: details
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
# 执行yaml
kubectl apply -f destination-rule-all.yaml
# 查询destinationrule
kubectl get destinationrule
# 重新部署productpage微服务,启用Istio
cat bookinfo/bookinfo.yaml | istioctl kube-inject -f - | kubectl apply -l app=productpage -f -
# 检查productpage的Pod并且查看每个副本的两个容器
kubectl get pods
# 在浏览器上登录Grafana(http://master_IP:33000)
# 依次点击左侧导航栏的“Dashboards” →“Manage”进入Dashboard管理界面
# 选择Istio Mesh Dashboard
# 切换到Istio Service Dashboard仪表盘,在Service中选择productpage
# 所有服务启用Istio
cat bookinfo/bookinfo.yaml | istioctl kube-inject -f - | kubectl apply -l app!=productpage -f -
# 查看应用程序Pod,现在每个Pod的两个容器
kubectl get pods
# 再次查看Istio Mesh Dashboard,会发现当前命名空间下所有服务都会出现在服务列表中
# 访问Kiali控制台(http://master_IP:20001)
# 通过可视化界面来查看应用程序的拓扑结构,点击“Graph”按钮,在Namespace下拉菜单中选择命名空间default,然后在Display下拉菜单中选中“Traffic Animation”和“Idle Nodes”复选框,就可以看到实时流量动画。
# reviews微服务v1版本不会调用ratings服务,所以图中ratings服务无流量通过。
# 访问Prometheus控制台(http://master_IP:30090)
# 在Expression输入框中输入要查询的参数,然后点击Execute按钮即可在Console中查看查询结果。
# 查询请求时采用istio_requests_total指标,这是一个标准的Istio指标。
# 如查询命名空间的所有请求:istio_requests_total{destination_service_namespace="default", reporter="destination"}
# 查询reviews微服务的请求:istio_requests_total{destination_service_namespace="default", reporter="destination", destination_service_name="reviews"}
# 部署v2版本的reviews微服务并开启Istio
cat bookinfo/reviews-v2.yaml | istioctl kube-inject -f - | kubectl apply -f -
cat bookinfo/reviews-v3.yaml | istioctl kube-inject -f - | kubectl apply -f -
# 查看Pod
kubectl get pods
# 设置Kubernetes HPA来管理三个版本Deployment的副本
kubectl autoscale deployment reviews-v1 --cpu-percent=50 --min=1 --max=10
kubectl autoscale deployment reviews-v2 --cpu-percent=50 --min=1 --max=10
kubectl autoscale deployment reviews-v3 --cpu-percent=50 --min=1 --max=10
# 为微服务设置默认版本的Virtual Service
vi virtual-service-all-v1.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: productpage
spec:
hosts:
- productpage
http:
- route:
- destination:
host: productpage
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
spec:
hosts:
- ratings
http:
- route:
- destination:
host: ratings
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: details
spec:
hosts:
- details
http:
- route:
- destination:
host: details
subset: v1
# 可以通过应用Virtual Service规则将100%的流量路由到reviews:v3
vi virtual-service-reviews-v3.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v3
# 执行yaml
kubectl apply -f virtual-service-reviews-v3.yaml
# 初始化默认路由规则,将所有流量路由到服务的v1版本
kubectl apply -f virtual-service-all-v1.yaml
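# 按题目要求,将来自用户Jason的流量路由到reviews服务的v2版本。以下写法基于Istio官方Bookinfo示例
# (productpage登录后会在请求头中携带end-user,示例中登录名为jason,请与实际登录名保持一致),仅供参考
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
EOF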
# 改变reviews服务的流量规则,将v1版本的流量镜像到v2版本
vi virtual-service-mirroring.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
weight: 100
mirror:
host: reviews
subset: v2
# 执行
kubectl apply -f virtual-service-mirroring.yaml
# 登录Jaeger控制台(http://master_IP:30686)
# 从仪表盘左边面板的Service下拉列表中选择“productpage.default”,然后点击“Find Traces”,
# 点击位于最上面的最近一次追踪,查看对应最近一次访问/productpage的详细信息,
# 追踪信息由一组Span组成,每个Span对应一个Bookinfo Service。这些Service在执行/productpage请求时被调用,或是Istio内部组件。
三)容器云服务运维开发(15)
基于Kubernetes APIs与SDK,使用Python脚本,完成容器云服务运维任务
1.管理service资源(5分)
Kubernetes Python运维脚本开发,使用Restful APIs方式管理service服务。
使用已经部署完成的Kubernetes两节点云平台,在Master节点安装Python 3.7.3的运行环境与依赖库。
使用python request库和Kubernetes Restful APIs,在/root目录下,创建api_manager_service.py文件,要求编写python代码,代码实现以下任务:
①首先查询服务service,如果service名称"nginx-svc"已经存在,先删除。
②如果不存在“nginx-svc”,则使用service.yaml文件创建服务。
③创建完成后,查询该服务的信息,查询的body部分以json格式的文件输出到当前目录下的service_api_dev.json文件中。
④然后使用service_update.yaml更新服务端口。
⑤完成更新后,查询该服务的信息,信息通过控制台输出,并通过json格式追加到service_api_dev.json文件后。
# 上传k8s_Python_Packages.tar.gz并解压
tar -zxf k8s_Python_Packages.tar.gz
# 解压python包
tar -zxf /root/Python_Packages/python-3.6.8.tar.gz
# 安装python
yum install -y /root/python-3.6.8/packages/*
# 验证是否安装成功
pip3 --version
# 进入Python_Packages
cd Python_Packages
# 使用pip3命令安装开发环境依赖包
pip3 install --no-index --find-links=/root/Python_Packages/ -r /root/Python_Packages/requirements.txt
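# 后续脚本中需要API Server的认证Token("token"占位处),以下为一种参考获取方式:创建具有cluster-admin权限的
# ServiceAccount并读取其Token(1.24之前版本会自动生成对应secret),账号名api-admin仅为示例
kubectl create serviceaccount api-admin -n kube-system
kubectl create clusterrolebinding api-admin --clusterrole=cluster-admin --serviceaccount=kube-system:api-admin
SECRET_NAME=$(kubectl -n kube-system get sa api-admin -o jsonpath='{.secrets[0].name}')
kubectl -n kube-system get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 -d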
# 编写service.yaml
vi service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30083
  type: NodePort
# 编写api_manager_service
vi api_manager_service.py
import requests,time
import logging
import os,yaml,json
#-----------logger-----------
#get logger
logger = logging.getLogger(__name__)
# level
logger.setLevel(logging.DEBUG)
# format
format = logging.Formatter('%(asctime)s %(message)s')
# to console
stream_handler = logging.StreamHandler()
stream_handler .setFormatter(format)
logger.addHandler(stream_handler )
#-----------logger-----------
def get_api_server_token(api_server_token, node_url):
# Bearer token
bearer_token = "bearer " + api_server_token
return bearer_token
class api_service_manager():
def __init__(self,node_url: str, bearer_token: str):
self.node_url = node_url
self.bearer_token = bearer_token
def create_svc(self, yamlFile, namespace: str):
headers = {
"Authorization": self.bearer_token,
"Content-Type": "application/json"
}
with open(yamlFile, encoding="utf8") as f:
body = json.dumps(yaml.safe_load(f))
request_url = self.node_url + "/api/v1/namespaces/" + namespace + "/services"
result = json.loads(requests.post(request_url, data=body, headers=headers, verify=False).text)
logger.debug(f"return_message:{str(result)}")
return result
def get_svc(self,svc_name:str,namespace:str):
headers = {
"Authorization": self.bearer_token,
"pretty" : "true"
}
request_url = self.node_url + "/api/v1/namespaces/" + namespace + "/services/" + svc_name
result = json.loads(requests.get(request_url, headers=headers, verify=False).text)
logger.debug(f"return_message:{str(result)}")
return result
def update_svc(self,svc_name:str,yamlFile,namespace:str):
headers = {
"Authorization": self.bearer_token,
"Content-Type": "application/strategic-merge-patch+json"
}
with open(yamlFile, encoding="utf8") as f:
body = json.dumps(yaml.safe_load(f))
# '{"spec": {"ports": [{"port": 80, "targetPort": 8089}]}}'
# body = {"spec": {"ports": [{"port": 80, "targetPort": 80}]}}
request_url = self.node_url + "/api/v1/namespaces/" + namespace + "/services/" + svc_name
        resp = requests.patch(request_url, data=body, headers=headers, verify=False)
result = json.loads(resp.text)
logger.debug(f"return_message:{str(result)}")
return result
def delete_svc(self,svc_name:str,namespace:str):
headers = {
"Authorization": self.bearer_token
}
request_url = self.node_url + "/api/v1/namespaces/" + namespace + "/services/" + svc_name
result = json.loads(requests.delete(request_url, headers=headers, verify=False).text)
logger.debug(f"return_message:{str(result)}")
return result
def get_api_service_manager():
api_server_token = "token"
cluster_server_url = "https://10.26.7.60:6443"
bearer_token = get_api_server_token(api_server_token, cluster_server_url)
svc_m = api_service_manager(cluster_server_url, bearer_token)
return svc_m
if __name__ == "__main__":
api_server_token = "token"
cluster_server_url = "https://10.26.7.60:6443"
bearer_token = get_api_server_token(api_server_token, cluster_server_url)
svc_m = api_service_manager(cluster_server_url,bearer_token)
namespace = "default"
svc_name = "nginx-svc3"
yaml_file = "python-dev-svc3.yaml"
#1delete svc
svc_m.delete_svc(svc_name,namespace)
#2create svc
print("crate ----------------")
svc_m.create_svc(yaml_file, namespace)
#3get svc
print("get ----------------")
svc_m.get_svc(svc_name,namespace)
#4delete svc
svc_m.delete_svc(svc_name,namespace)
# 编写service_server
vi service_server.py
import requests
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler
import json
PORT = 8888
# request handler
import api_manager_service
svc_m = api_manager_service.get_api_service_manager()
class MyHandler(BaseHTTPRequestHandler):
#Header setting
def _set_headers(self,content_type):
self.send_response(200) # 200 stands for request succeeded
self.send_header("Content-type", content_type) # informs requests of the Media type
self.end_headers()
def do_GET(self):
self._set_headers("application/json")
print(self.path)
# /services/name
if self.path.startswith('/services/'):
name = self.path[10:]
print(name)
result = svc_m.get_svc(name, "default")
# converts dictionary to a JSON string
json_string = json.dumps(result)
self.wfile.write(json_string.encode(encoding='utf_8'))
else:#
json_string = json.dumps({'path': 'home', 'received': 'ok'})
self.wfile.write(json_string.encode(encoding='utf_8'))
def do_POST(self):
print("post")
self._set_headers("application/json")
print(self.path)
if self.path.startswith('/services/'):
filename = self.path[10:]
print("filename---", filename)
result = svc_m.create_svc(filename, "default")
# converts dictionary to a JSON string
print("result---", result)
json_string = json.dumps(result)
self.wfile.write(json_string.encode(encoding='utf_8'))
def do_DELETE(self):
self._set_headers("application/json")
print(self.path)
if self.path.startswith('/services/'):
name = self.path[10:]
print(name)
result = svc_m.delete_svc(name,"default")
def run(server_class=HTTPServer, handler_class=MyHandler, addr="localhost", port=PORT):
server_address = (addr, port)
httpd = server_class(server_address, handler_class)
print(f"Starting httpd server on {addr}:{port}") # f before string allows special formatting
httpd.serve_forever()
#start
if __name__ == "__main__":
print("---------start----------------")
thread = threading.Thread(target=run)
thread.start()
print("---------end----------------")
# 执行py
python3 api_manager_service.py
python3 service_server.py
# 获取服务器信息
curl -X GET 127.0.0.1:8888/services/kubernetes
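# (可选)也可通过POST/DELETE接口创建和删除服务。POST路径为yaml文件名,需在yaml文件所在目录启动服务,仅供参考
curl -X POST 127.0.0.1:8888/services/service.yaml
curl -X DELETE 127.0.0.1:8888/services/nginx-svc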
2.管理Pod服务(5分)
Kubernetes Python运维脚本开发-使用SDK方式,通过Deployment管理Pod服务。
使用已经部署完成的Kubernetes两节点云平台,在Master节点安装Python 3.7.3的运行环境与依赖库。
使用Kubernetes python SDK的“kubernetes”Python库,在/root目录下,创建sdk_manager_deployment.py文件,要求编写python代码,代码实现以下任务:
①首先使用nginx-deployment.yaml文件创建deployment资源。
②创建完成后,查询该服务的信息,查询的body部分通过控制台输出,并以json格式的文件输出到当前目录下的deployment_sdk_dev.json文件中。
# 复制kube配置文件到当前目录
cp /root/.kube/config .
# 导入image.tar
ctr -n k8s.io images import image.tar
# 编写sdk_manager_deployment.py
vi sdk_manager_deployment.py
import json
import yaml
from kubernetes import client, config
class DeploymentManager:
def __init__(self, kubeconfig_path):
config.load_kube_config(config_file=kubeconfig_path)
self.apps_v1 = client.AppsV1Api()
def create_deployment(self, yaml_file):
with open(yaml_file, 'r') as file:
deployment_spec = yaml.safe_load(file)
response = self.apps_v1.create_namespaced_deployment(
body=deployment_spec,
namespace='default'
)
print(f"Deployment {response.metadata.name} created.")
def get_deployment(self, deployment_name):
response = self.apps_v1.read_namespaced_deployment(
name=deployment_name,
namespace='default'
)
print("Deployment info:")
print(json.dumps(response.to_dict(), indent=4))
# Save to file
with open('deployment_sdk_dev.json', 'w') as f:
json.dump(response.to_dict(), f, indent=4)
if __name__ == '__main__':
    manager = DeploymentManager(kubeconfig_path="config")
    manager.create_deployment(yaml_file="nginx-deployment.yaml")
    # deployment名称需与nginx-deployment.yaml中metadata.name一致,此处假设为nginx-deployment
    manager.get_deployment(deployment_name="nginx-deployment")
# 执行脚本
python3 sdk_manager_deployment.py
3.Kubernetes CRD自定义资源的管理封装(5分)
在前面已建好的Kubernetes开发环境云平台上。Kubernetes容器云平台通过CRD机制进行自定义APIs资源拓展,将chinaskill-cloud-*.yaml共5个文件复制到root目录下。参考chinaskill-cloud-11.yaml文件,编写CRD文件“chinaskill-cloud-crd.yaml”,放在root目录下。
说明:Competition CRD命名要求如下:
Kind为 Competition
Plural为competitions
singular为competition
shortNames为cpt
session含义是赛程
content含义为竞赛内容。
使用已建好的Kubernetes Python运维开发环境,在/root目录下创建crd_manager.py脚本。crd_manager.py编写基于Kubernetes SDK 实现Competition CRD的创建、删除与事件变化监听。
crd_manager.py内部实现3个方法:
①实现方法create_crd(),实现对Competition CRD的创建。
②实现方法delete_crd(),实现对Competition CRD的删除。
③实现方法watch_crd_object(),实现对CRD资源的变化事件监听,将监听到Competition CRD被删除,将event信息输出到控制台,并停止监听。
# 编辑chinaskill-cloud-crd
vi chinaskill-cloud-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: competitions.chinaskill.cloud
spec:
group: chinaskill.cloud
versions:
- name: v1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
session:
type: string
content:
type: string
status:
type: object
names:
plural: competitions
singular: competition
kind: Competition
shortNames:
- cpt
scope: Namespaced
# 编辑crd_manager
vi crd_manager.py
import sys
import yaml
from kubernetes import client, config, watch

def create_crd(crd_yaml_file):
    # 加载kube-config
    config.load_kube_config()
    # 创建CRD需使用ApiextensionsV1Api
    apiextensions_v1 = client.ApiextensionsV1Api()
    with open(crd_yaml_file) as f:
        body = yaml.safe_load(f)
    try:
        apiextensions_v1.create_custom_resource_definition(body=body)
        print("Competition CRD created successfully")
    except client.rest.ApiException as e:
        print(f"Exception when calling ApiextensionsV1Api->create_custom_resource_definition: {e}")
def delete_crd():
config.load_kube_config()
apiextensions_v1 = client.ApiextensionsV1Api()
try:
apiextensions_v1.delete_custom_resource_definition("competitions.chinaskill.cloud", body=client.V1DeleteOptions())
print("Competition CRD deleted successfully")
except client.rest.ApiException as e:
print(f"Exception when calling ApiextensionsV1Api->delete_custom_resource_definition: {e}")
def watch_crd_object():
    config.load_kube_config()
    apiextensions_v1 = client.ApiextensionsV1Api()
    w = watch.Watch()
    # 监听CRD资源的变化事件,监听到Competition CRD被删除后输出event并停止监听
    for event in w.stream(apiextensions_v1.list_custom_resource_definition):
        obj = event['object']
        print(f"Event: {event['type']} {obj.metadata.name}")
        if event['type'] == 'DELETED' and obj.metadata.name == "competitions.chinaskill.cloud":
            print(event)
            w.stop()
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python3 crd_manager.py <action> [args]")
print("Actions: create_crd, delete_crd, watch_crd_object")
sys.exit(1)
action = sys.argv[1]
if action == 'create_crd':
        create_crd("chinaskill-cloud-crd.yaml")
elif action == 'delete_crd':
delete_crd()
elif action == 'watch_crd_object':
watch_crd_object()
else:
print(f"Unknown action: {action}")
# 调用
python3 crd_manager.py create_crd
python3 crd_manager.py delete_crd
python3 crd_manager.py watch_crd_object
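# (可选)创建后可验证CRD及自定义资源是否注册成功
kubectl get crd competitions.chinaskill.cloud
kubectl api-resources | grep competition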