
Deploying the OpenStack Yoga Cloud Platform on openEuler 22.09 (Part 1)

OpenStack Yoga Deployment

Installing OpenStack

1. Basic Preparation

The deployment uses the classic OpenStack three-node layout: a control node (controller), a compute node (compute), and a storage node (storage). The storage node normally runs only the storage services; when resources are limited it can be omitted and its services deployed on the compute node instead.

First, prepare three openEuler 22.09 environments. The nodes are planned as follows:

VM version          Hostname      IP address
openEuler 22.09     Controller    192.168.110.150
openEuler 22.09     Compute       192.168.110.151
openEuler 22.09     Storage       192.168.110.152

1.1 Configure the yum repositories

Open the /etc/yum.repos.d/openEuler.repo file and check whether the [EPOL] repository is present; if it is not, add the following content:

[root@controller ~]# vi /etc/yum.repos.d/openEuler.repo

[EPOL]

name=EPOL

baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/EPOL/main/$basearch/

enabled=1

gpgcheck=1

gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler

Update the packages from the yum repositories:

[root@controller ~]# yum update

If the update fails with repository errors (the original post shows the error in a screenshot here), the repository URLs need to be updated.

Replace the URLs in the /etc/yum.repos.d/openEuler.repo file with

https://archives.openeuler.openatom.cn/openEuler-22.09/

After replacing them, save the file and exit.
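
As an optional shortcut that is not part of the original steps, the substitution can also be done with sed. This sketch assumes the existing baseurl entries point at https://repo.openeuler.org/openEuler-22.09/; adjust the pattern if your file uses a different host:

[root@controller ~]# cp /etc/yum.repos.d/openEuler.repo /etc/yum.repos.d/openEuler.repo.bak    # back up the original file first

[root@controller ~]# sed -i 's#https://repo.openeuler.org/openEuler-22.09/#https://archives.openeuler.openatom.cn/openEuler-22.09/#g' /etc/yum.repos.d/openEuler.repo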

1.2 Configure host name mapping

Edit the /etc/hosts file on every node and add the following entries:

192.168.110.150 controller

192.168.110.151 compute

192.168.110.152 storage
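
As an optional check that is not in the original steps, the name resolution can be verified on each node:

[root@controller ~]# getent hosts controller compute storage

[root@controller ~]# ping -c 1 compute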

1.3 Configure time synchronization

(1) controller node:

Install the service:

[root@controller ~]# dnf install chrony

Edit the /etc/chrony.conf configuration file and add the following line:

allow 192.168.110.0/24

# specifies which IP addresses are allowed to synchronize their clocks from this node

Restart the service:

[root@controller ~]# systemctl restart chronyd

(2) Other nodes:

Install the service:

[root@compute ~]# dnf install chrony

[root@storage ~]# dnf install chrony

Edit the /etc/chrony.conf configuration file and add the line below; at the same time, comment out the line pool pool.ntp.org iburst so the node does not synchronize its clock from the public internet.

server 192.168.110.150 iburst

This line tells the node to get its time from the controller.

Restart the service:

[root@compute ~]# systemctl restart chronyd

[root@storage ~]# systemctl restart chronyd

After the configuration is done, run chronyc sources on the non-controller nodes. Output similar to the following indicates that the node is successfully synchronizing its clock from the controller.
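
The original post shows the expected output as a screenshot; a representative example (the exact values will differ on your machines) looks roughly like this, where the ^* marker shows that the controller has been selected as the time source:

[root@compute ~]# chronyc sources

MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   6   377    30    +15us[  +38us] +/-   52ms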

1.4 Install the database

The database is installed on the control node; MariaDB is recommended.

(1) Install the packages

[root@controller ~]# dnf install mysql-config mariadb mariadb-server python3-PyMySQL

(2) Create the configuration file /etc/my.cnf.d/openstack.cnf with the following content:

[mysqld]

bind-address = 192.168.110.150   # the local (controller) IP address

default-storage-engine = innodb

innodb_file_per_table = on

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

(3) Start the service

[root@controller ~]# systemctl start mariadb
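
Optionally (not listed in the original steps), enable the service so that it also starts automatically at boot:

[root@controller ~]# systemctl enable mariadb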

(4) Initialize the database

[root@controller ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB

      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current

password for the root user. If you've just installed MariaDB, and

haven't set the root password yet, you should just press enter here.

Enter current password for root (enter for none):  # enter the current password here; since the database was just initialized, simply press Enter

OK, successfully used password, moving on...

Setting the root password or using the unix_socket ensures that nobody

can log into the MariaDB root user without the proper authorisation.

You already have your root account protected, so you can safely answer 'n'.

Switch to unix_socket authentication [Y/n] y

Enabled successfully!

Reloading privilege tables..

 ... Success!

You already have your root account protected, so you can safely answer 'n'.

Change the root password? [Y/n] y   # enter y to change the password

New password:                   # be sure to remember this password!

Re-enter new password:

Password updated successfully!

Reloading privilege tables..

 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone

to log into MariaDB without having to have a user account created for

them.  This is intended only for testing, and to make the installation

go a bit smoother.  You should remove them before moving into a

production environment.

Remove anonymous users? [Y/n] y     # enter y to remove the anonymous users

 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This

ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n     # n keeps remote root login enabled; answer y to disable it

 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can

access.  This is also intended only for testing, and should be removed

before moving into a production environment.

Remove test database and access to it? [Y/n] y     # enter y to remove the test database

 - Dropping test database...

 ... Success!

 - Removing privileges on test database...

 ... Success!

Reloading the privilege tables will ensure that all changes made so far

will take effect immediately.

Reload privilege tables now? [Y/n] y   # enter y to reload the privilege tables

 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB

installation should now be secure.

Thanks for using MariaDB!

(5) Verify that you can log in to MariaDB

[root@controller ~]# mysql -uroot -p

Enter password:

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MariaDB connection id is 12

Server version: 10.5.16-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

1.5 Install the message queue

The message queue is installed on the control node; RabbitMQ is recommended.

(1) Install the packages

[root@controller ~]# dnf install rabbitmq-server

(2) Start the service

[root@controller ~]# systemctl start rabbitmq-server

(3) Create the OpenStack user. RABBIT_PASS is the password the OpenStack services use to log in to the message queue; it must match the value used later in each service's configuration.

[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS

Adding user "openstack" ...

Done. Don't forget to grant the user permissions to some virtual hosts! See 'rabbitmqctl help set_permissions' to learn more.

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...
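
Optionally, enable RabbitMQ at boot and confirm the user and its permissions with standard rabbitmqctl commands (not part of the original steps):

[root@controller ~]# systemctl enable rabbitmq-server

[root@controller ~]# rabbitmqctl list_users

[root@controller ~]# rabbitmqctl list_permissions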

1.6 Install the caching service

The caching service is installed on the control node; Memcached is recommended.

(1) Install the packages

[root@controller ~]# dnf install memcached python3-memcached

(2) Edit the configuration file /etc/sysconfig/memcached:

PORT="11211"

USER="memcached"

MAXCONN="1024"

CACHESIZE="64"

OPTIONS="-l 127.0.0.1,::1,controller"

(3) Start the service

[root@controller ~]# systemctl start memcached
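
Optionally, enable Memcached at boot and confirm that it is listening on port 11211 (not part of the original steps):

[root@controller ~]# systemctl enable memcached

[root@controller ~]# ss -tlnp | grep 11211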

2. Deploying the Services

2.1 Keystone

Keystone is OpenStack's identity service and the entry point to the whole platform. It provides tenant isolation, user authentication, service discovery, and more, and it must be installed.

(1) Create the keystone database and grant privileges

MariaDB [(none)]> CREATE DATABASE keystone;

Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]>  GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \

    -> IDENTIFIED BY 'KEYSTONE_DBPASS';

Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]>  GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \

    -> IDENTIFIED BY 'KEYSTONE_DBPASS';

Query OK, 0 rows affected (0.001 sec)

# replace 'KEYSTONE_DBPASS' with the password set for the keystone database

(2) Install the packages

[root@controller ~]# dnf install openstack-keystone httpd mod_wsgi

(3) Configure Keystone by editing /etc/keystone/keystone.conf:

[database]

connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]

provider = fernet

# the [database] section configures the database connection

# the [token] section configures the token provider

(4) Sync the database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

(5) Initialize the Fernet key repositories

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

(6) Bootstrap the Identity service

[root@controller ~]# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \

> --bootstrap-admin-url http://controller:5000/v3/ \

> --bootstrap-internal-url http://controller:5000/v3/ \

> --bootstrap-public-url http://controller:5000/v3/ \

> --bootstrap-region-id RegionOne

# ADMIN_PASS is the password being set for the admin user

(7) Configure the Apache HTTP Server

Open httpd.conf (/etc/httpd/conf/httpd.conf) and configure it:

# modify the following directive; add it if it does not exist

ServerName controller

Create a symbolic link

[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

(8) Start the Apache HTTP service

[root@controller ~]# systemctl enable httpd

Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.

[root@controller ~]# systemctl start httpd

(9) Create the environment variable file

[root@controller ~]# cat << EOF >> ~/.admin-openrc

export OS_PROJECT_DOMAIN_NAME=Default

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=ADMIN_PASS

export OS_AUTH_URL=http://controller:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

EOF

# replace ADMIN_PASS with the admin user's password

(10) Create the domain, projects, users, and roles in turn

Install python3-openstackclient:

[root@controller ~]# dnf install python3-openstackclient

Load the environment variables:

[root@controller ~]# source ~/.admin-openrc

Create a domain and the service project (the default domain was already created during keystone-manage bootstrap):

[root@controller ~]# openstack domain create --description "An Example Domain" example

[root@controller ~]# openstack project create --domain default --description "Service Project" service

Create a non-admin project myproject, a user myuser, and a role myrole, then assign the role myrole to myuser on myproject:

[root@controller ~]# openstack project create --domain default --description "Demo Project" myproject

[root@controller ~]# openstack user create --domain default --password-prompt myuser

[root@controller ~]# openstack role create myrole

[root@controller ~]# openstack role add --project myproject --user myuser myrole

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

[root@controller ~]# source ~/.admin-openrc

[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD

Request a token for the admin user:

[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \

> --os-project-domain-name Default --os-user-domain-name Default \

> --os-project-name admin --os-username admin token issue

Request a token for the myuser user:

[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \

> --os-project-domain-name Default --os-user-domain-name Default \

> --os-project-name myproject --os-username myuser token issue

2.2 Glance

Glance is OpenStack's image service. It handles the upload and download of virtual machine and bare-metal images and must be installed.

Controller node

(1) Create the glance database and grant privileges:

MariaDB [(none)]> CREATE DATABASE glance;

Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \

    -> IDENTIFIED BY 'GLANCE_DBPASS';

Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \

    -> IDENTIFIED BY 'GLANCE_DBPASS';

Query OK, 0 rows affected (0.001 sec)

# replace GLANCE_DBPASS with the password set for the glance database

(2) Initialize the Glance resources in Keystone

Load the environment variables:

[root@controller ~]# source ~/.admin-openrc

When creating the user, the command line prompts for a password; enter one of your choosing, and substitute it wherever GLANCE_PASS appears below.

[root@controller ~]# openstack user create --domain default --password-prompt glance

User Password:

Repeat User Password:

(3) Add the glance user to the service project with the admin role:

[root@controller ~]# openstack role add --project service --user glance admin

(4) Create the glance service entity:

[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image

(5) Create the glance API endpoints:

[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292

[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292

[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292

(6) Install the packages

[root@controller ~]# dnf install openstack-glance

(7) Edit the Glance configuration file /etc/glance/glance-api.conf:

[database]

connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]

www_authenticate_uri  = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = glance

password = GLANCE_PASS

[paste_deploy]

flavor = keystone

[glance_store]

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

# the [database] section configures the database connection

# the [keystone_authtoken] and [paste_deploy] sections configure the Identity service access

# the [glance_store] section configures local filesystem storage and the location of the image files

(8) Sync the database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

(9) Start the service

[root@controller ~]# systemctl enable openstack-glance-api.service

Created symlink /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service → /usr/lib/systemd/system/openstack-glance-api.service.

[root@controller ~]# systemctl start openstack-glance-api.service

(10) Verification

Load the environment variables:

[root@controller ~]#  source ~/.admin-openrc

Download a test image.

x86 image:

[root@controller ~]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

ARM image:

[root@controller ~]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img

Upload the image to the Image service:

[root@controller ~]# openstack image create --disk-format qcow2 --container-format bare \

                    --file cirros-0.4.0-x86_64-disk.img --public cirros

Confirm that the image was uploaded and verify its attributes:

[root@controller ~]# openstack image list

2.3 Placement

Placement is OpenStack's resource tracking and scheduling component. It is generally not user-facing and is called by components such as Nova; it is installed on the control node.

Before installing and configuring the Placement service, create the corresponding database, service credentials, and API endpoints.

(1) Create the database

Connect to the database service as root:

[root@controller ~]# mysql -uroot -p

Create the placement database and grant privileges:

MariaDB [(none)]> CREATE DATABASE placement;

Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]>  GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \

    ->   IDENTIFIED BY 'PLACEMENT_DBPASS';

Query OK, 0 rows affected (0.002 sec)

MariaDB [(none)]>  GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \

    ->   IDENTIFIED BY 'PLACEMENT_DBPASS';

Query OK, 0 rows affected (0.001 sec)

# replace PLACEMENT_DBPASS with the placement database access password

(2) Configure the user and endpoints

Source the admin credentials to gain admin CLI access:

[root@controller ~]# source ~/.admin-openrc

Create the placement user and set its password:

[root@controller ~]# openstack user create --domain default --password-prompt placement

User Password:

Repeat User Password:

Add the placement user to the service project with the admin role:

[root@controller ~]# openstack role add --project service --user placement admin

Create the placement service entity:

[root@controller ~]# openstack service create --name placement \

  --description "Placement API" placement

Create the Placement API service endpoints:

[root@controller ~]# openstack endpoint create --region RegionOne \

  placement public http://controller:8778

[root@controller ~]# openstack endpoint create --region RegionOne \

  placement internal http://controller:8778

[root@controller ~]# openstack endpoint create --region RegionOne \

  placement admin http://controller:8778

(3) Install and configure the components

Install the packages:

[root@controller ~]# dnf install openstack-placement-api

Edit the /etc/placement/placement.conf configuration file and complete the following:

In the [placement_database] section, configure the database connection:

[placement_database]

connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

In the [api] and [keystone_authtoken] sections, configure the Identity service access:

[api]

auth_strategy = keystone

[keystone_authtoken]

auth_url = http://controller:5000/v3

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = placement

password = PLACEMENT_PASS

Sync the database to populate the placement schema:

[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement

(4) Restart the service

[root@controller ~]# systemctl restart httpd

(5) Verification

Source the admin credentials to gain admin CLI access:

[root@controller ~]# source ~/.admin-openrc

Run the status check:

[root@controller ~]# placement-status upgrade check

Here the result of the Policy File JSON to YAML Migration check is Failure. This is because JSON-format policy files have been deprecated in Placement since the Wallaby release. As the hint suggests, the oslopolicy-convert-json-to-yaml tool can be used to convert the existing JSON policy file to YAML.

[root@controller ~]# oslopolicy-convert-json-to-yaml  --namespace placement \

> --policy-file /etc/placement/policy.json \

>  --output-file /etc/placement/policy.yaml

[root@controller ~]# mv /etc/placement/policy.json{,.bak}

Note: in this environment the warning can be ignored; it does not affect operation.

Run commands against the Placement API.

Install the osc-placement plugin:

[root@controller ~]# dnf install python3-osc-placement

List the available resource classes and traits:

[root@controller ~]# openstack --os-placement-api-version 1.2 resource class list --sort-column name

[root@controller ~]# openstack --os-placement-api-version 1.6 trait list --sort-column name

2.4 Nova

Nova is OpenStack's compute service, responsible for creating and managing virtual machines.

Controller node

Perform the following steps on the control node.

(1) Create the databases

Connect to the database service as root:

[root@controller ~]# mysql -u root -p

Create the nova_api, nova, and nova_cell0 databases:

MariaDB [(none)]> CREATE DATABASE nova_api;

Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> CREATE DATABASE nova;

Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> CREATE DATABASE nova_cell0;

Query OK, 1 row affected (0.001 sec)

Grant access to the databases:

MariaDB [(none)]>  GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \

    ->   IDENTIFIED BY 'NOVA_DBPASS';

Query OK, 0 rows affected (0.003 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \

    ->   IDENTIFIED BY 'NOVA_DBPASS';

Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \

    ->   IDENTIFIED BY 'NOVA_DBPASS';

Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]>  GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \

    ->   IDENTIFIED BY 'NOVA_DBPASS';

Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \

    ->   IDENTIFIED BY 'NOVA_DBPASS';

Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]>  GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \

    ->   IDENTIFIED BY 'NOVA_DBPASS';

Query OK, 0 rows affected (0.001 sec)

(2) Configure the user and endpoints

Source the admin credentials to gain admin CLI access:

[root@controller ~]# source ~/.admin-openrc

Create the nova user and set its password:

[root@controller ~]# openstack user create --domain default --password-prompt nova

User Password:

Repeat User Password:

Add the nova user to the service project with the admin role:

[root@controller ~]# openstack role add --project service --user nova admin

Create the nova service entity:

[root@controller ~]# openstack service create --name nova \

  --description "OpenStack Compute" compute

Create the Compute API service endpoints:

[root@controller ~]# openstack endpoint create --region RegionOne \

  compute public http://controller:8774/v2.1

[root@controller ~]# openstack endpoint create --region RegionOne \

  compute internal http://controller:8774/v2.1

[root@controller ~]# openstack endpoint create --region RegionOne \

  compute admin http://controller:8774/v2.1

(3) Install and configure the components

Install the packages:

[root@controller ~]# dnf install openstack-nova-api openstack-nova-conductor \

  openstack-nova-novncproxy openstack-nova-scheduler

Edit the /etc/nova/nova.conf configuration file and complete the following:

In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue access, set my_ip to the controller node's management IP, and explicitly define log_dir:

[DEFAULT]

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

my_ip = 192.168.110.150

log_dir = /var/log/nova

In the [api_database] and [database] sections, configure the database connections:

[api_database]

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

In the [api] and [keystone_authtoken] sections, configure the Identity service access:

[api]

auth_strategy = keystone

[keystone_authtoken]

auth_url = http://controller:5000/v3

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = nova

password = NOVA_PASS

In the [vnc] section, enable and configure the remote console access:

[vnc]

enabled = true

server_listen = $my_ip

server_proxyclient_address = $my_ip

In the [glance] section, configure the Image service API address:

[glance]

api_servers = http://controller:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

In the [placement] section, configure the Placement service access:

[placement]

region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:5000/v3

username = placement

password = PLACEMENT_PASS

Sync the databases:

Sync the nova_api database:

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Sync the nova database:

[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

Verify that cell0 and cell1 are registered correctly:

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

(4) Start the services

[root@controller ~]# systemctl enable \

> openstack-nova-api.service \

> openstack-nova-scheduler.service \

>  openstack-nova-conductor.service \

>  openstack-nova-novncproxy.service

[root@controller ~]# systemctl start \

>  openstack-nova-api.service \

>  openstack-nova-scheduler.service \

>  openstack-nova-conductor.service \

>   openstack-nova-novncproxy.service

Compute node

Perform the following steps on the compute node.

(1) Install the packages

[root@compute ~]# dnf install openstack-nova-compute

(2) Edit the /etc/nova/nova.conf configuration file

In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue access, set my_ip to the compute node's management IP, and explicitly define compute_driver, instances_path, and log_dir:

[DEFAULT]

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

my_ip = 192.168.110.151

compute_driver = libvirt.LibvirtDriver

instances_path = /var/lib/nova/instances

log_dir = /var/log/nova

In the [api] and [keystone_authtoken] sections, configure the Identity service access:

[api]

auth_strategy = keystone

[keystone_authtoken]

auth_url = http://controller:5000/v3

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = nova

password = NOVA_PASS

In the [vnc] section, enable and configure the remote console access:

[vnc]

enabled = true

server_listen = $my_ip

server_proxyclient_address = $my_ip

novncproxy_base_url = http://controller:6080/vnc_auto.html

In the [glance] section, configure the Image service API address:

[glance]

api_servers = http://controller:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

In the [placement] section, configure the Placement service access:

[placement]

region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:5000/v3

username = placement

password = PLACEMENT_PASS

(3) Check whether the compute node supports hardware acceleration for virtual machines (x86_64)

When the processor is x86_64, run the following command to check for hardware acceleration support:

[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo

0

If the returned value is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM. Edit the [libvirt] section of /etc/nova/nova.conf:

[libvirt]

virt_type = qemu

If the returned value is 1 or greater, hardware acceleration is supported and no additional configuration is required.

(4) Check whether the compute node supports hardware acceleration for virtual machines (arm64)

When the processor is arm64, run the following command to check for hardware acceleration support:

[root@compute ~]# virt-host-validate

If FAIL is reported, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM.

Edit the [libvirt] section of /etc/nova/nova.conf:

[libvirt]

virt_type = qemu

If PASS is reported, hardware acceleration is supported and no additional configuration is required.

(5) Configure QEMU (arm64 only)

This step is only required when the processor is arm64.

Edit /etc/libvirt/qemu.conf:

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \

         /usr/share/AAVMF/AAVMF_VARS.fd", \

         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \

         /usr/share/edk2/aarch64/vars-template-pflash.raw"]

编辑/etc/qemu/firmware/edk2-aarch64.json

{

    "description": "UEFI firmware for ARM64 virtual machines",

    "interface-types": [

        "uefi"

    ],

    "mapping": {

        "device": "flash",

        "executable": {

            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",

            "format": "raw"

        },

        "nvram-template": {

            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",

            "format": "raw"

        }

    },

    "targets": [

        {

            "architecture": "aarch64",

            "machines": [

                "virt-*"

            ]

        }

    ],

    "features": [

    ],

    "tags": [

    ]

}

(6) Start the services

[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service

Created symlink /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service → /usr/lib/systemd/system/openstack-nova-compute.service.

[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service

Controller node

Perform the following steps on the control node.

(1) Add the compute node to the OpenStack cluster

Source the admin credentials to gain admin CLI access:

[root@controller ~]# source ~/.admin-openrc

Confirm that the nova-compute service is recognized in the database:

[root@controller ~]# openstack compute service list --service nova-compute

Discover the compute node and add it to the cell database:

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Verification

List the service components to verify that each process started and registered successfully:

[root@controller ~]# openstack compute service list

List the API endpoints in the Identity service to verify connectivity with the Identity service:

[root@controller ~]# openstack catalog list

List the images in the Image service to verify connectivity with the Image service:

[root@controller ~]# openstack image list

Check that the cells are working correctly and that the other prerequisites are in place:

[root@controller ~]# nova-status upgrade check

2.5 Neutron

Neutron is OpenStack's networking service. It provides network connectivity and IP address management for the OpenStack environment, letting users create and manage virtual networks, subnets, routers, security groups, and other network resources that give virtual machines (VMs) their networking.

Controller node

(1) Create the neutron database and grant privileges

MariaDB [(none)]>  CREATE DATABASE neutron;

Query OK, 1 row affected (0.008 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';

Query OK, 0 rows affected (0.012 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

Query OK, 0 rows affected (0.001 sec)

(2) Set the environment variables

[root@controller ~]# source ~/.admin-openrc

(3) Create the user and service; remember the password entered when creating the neutron user, since it is what NEUTRON_PASS refers to below:

[root@controller ~]#  openstack user create --domain default --password-prompt neutron

User Password:

Repeat User Password:

[root@controller ~]#  openstack role add --project service --user neutron admin

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

(4) Create the Neutron API service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696

[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696

[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696

(5) Install the packages

[root@controller ~]# dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2

(6) Configure Neutron. Edit /etc/neutron/neutron.conf:

[database]

connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = true

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = neutron

password = NEUTRON_PASS

[nova]

auth_url = http://controller:5000

auth_type = password

project_domain_name = Default

user_domain_name = Default

region_name = RegionOne

project_name = service

username = nova

password = NOVA_PASS

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Configure ML2. The exact configuration can be adjusted to your needs; a provider network with linuxbridge is used here.

Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = true

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]

physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]

enable_vxlan = true

local_ip = OVERLAY_INTERFACE_IP_ADDRESS

l2_population = true

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
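
PROVIDER_INTERFACE_NAME is the controller's physical NIC used for the provider network, and OVERLAY_INTERFACE_IP_ADDRESS is the IP address used for VXLAN overlay traffic. As a sketch only, assuming the controller uses the interface ens33 (as the compute node does later) and its management IP 192.168.110.150 from the node plan, the section would look like:

[linux_bridge]

physical_interface_mappings = provider:ens33

[vxlan]

enable_vxlan = true

local_ip = 192.168.110.150

l2_population = true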

Configure the Layer-3 agent

Edit /etc/neutron/l3_agent.ini:

[DEFAULT]

interface_driver = linuxbridge

Configure the DHCP agent. Edit /etc/neutron/dhcp_agent.ini:

[DEFAULT]

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

Configure the metadata agent

Edit /etc/neutron/metadata_agent.ini:

[DEFAULT]

nova_metadata_host = controller

metadata_proxy_shared_secret = METADATA_SECRET
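
METADATA_SECRET is a shared secret string of your choosing; the identical value must be used for metadata_proxy_shared_secret in the [neutron] section of /etc/nova/nova.conf below. As an optional suggestion, a random value can be generated like this:

[root@controller ~]# openssl rand -hex 16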

Configure the Nova service to use Neutron by editing /etc/nova/nova.conf:

[neutron]

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = NEUTRON_PASS

service_metadata_proxy = true

metadata_proxy_shared_secret = METADATA_SECRET

Create a symbolic link for /etc/neutron/plugin.ini:

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

(7) Sync the database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

(8) Restart the Nova API service

[root@controller ~]# systemctl restart openstack-nova-api

(9) Start the networking services

[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service \

> neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service \

> neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Compute node

(1) Install the packages

[root@compute ~]# dnf install openstack-neutron-linuxbridge ebtables ipset -y

(2) Configure Neutron

Edit /etc/neutron/neutron.conf:

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = neutron

password = NEUTRON_PASS

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]

physical_interface_mappings = provider:ens33

[vxlan]

enable_vxlan = true

local_ip = 192.168.110.151

l2_population = true

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the nova-compute service to use Neutron by editing /etc/nova/nova.conf:

[neutron]

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = NEUTRON_PASS

(3) Restart the nova-compute service

[root@compute ~]# systemctl restart openstack-nova-compute.service

(4) Start the Neutron linuxbridge agent service

[root@compute ~]# systemctl enable neutron-linuxbridge-agent

Created symlink /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service → /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

[root@compute ~]# systemctl start neutron-linuxbridge-agent
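
As an optional check that is not in the original steps, once both nodes are configured the agents can be listed from the controller; each agent should report as alive:

[root@controller ~]# source ~/.admin-openrc

[root@controller ~]# openstack network agent list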

2.6 Cinder

Cinder is a core OpenStack component that provides the Block Storage service. It lets users create and manage persistent block storage volumes, which can be attached to virtual machines (VMs) as storage devices.

Controller node

(1) Create the cinder database

MariaDB [(none)]> CREATE DATABASE cinder;

Query OK, 1 row affected (0.003 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';

Query OK, 0 rows affected (0.017 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';

Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> exit

(2) Initialize the Keystone resources

[root@controller ~]# source ~/.admin-openrc

[root@controller ~]# openstack user create --domain default --password-prompt cinder

User Password:

Repeat User Password:

[root@controller ~]# openstack role add --project service --user cinder admin

[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

(3) Install the packages

[root@controller ~]# dnf install openstack-cinder-api openstack-cinder-scheduler

(4) Edit the Cinder configuration file /etc/cinder/cinder.conf

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

my_ip = 192.168.110.150

[database]

connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = cinder

password = CINDER_PASS

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp

(5) Sync the database

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

(6) Edit the Nova configuration /etc/nova/nova.conf

[cinder]

os_region_name = RegionOne

(7) Start the services

[root@controller ~]# systemctl restart openstack-nova-api

[root@controller ~]# systemctl start openstack-cinder-api openstack-cinder-scheduler

Storage node

The storage node must have at least one spare disk prepared in advance as Cinder's storage backend.

The steps below assume the storage node already has an unused disk with the device name /dev/sdb.

(1) Install the packages

[root@storage ~]# dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup

(2) Configure the LVM volume group

[root@storage ~]# pvcreate /dev/sdb

[root@storage ~]# vgcreate cinder-volumes /dev/sdb
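
An optional quick check that the volume group was created:

[root@storage ~]# vgs cinder-volumes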

(3) Edit the Cinder configuration /etc/cinder/cinder.conf

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

my_ip = 192.168.110.152

enabled_backends = lvm

glance_api_servers = http://controller:9292

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = cinder

password = CINDER_PASS

[database]

connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[lvm]

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

volume_group = cinder-volumes

target_protocol = iscsi

target_helper = lioadm

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp

(4) Start the service

[root@storage ~]# systemctl start openstack-cinder-volume target

(5) Verification

On the controller node:

[root@controller ~]# source ~/.admin-openrc

[root@controller ~]# openstack volume service list

Create a volume to verify that the configuration is correct:

[root@controller ~]# openstack volume create --size 1 test-volume

[root@controller ~]# openstack volume list

2.7 Horizon

Horizon is OpenStack's web front end (dashboard). It lets users manage the OpenStack cluster from a browser instead of through lengthy CLI commands. Horizon is usually deployed on the control node.

(1) Install the packages

[root@controller ~]#  dnf install openstack-dashboard

(2) Edit the configuration file /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*', ]

OPENSTACK_KEYSTONE_URL =  "http://controller:5000/v3"

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {

'default': {

    'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

    'LOCATION': 'controller:11211',

    }

}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'

POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {

    "identity": 3,

    "image": 2,

    "volume": 3,

}

Restart the service

[root@controller ~]# systemctl restart httpd
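
After the restart, the dashboard should be reachable in a browser at http://<controller IP>/dashboard (per the WEBROOT setting above), for example http://192.168.110.150/dashboard, and the admin user with ADMIN_PASS can be used to log in. An optional quick reachability check from the command line:

[root@controller ~]# curl -I http://controller/dashboard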

