NebulaGraph: Traditional Project Releases with Docker
- 1. NebulaGraph service
- 2. Setting up the ES cluster
- 3. Notes
- 3.1 Startup order of the graph database
- 3.2 Fuzzy query failures
1. NebulaGraph service
1. Run the following command on the test server:
docker commit 85b6e2b8xxx xxx_nebula_es:1.0.0.2
After running docker images you should see the new image xxx_nebula_es:1.0.0.2.
This packages the running container on the test server into an image; 85b6e2b8xxx is the container ID (CONTAINER ID) of the service in question.
Then save the new image as a tar archive on the test server:
docker save -o ./xxx_nebula_es.tar xxx_nebula_es:1.0.0.2
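The tarball then has to reach the target server, e.g. over SSH (hypothetical host and path, adjust to your environment):
scp ./xxx_nebula_es.tar user@target-server:/opt/images/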
Next, load the saved image tarball on the target server (here another machine stands in for the server):
(base) ➜ Documents docker load -i xxx_nebula_es.tar
174f56854903: Loading layer [==================================================>] 211.7MB/211.7MB
fff8cc8b80b2: Loading layer [==================================================>] 249.3MB/249.3MB
cf0ef10a32ea: Loading layer [==================================================>] 65.41MB/65.41MB
2119c420d03d: Loading layer [==================================================>] 6.687GB/6.687GB
4e15763e63e9: Loading layer [==================================================>] 4.249GB/4.249GB
1d7f86007746: Loading layer [==================================================>] 12.49GB/12.49GB
27e6368f9369: Loading layer [==================================================>] 7.737GB/7.737GB
Loaded image: xxx_nebula_es:1.0.0.2
(base) ➜ Documents docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
xxx_nebula_es 1.0.0.2 3a10a2c46591 About an hour ago 31.7GB
docker-test dev 249cb83092ff 2 months ago 229MB
docker-test prod 249cb83092ff 2 months ago 229MB
nginx latest 5e0fa356e6f4 3 months ago 197MB
weygu/nebulagraph-jupyter v0 621540960f3a 7 months ago 1.74GB
weygu/nebulagraph-gephi-exchange v0.1 46adca927168 9 months ago 643MB
weygu/nebulagraph-dd-ext 0.4.21 019e494cfd0f 10 months ago 9.17MB
vesoft/nebula-metad v3.8.0 ed1af2e7d732 10 months ago 382MB
vesoft/nebula-storaged v3.8.0 5dc54c63c7ab 10 months ago 383MB
vesoft/nebula-graphd v3.8.0 7738d42b9445 10 months ago 377MB
vesoft/nebula-graph-studio latest ef5c7a587b3f 10 months ago 85.2MB
vesoft/nebula-graph-studio v3 ef5c7a587b3f 10 months ago 85.2MB
harbor1.vm.example.lan/dockerio/vesoft/nebula-graph-studio v3.10.0 52eb2ad53a4c 10 months ago 85.8MB
weygu/nebula-webconsole latest 457e03b2bf04 18 months ago 34.2MB
openjdk 8-jre-slim 252e7e45ba12 2 years ago 187MB
openjdk 8-jdk-alpine e4105db9d469 5 years ago 103MB
Run the image:
docker run -itd -p 9200:9200 -p 9300:9300 -p 59647:9669 --name xxx_nebula_es --privileged xxx_nebula_es:1.0.0.2 /bin/bash
Explanation of this command:
- docker run: creates and starts a new container.
- -itd:
  - -i (interactive): keep the container's STDIN open even when not attached.
  - -t (tty): allocate a pseudo-terminal for interactive use.
  - -d (detached): run the container in the background instead of attaching to the terminal.
- -p 9200:9200 -p 9300:9300 -p 59647:9669: port mappings from host to container:
  - 9200:9200: host port 9200 to container port 9200 (Elasticsearch HTTP).
  - 9300:9300: host port 9300 to container port 9300 (Elasticsearch cluster transport).
  - 59647:9669: host port 59647 to container port 9669 (NebulaGraph graphd).
- --name xxx_nebula_es: names the container xxx_nebula_es for easier management.
- --privileged: runs the container in privileged mode, granting it access to all host devices; used by workloads that need elevated privileges (e.g. Docker-in-Docker or host hardware access).
- xxx_nebula_es:1.0.0.2: the image name (xxx_nebula_es) and tag (1.0.0.2).
- /bin/bash: the command executed inside the container, typically to keep it running or to provide an interactive shell.
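After starting, a quick way to confirm the container is actually up (standard docker ps filter):
docker ps --filter "name=xxx_nebula_es"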
Note: the NebulaGraph image carries all of the NebulaGraph data. If you are replacing an existing deployment, first stop and remove the old NebulaGraph container on the server; the old image can be removed as well.
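A minimal cleanup sketch for a previous deployment (the container name and old image tag here are hypothetical placeholders):
docker stop xxx_nebula_es && docker rm xxx_nebula_es
docker rmi xxx_nebula_es:1.0.0.1    # old image tag assumed; remove only if no longer needed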
2. Setting up the ES cluster
Configure a cluster of three ES instances on a single host; the yml configuration for each of the three nodes follows.
Enter the NebulaGraph container:
docker exec -it xxx_nebula_es /bin/bash
After unpacking ES, copy the directory to create the other two instances:
cp -r elasticsearch-7.15.0 elasticsearch-7.15.0-9201
cp -r elasticsearch-7.15.0 elasticsearch-7.15.0-9202
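After copying, you should have three sibling directories (paths as used by the configs below):
ls -d /usr/local/es/elasticsearch-7.15.0*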
Node 1
# cluster name
cluster.name: my-application
# data path
path.data: /usr/local/es/elasticsearch-7.15.0/data
# log path
path.logs: /usr/local/es/elasticsearch-7.15.0/logs
node.name: node-1
# accept connections from any host
network.host: 0.0.0.0
# port
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300","127.0.0.1:9301","127.0.0.1:9302"]
discovery.zen.minimum_master_nodes: 2
node.master: true
node.data: true
cluster.initial_master_nodes: ["node-1", "node-2","node-3"]
#discovery.type: single-node
#discovery.seed_hosts: ["172.17.0.20"]
#Disable GeoIP data download
ingest.geoip.downloader.enabled: false
xpack.security.enabled: false
#cluster.max_shards_per_node: 2000
http.cors.enabled: true
http.cors.allow-origin: "*"
node.max_local_storage_nodes: 3
Node 2
cluster.name: my-application
path.data: /usr/local/es/elasticsearch-7.15.0-9201/data
path.logs: /usr/local/es/elasticsearch-7.15.0-9201/logs
node.name: node-2
network.host: 0.0.0.0
http.port: 9201
transport.tcp.port: 9301
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300","127.0.0.1:9301","127.0.0.1:9302"]
# discovery.zen.ping.unicast.hosts: ["172.17.0.8:9300","172.17.0.8:9301","172.17.0.8:9302"]
discovery.zen.minimum_master_nodes: 2
node.master: false
node.data: true
cluster.initial_master_nodes: ["node-1", "node-2","node-3"]
#discovery.type: single-node
#discovery.seed_hosts: ["172.17.0.20"]
#Disable GeoIP data download
ingest.geoip.downloader.enabled: false
xpack.security.enabled: false
#cluster.max_shards_per_node: 2000
http.cors.enabled: true
http.cors.allow-origin: "*"
node.max_local_storage_nodes: 3
Node 3
cluster.name: my-application
path.data: /usr/local/es/elasticsearch-7.15.0-9202/data
path.logs: /usr/local/es/elasticsearch-7.15.0-9202/logs
node.name: node-3
network.host: 0.0.0.0
http.port: 9202
transport.tcp.port: 9302
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300","127.0.0.1:9301","127.0.0.1:9302"]
# discovery.zen.ping.unicast.hosts: ["172.17.0.8:9300","172.17.0.8:9301","172.17.0.8:9302"]
discovery.zen.minimum_master_nodes: 2
node.master: false
node.data: true
cluster.initial_master_nodes: ["node-1", "node-2","node-3"]
#discovery.type: single-node
#discovery.seed_hosts: ["172.17.0.20"]
#Disable GeoIP data download
ingest.geoip.downloader.enabled: false
xpack.security.enabled: false
#cluster.max_shards_per_node: 2000
http.cors.enabled: true
http.cors.allow-origin: "*"
node.max_local_storage_nodes: 3
ES must be started by a non-root user, so remember to fix file ownership and permissions while you are still root; otherwise ES will not start.
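If the container does not already have an es user (an assumption; the committed image may have carried one over), create it first:
useradd es    # skip if the user already exists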
rm -rf /usr/local/es/elasticsearch-7.15.0-9201/data/*
rm -rf /usr/local/es/elasticsearch-7.15.0-9202/data/*
# instance on port 9201
chown -R es:es /usr/local/es/elasticsearch-7.15.0-9201/logs
chmod -R 755 /usr/local/es/elasticsearch-7.15.0-9201/logs
chmod 644 /usr/local/es/elasticsearch-7.15.0-9201/config/elasticsearch.yml
chown es:es /usr/local/es/elasticsearch-7.15.0-9201/config/elasticsearch.yml
chown es:es /usr/local/es/elasticsearch-7.15.0-9201/config/elasticsearch.keystore
chown -R es:es /usr/local/es/elasticsearch-7.15.0-9201/config/jvm.options.d
chmod -R 755 /usr/local/es/elasticsearch-7.15.0-9201/config/jvm.options.d
chmod 644 /usr/local/es/elasticsearch-7.15.0-9201/config/jvm.options
chmod 644 /usr/local/es/elasticsearch-7.15.0-9201/config/log4j2.properties
chown -R es:es /usr/local/es/elasticsearch-7.15.0-9201/config/
chown -R es:es /usr/local/es/elasticsearch-7.15.0-9201/data
chmod -R 755 /usr/local/es/elasticsearch-7.15.0-9201/data
# instance on port 9202
chown -R es:es /usr/local/es/elasticsearch-7.15.0-9202/logs
chmod -R 755 /usr/local/es/elasticsearch-7.15.0-9202/logs
chmod 644 /usr/local/es/elasticsearch-7.15.0-9202/config/elasticsearch.yml
chown es:es /usr/local/es/elasticsearch-7.15.0-9202/config/elasticsearch.yml
chown es:es /usr/local/es/elasticsearch-7.15.0-9202/config/elasticsearch.keystore
chown -R es:es /usr/local/es/elasticsearch-7.15.0-9202/config/jvm.options.d
chmod -R 755 /usr/local/es/elasticsearch-7.15.0-9202/config/jvm.options.d
chmod 644 /usr/local/es/elasticsearch-7.15.0-9202/config/jvm.options
chmod 644 /usr/local/es/elasticsearch-7.15.0-9202/config/log4j2.properties
chown -R es:es /usr/local/es/elasticsearch-7.15.0-9202/config/
chown -R es:es /usr/local/es/elasticsearch-7.15.0-9202/data
chmod -R 755 /usr/local/es/elasticsearch-7.15.0-9202/data
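Besides permissions, another common blocker for ES startup in containers is a low vm.max_map_count: ES requires at least 262144, and the setting must be applied on the Docker host, not inside the container:
# run on the Docker host
sysctl -w vm.max_map_count=262144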
Finally, start the ES instances in order:
su - es
/usr/local/es/elasticsearch-7.15.0/bin/elasticsearch &
/usr/local/es/elasticsearch-7.15.0-9201/bin/elasticsearch &
/usr/local/es/elasticsearch-7.15.0-9202/bin/elasticsearch &
Check the result:
[abc@localhost ~]$ curl -X GET "http://localhost:9200/_cat/nodes?v"
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.17.0.8 30 94 7 8.77 6.43 5.66 cdfhilmrstw * node-1
172.17.0.8 25 94 7 8.77 6.43 5.66 cdfhilrstw - node-2
172.17.0.8 29 94 7 8.77 6.43 5.66 cdfhilrstw - node-3
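Overall cluster health can also be checked with the standard health endpoint:
curl -X GET "http://localhost:9200/_cluster/health?pretty"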
Other useful commands
List all ES indexes:
curl -X GET "localhost:9200/_cat/indices?v"
List fulltext indexes and registered text search clients (in NebulaGraph):
SHOW FULLTEXT INDEXES;
SHOW TEXT SEARCH CLIENTS;
Inspect a specific index:
curl -X GET "http://localhost:9200/fulltext_index_1"
Create a fulltext index in the graph database:
CREATE FULLTEXT TAG INDEX `fulltext_index_1` ON `人物`(`名字`) ANALYZER="standard"
Rebuild the fulltext index from inside the graph database:
REBUILD FULLTEXT INDEX;
Delete an index on the ES side:
curl -X DELETE "localhost:9200/fulltext_index_2"
Drop a fulltext index in the graph database:
DROP FULLTEXT INDEX fulltext_index_1;
3. Notes
3.1 Startup order of the graph database
1. Start the graph database:
cd /opt/scripts/
./nebula.service start all
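The same script can report the status of all three services to confirm they came up:
./nebula.service status all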
2. Start the ES cluster.
3. Attach ES to nebula-storaged (start the storage listener); this is what enables building fulltext indexes for fuzzy search.
PS: only do this after both NebulaGraph and ES have finished starting.
exit    # leave the es account and switch back to root
cd /opt
./bin/nebula-storaged --flagfile etc/nebula-storaged-listener.conf
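A generic way to confirm the listener process started (nothing NebulaGraph-specific, just a process check):
ps -ef | grep nebula-storaged
The listener state itself can then be verified with the SHOW LISTENER query described in section 3.2.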
3.2 Fuzzy query failures
1. Check the logs:
cd /opt/logs
vim nebula-storaged.ERROR
vim nebula-graphd.ERROR
vim nebula-metad.ERROR
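For the first common problem below ("access elasticsearch failed"), grepping the storaged error log narrows things down quickly (generic shell, log path from above):
grep -i elasticsearch /opt/logs/nebula-storaged.ERROR | tail -n 20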
2. Common problems:
1. access elasticsearch failed
2. shards not enough
3. the fulltext index was created successfully, but fuzzy queries still fail
Troubleshooting steps:
1. In nebula-studio, run:
SHOW LISTENER;
PartId  Type           Host              Host Status
1       ELASTICSEARCH  "127.0.0.1":9789  ONLINE
2       ELASTICSEARCH  "127.0.0.1":9789  ONLINE
If the Host Status column shows OFFLINE, the ES connection is broken.
2. If ES is healthy, check the fulltext indexes in nebula-studio:
SHOW FULLTEXT INDEXES;
3. If the fulltext index exists, rebuild it:
REBUILD FULLTEXT INDEX;
If ES is unhealthy, the usual causes are insufficient ES memory or too few available shards; fixes below.
Configure fewer replicas to reduce shard usage:
curl -X PUT "http://127.0.0.1:9200/_settings" -H 'Content-Type: application/json' -d'
{
"index": {
"number_of_replicas": 0
}
}'
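To see how many shards are currently allocated before and after the change (standard ES cat API):
curl -X GET "localhost:9200/_cat/shards?v"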
Clean up indexes (note: the wildcard form on the second line deletes every index, so use it with care):
curl -X DELETE "localhost:9200/index1,index2,index3"
curl -X DELETE "localhost:9200/*"