
[Data Warehouse] Deploying StarRocks with Docker

Deploying StarRocks with Docker

    • 1. Environment Preparation
      • Install docker and docker-compose
      • OS settings to disable and configure [CentOS Linux 7 (Core)]
    • 2. StarRocks v2.5 [shared-nothing, 3 FE, 3 BE]
      • BE node configuration
      • FE node configuration
      • Starting the services
      • Adding BE nodes
      • Adding FE nodes
    • 3. Monitoring (to be completed)
    • 4. VIP + Nginx + Keepalived (to be completed)
    • 5. StarRocks v3.2 [shared-data, 1 FE, 1 CN]

1. Environment Preparation

Reference: https://docs.starrocks.io/zh/docs/2.5/deployment/environment_configurations/

Install docker and docker-compose

docker

# Remove any old Docker versions first
yum remove docker \
              docker-common \
              docker-selinux \
              docker-engine

# Set up the repository
yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install Docker
sudo yum install -y docker-ce docker-ce-cli containerd.io

# Docker daemon configuration
cat > /etc/docker/daemon.json <<EOF
{ 
    "data-root": "/data/docker",
    "storage-driver": "overlay2",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "live-restore": true,
    "registry-mirrors": [
        "https://docker.rainbond.cc",
        "https://docker.m.daocloud.io",
        "https://noohub.ru",
        "https://huecker.io",
        "https://dockerhub.timeweb.cloud",
        "https://3md2h0z0.mirror.aliyuncs.com",
        "https://registry.docker-cn.com",
        "http://hub-mirror.c.163.com",
        "https://mirror.ccs.tencentyun.com",
        "https://docker.mirrors.ustc.edu.cn",
        "http://f1361db2.m.daocloud.io"
    ],
    "log-opts": {"max-size":"500m", "max-file":"3"},
    "log-driver": "json-file"
}
EOF
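
Docker only reads daemon.json when the daemon (re)starts, and a malformed file prevents it from starting at all, so a syntax check before restarting is cheap insurance. A minimal sketch (the helper name is mine, and it assumes python3 is available on the host):

```shell
# Validate daemon.json before restarting Docker; a JSON syntax error
# there would keep the daemon from coming back up.
# The path is parameterized only so the check can be exercised elsewhere;
# on the host it defaults to /etc/docker/daemon.json.
validate_daemon_json() {
  python3 -m json.tool "${1:-/etc/docker/daemon.json}" > /dev/null
}

# Typical usage on the host (requires root):
# validate_daemon_json && systemctl restart docker
# docker info | grep -E 'Docker Root Dir|Storage Driver'   # confirm data-root / overlay2
```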

# Start Docker and enable it on boot
systemctl start docker
systemctl enable docker

docker-compose

wget https://github.com/docker/compose/releases/download/v2.15.1/docker-compose-Linux-x86_64 -O /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version

OS settings to disable and configure [CentOS Linux 7 (Core)]

Disable the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service

Disable SELinux

setenforce 0
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
sed -i 's/SELINUXTYPE/#SELINUXTYPE/' /etc/selinux/config
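
Note that `setenforce 0` only changes the running mode; the `sed` edits are what survive a reboot. A quick way to confirm both took effect (the helper and its parameterized path are mine, for illustration):

```shell
# Check that SELinux is disabled in the persisted config.
# The config path is parameterized only so the check can be exercised
# against a copy; on the host it defaults to /etc/selinux/config.
selinux_disabled_in() {
  grep -q '^SELINUX=disabled' "${1:-/etc/selinux/config}"
}

# getenforce                      # runtime mode: should print Permissive
# selinux_disabled_in && echo ok  # persisted config: should print ok
```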

Memory settings

cat >> /etc/sysctl.conf << EOF
vm.overcommit_memory=1
EOF

sysctl -p

High-concurrency settings

cat >> /etc/sysctl.conf << EOF
vm.max_map_count = 262144
EOF

sysctl -p

echo 120000 > /proc/sys/kernel/threads-max
echo 200000 > /proc/sys/kernel/pid_max
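
The two `echo` commands above write directly to /proc and do not survive a reboot. A sketch for persisting them the same way as the earlier settings (the sysctl key names mirror the /proc paths; the target file is parameterized only for illustration):

```shell
# Persist threads-max and pid_max across reboots;
# /proc/sys/kernel/<name> corresponds to sysctl key kernel.<name>.
persist_kernel_limits() {
  cat >> "${1:-/etc/sysctl.conf}" << 'EOF'
kernel.threads-max = 120000
kernel.pid_max = 200000
EOF
}

# persist_kernel_limits && sysctl -p
```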

2. StarRocks v2.5 [shared-nothing, 3 FE, 3 BE]

Reference: https://docs.starrocks.io/zh/docs/2.5/deployment/prepare_deployment_files/

Prepare the configuration files on each node and pull the matching images first. (The last v2.5 release is 2.5.22, but it has no official Docker image, so 2.5.21 is used here.)

  • BE: docker pull starrocks/be-ubuntu:2.5.21
  • FE: docker pull starrocks/fe-ubuntu:2.5.21

BE node configuration

Directories

mkdir -pv /data/starrocks/be/conf
cd /data/starrocks/be

./docker-compose.yaml

cat > ./docker-compose.yaml  <<EOF
version: '3.7'
services:
  be:
    image: starrocks/be-ubuntu:2.5.21
    container_name: be
    restart: always
    network_mode: host
    command:
      /opt/starrocks/be/bin/start_be.sh
    volumes:
      - ./conf/be.conf:/opt/starrocks/be/conf/be.conf
      - ./storage:/opt/starrocks/be/storage
      - ./log:/opt/starrocks/be/log
      - /etc/localtime:/etc/localtime
    healthcheck:
      test: ["CMD-SHELL","curl -s -w '%{http_code}' -o /dev/null http://127.0.0.1:8040/api/health || bash -c 'kill -s 15 -1 && (sleep 10; kill -s 9 -1)'"]
      interval: 30s
      timeout: 20s
      retries: 3
      start_period: 3m
    logging:
      driver: "json-file"
      options:
        tag: "{{.Name}}"
        max-size: "10m"
EOF

./conf/be.conf

  • Change priority_networks to the host's subnet

For the remaining options, see: https://docs.starrocks.io/zh/docs/2.5/administration/Configuration/#%E9%85%8D%E7%BD%AE-be-%E9%9D%99%E6%80%81%E5%8F%82%E6%95%B0
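
priority_networks must match a subnet the host actually has an address in. A small parsing sketch for listing the candidates (the helper is mine; verify the chosen subnet manually before writing it into be.conf):

```shell
# Print the CIDR of every non-loopback IPv4 address, reading lines in
# the format `ip -o -f inet addr show` emits (field 4 is address/prefix).
list_host_cidrs() {
  awk '$4 !~ /^127\./ { print $4 }'
}

# ip -o -f inet addr show | list_host_cidrs
# e.g. 10.101.1.5/24  ->  set priority_networks = 10.101.1.0/24
```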

cat > ./conf/be.conf <<'EOF'
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

# INFO, WARNING, ERROR, FATAL
sys_log_level = INFO
#JAVA_HOME=/usr/local/jdk
# ports for admin, web, heartbeat service
be_port = 9060
webserver_port = 8040
heartbeat_service_port = 9050
brpc_port = 8060

# Choose one if there are more than one ip except loopback address.
# Note that there should at most one ip match this list.
# If no ip match this rule, will choose one randomly.
# use CIDR format, e.g. 10.10.10.0/24
# Default value is empty.
# Specify the BE IP in CIDR form (e.g. 10.10.10.0/24) when the machine has
# multiple IPs, to select the preferred network.
priority_networks = 10.101.1.0/24

# data root path, separate by ';'
# you can specify the storage medium of each root path, HDD or SSD, seperate by ','
# eg:
# storage_root_path = /data1,medium:HDD;/data2,medium:SSD;/data3
# /data1, HDD;
# /data2, SSD;
# /data3, HDD(default);
#
# Default value is ${STARROCKS_HOME}/storage, you should create it by hand.
# storage_root_path = ${STARROCKS_HOME}/storage

# Advanced configurations
# sys_log_dir = ${STARROCKS_HOME}/log
# sys_log_roll_mode = SIZE-MB-1024
# sys_log_roll_num = 10
# sys_log_verbose_modules = *
# log_buffer_level = -1

default_rowset_type = beta
cumulative_compaction_num_threads_per_disk = 4
base_compaction_num_threads_per_disk = 2
cumulative_compaction_check_interval_seconds = 2
routine_load_thread_pool_size = 40
cumulative_compaction_budgeted_bytes=314572800
brpc_max_body_size = 8589934592
trash_file_expire_time_sec=600
mem_limit = 90%
pipeline_max_num_drivers_per_exec_thread=102400
disable_storage_page_cache = true
#disable_column_pool=true
#chunk_reserved_bytes_limit=100000000
EOF

./conf/log4j.properties

cat > ./conf/log4j.properties <<EOF
# log configuration for jars called via JNI in BE
# Because there are almost no other logs except jdbc bridge now, so it's enough to only output to stdout.
# If necessary, we can add special log files later
log4j.rootLogger=ERROR, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n
EOF

./conf/hadoop_env.sh

cat > ./conf/hadoop_env.sh <<'EOF'
# This file is licensed under the Elastic License 2.0. Copyright 2021-present, StarRocks Inc.

export HADOOP_CLASSPATH=${STARROCKS_HOME}/lib/hadoop/common/*:${STARROCKS_HOME}/lib/hadoop/common/lib/*:${STARROCKS_HOME}/lib/hadoop/hdfs/*:${STARROCKS_HOME}/lib/hadoop/hdfs/lib/*
if [ -z "${HADOOP_USER_NAME}" ]
then
    if [ -z "${USER}" ]
    then
        export HADOOP_USER_NAME=$(id -u -n)
    else
        export HADOOP_USER_NAME=${USER}
    fi
fi

# the purpose is to use local hadoop configuration first.
# under HADOOP_CONF_DIR(eg. /etc/ecm/hadoop-conf), there are hadoop/hdfs/hbase conf files.
# and by putting HADOOP_CONF_DIR at front of HADOOP_CLASSPATH, local hadoop conf file will be searched & used first.

# local hadoop configuration is usually well-tailored and optimized, we'd better to leverage that.
# for example, if local hdfs has enabled short-circuit read, then we can use short-circuit read and save io time

if [ ${HADOOP_CONF_DIR}"X" != "X" ]; then
    export HADOOP_CLASSPATH=${HADOOP_CONF_DIR}:${HADOOP_CLASSPATH}
fi
EOF

./conf/core-site.xml

cat > ./conf/core-site.xml <<EOF
<configuration>
  <property>
      <name>fs.s3.impl</name>
      <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
   </property>
</configuration>
EOF

FE node configuration

Directories

mkdir -pv /data/starrocks/fe/conf
cd /data/starrocks/fe

./docker-compose.yaml

cat > ./docker-compose.yaml <<EOF
version: '3.7'
services:
  fe:
    image: starrocks/fe-ubuntu:2.5.21
    container_name: fe
    restart: always
    network_mode: host
    command:
      /opt/starrocks/fe/bin/start_fe.sh
    volumes:
      - ./conf/fe.conf:/opt/starrocks/fe/conf/fe.conf
      - ./meta:/opt/starrocks/fe/meta
      - ./log:/opt/starrocks/fe/log
      - /etc/localtime:/etc/localtime
    healthcheck:
      test: ["CMD-SHELL","curl -s -w '%{http_code}' -o /dev/null http://127.0.0.1:8030/api/bootstrap || bash -c 'kill -s 15 -1 && (sleep 10; kill -s 9 -1)'"]
      interval: 30s
      timeout: 20s
      retries: 3
      start_period: 3m
    logging:
      driver: "json-file"
      options:
        tag: "{{.Name}}"
        max-size: "10m"
EOF

./conf/fe.conf

  • priority_networks: change to the host's subnet
  • JAVA_OPTS_FOR_JDK_9: size the JVM heap to the host; the value below assumes a host with 32 GB of total memory

For the remaining options, see: https://docs.starrocks.io/zh/docs/2.5/administration/Configuration/#%E9%85%8D%E7%BD%AE-fe-%E9%9D%99%E6%80%81%E5%8F%82%E6%95%B0
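
The -Xmx8g below is roughly a quarter of the 32 GB host mentioned above. A sketch for deriving a comparable value on a different machine (the 1/4 ratio is this article's implied choice, not an official rule; in practice, size the FE heap to your metadata volume):

```shell
# Suggest an FE heap size in GB as ~1/4 of total host memory,
# mirroring the 8g-on-32GB setting used in fe.conf below.
suggest_fe_heap_gb() {
  total_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo 2>/dev/null)
  echo $(( ${total_kb:-0} / 1024 / 1024 / 4 ))
}

# Optional, after generating fe.conf:
# sed -i "s/-Xmx8g/-Xmx$(suggest_fe_heap_gb)g/" ./conf/fe.conf
```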

cat > ./conf/fe.conf <<'EOF'
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

#####################################################################
## The uppercase properties are read and exported by bin/start_fe.sh.
## To see all Frontend configurations,
## see fe/src/com/starrocks/common/Config.java

# the output dir of stderr and stdout
LOG_DIR = ${STARROCKS_HOME}/log
#JAVA_HOME=/usr/local/jdk
DATE = "$(date +%Y%m%d-%H%M%S)"
JAVA_OPTS="-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:+UseMembar -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xloggc:$STARROCKS_HOME/log/fe.gc.log.$DATE"

# For jdk 9+, this JAVA_OPTS will be used as default JVM options
#JAVA_OPTS_FOR_JDK_9="-Dlog4j2.formatMsgNoLookups=true -Duser.timezone=GMT+8 -Xmx8g -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xlog:gc*:$STARROCKS_HOME/log/fe.gc.log.$DATE:time"
JAVA_OPTS_FOR_JDK_9="-Dlog4j2.formatMsgNoLookups=true -Duser.timezone=GMT+8 -Xmx8g -XX:+UseG1GC -Xlog:gc*:$STARROCKS_HOME/log/fe.gc.log.$DATE:time"

##
## the lowercase properties are read by main program.
##

# INFO, WARN, ERROR, FATAL
sys_log_level = INFO

# store metadata, create it if it is not exist.
# Default value is ${STARROCKS_HOME}/meta
# meta_dir = ${STARROCKS_HOME}/meta

http_port = 8030
rpc_port = 9020
query_port = 9030
edit_log_port = 9010
mysql_service_nio_enabled = true

# Choose one if there are more than one ip except loopback address.
# Note that there should at most one ip match this list.
# If no ip match this rule, will choose one randomly.
# use CIDR format, e.g. 10.10.10.0/24
# Default value is empty.
# priority_networks = 10.10.10.0/24;192.168.0.0/16
# Declare a selection policy for servers that have multiple IP addresses.
# Note that at most one IP address should match this list. It is a
# semicolon-separated list in CIDR notation, e.g. 10.10.10.0/24.
# If no IP matches, one is chosen at random.
priority_networks = 10.101.1.0/24

# Advanced configurations
# log_roll_size_mb = 1024
# sys_log_dir = ${STARROCKS_HOME}/log
# sys_log_roll_num = 10
# sys_log_verbose_modules =
# audit_log_dir = ${STARROCKS_HOME}/log
# audit_log_modules = slow_query, query
# audit_log_roll_num = 10
# meta_delay_toleration_second = 10
# qe_max_connection = 1024
# max_conn_per_user = 100
# qe_query_timeout_second = 300
# qe_slow_log_ms = 5000
max_create_table_timeout_second = 120
report_queue_size = 2048
max_routine_load_task_num_per_be = 40
enable_collect_query_detail_info = true
enable_udf = true
EOF

./conf/core-site.xml

cat > ./conf/core-site.xml <<EOF
<configuration>
  <property>
      <name>fs.s3.impl</name>
      <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
   </property>
</configuration>
EOF

./conf/hadoop_env.sh

cat > ./conf/hadoop_env.sh <<'EOF'
# This file is licensed under the Elastic License 2.0. Copyright 2021-present, StarRocks Inc.

export HADOOP_CLASSPATH=${STARROCKS_HOME}/lib/hadoop/common/*:${STARROCKS_HOME}/lib/hadoop/common/lib/*:${STARROCKS_HOME}/lib/hadoop/hdfs/*:${STARROCKS_HOME}/lib/hadoop/hdfs/lib/*
if [ -z "${HADOOP_USER_NAME}" ]
then
    if [ -z "${USER}" ]
    then
        export HADOOP_USER_NAME=$(id -u -n)
    else
        export HADOOP_USER_NAME=${USER}
    fi
fi

# the purpose is to use local hadoop configuration first.
# under HADOOP_CONF_DIR(eg. /etc/ecm/hadoop-conf), there are hadoop/hdfs/hbase conf files.
# and by putting HADOOP_CONF_DIR at front of HADOOP_CLASSPATH, local hadoop conf file will be searched & used first.

# local hadoop configuration is usually well-tailored and optimized, we'd better to leverage that.
# for example, if local hdfs has enabled short-circuit read, then we can use short-circuit read and save io time

if [ ${HADOOP_CONF_DIR}"X" != "X" ]; then
    export HADOOP_CLASSPATH=${HADOOP_CONF_DIR}:${HADOOP_CLASSPATH}
fi
EOF

Starting the services

Reference: https://docs.starrocks.io/zh/docs/2.5/deployment/deploy_manually/

Start the Leader FE node

# Log in to any one of the FE servers -- one is enough
cd /data/starrocks/fe
docker-compose up -d
# Check whether the FE node started successfully
docker ps
cat ./log/fe.log | grep thrift

Start the BE nodes

Once at least three BE nodes are deployed and added to a StarRocks cluster, they automatically form a highly available BE cluster.

# Start all three BE nodes
cd /data/starrocks/be
docker-compose up -d
# Check whether the BE nodes started successfully
docker ps
cat ./log/be.INFO | grep heartbeat

Adding BE nodes

Log in to the Leader FE

docker exec -it fe mysql -h 127.0.0.1 -P9030 -uroot

Check node status

SHOW PROC '/frontends'\G
  • If the Alive field is true, the FE node started normally and has joined the cluster.
  • If the Role field is FOLLOWER, the FE node is eligible to be elected Leader FE.
  • If the Role field is LEADER, the FE node is the current Leader FE.

Add the BE nodes to the cluster

-- Replace <be_address> with the BE node's IP address (priority_networks) or FQDN,
-- and <heartbeat_service_port> (default: 9050) with the heartbeat_service_port set in be.conf.
ALTER SYSTEM ADD BACKEND "<be_address>:<heartbeat_service_port>", "<be2_address>:<heartbeat_service_port>", "<be3_address>:<heartbeat_service_port>";
-- Check BE node status
SHOW PROC '/backends'\G
  • If the Alive field is true, the BE node started normally and has joined the cluster

Adding FE nodes

Log in to the other two servers (not the Leader FE)

When adding a new Follower FE node to the cluster, you must assign it a helper node (essentially an existing Follower FE node) the first time it starts, so that it can synchronize all FE metadata.

cd /data/starrocks/fe
# Don't start with docker-compose yet
docker run --rm  \
--network host  \
--privileged=true -it \
-v /data/starrocks/fe/log:/opt/starrocks/fe/log \
-v /data/starrocks/fe/meta:/opt/starrocks/fe/meta \
-v /data/starrocks/fe/conf:/opt/starrocks/fe/conf \
starrocks/fe-ubuntu:2.5.21  bash

# Now inside the container
# Use the Leader FE's IP here (see SHOW PROC '/frontends'\G)
/opt/starrocks/fe/bin/start_fe.sh --helper 10.101.1.1:9010 --daemon

# Check the FE log to verify the node started successfully. Run the ADD FOLLOWER step below first, then come back and check
cat fe/log/fe.log | grep thrift

On the Leader FE node

# [Leader FE] inside the cluster, add the node with this IP
ALTER SYSTEM ADD FOLLOWER "10.101.1.2:9010";
# [Leader FE] to remove a node from the cluster:
# ALTER SYSTEM DROP follower "10.101.1.2:9010";

# [Leader FE] check node status: the Join and Alive fields should be true
SHOW PROC '/frontends'\G

On the Follower FE nodes

# On the follower node, exit the container and start with docker-compose; do the same on the other node to join it to the cluster
docker-compose up -d

3. Monitoring (to be completed)

4. VIP + Nginx + Keepalived (to be completed)


keepalived.service

systemctl status keepalived.service

/etc/keepalived/keepalived.conf

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
     bnd@bndxqc.com.cn
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL_1
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 5
    weight   -15
}
vrrp_instance VI_NGINX {
    state MASTER
    interface ens192
    virtual_router_id 157
    mcast_src_ip 10.101.1.1
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass aaa123456
    }
    virtual_ipaddress {
        10.101.1.7/24
    }
    track_script {
        chk_nginx
    }
}
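
The chk_nginx block above runs /etc/keepalived/check_nginx.sh, which is not shown in the article. A minimal sketch in the same heredoc style (the restart-then-report policy is an assumption; keepalived subtracts `weight` (15) from this node's priority whenever the script exits non-zero, which is what moves the VIP):

```shell
# Generate a minimal nginx health-check script for keepalived.
cat > ./check_nginx.sh <<'EOF'
#!/bin/bash
# Executed by keepalived every 5s (see vrrp_script chk_nginx).
# Exit 0 = nginx healthy; a non-zero exit lowers this node's priority.
if pgrep -x nginx > /dev/null; then
    exit 0
fi
# One restart attempt before reporting failure (assumed policy).
systemctl restart nginx > /dev/null 2>&1
sleep 2
pgrep -x nginx > /dev/null
EOF
chmod +x ./check_nginx.sh

# then install it on each keepalived host:
# cp ./check_nginx.sh /etc/keepalived/check_nginx.sh
```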

5. StarRocks v3.2 [shared-data, 1 FE, 1 CN]

Shared-data StarRocks 3.2

mkdir -p /data/starrocks
cd /data/starrocks
cat > docker-compose.yml <<- 'EOF'
version: "3"
services:
  minio:
    container_name: starrocks-minio
    image: minio/minio:latest
    environment:
      MINIO_ROOT_USER: miniouser
      MINIO_ROOT_PASSWORD: miniopassword
    volumes:
      - ./minio/data:/minio_data
    ports:
      - "9001:9001"
      - "9000:9000"
    entrypoint: sh
    command: '-c ''mkdir -p /minio_data/starrocks && minio server /minio_data --console-address ":9001"'''
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      network:
        ipv4_address: 10.5.0.6

  minio_mc:
    # This service is short lived, it does this:
    # - starts up
    # - checks to see if the MinIO service `minio` is ready
    # - creates a MinIO Access Key that the StarRocks services will use
    # - exits
    image: minio/mc:latest
    entrypoint:
      - sh
      - -c
      - |
        until mc ls minio > /dev/null 2>&1; do
          sleep 0.5
        done

        # set an alias for the MinIO server
        mc alias set myminio http://minio:9000 miniouser miniopassword

        # create a service account
        mc admin user svcacct add --access-key AAAAAAAAAAAAAAAAAAAA \
          --secret-key BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB \
          myminio \
          miniouser

        # update the service account: set expiry to one year from now
        # (adjust to actual needs)
        EXPIRE_DATE=$(date -Iseconds -d '+1 year')
        mc admin user svcacct update myminio --access-key=AAAAAAAAAAAAAAAAAAAA --expire="${EXPIRE_DATE}Z"
    depends_on:
      - minio
    networks:
      network:
        ipv4_address: 10.5.0.7
      
  starrocks-fe:
    image: starrocks/fe-ubuntu:3.1-latest
    hostname: starrocks-fe
    container_name: starrocks-fe
    user: root
    volumes: 
      - ./starrocks/fe/meta:/opt/starrocks/fe/meta
      - ./starrocks/fe/log:/opt/starrocks/fe/log
    command: >
      bash -c "echo run_mode=shared_data >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_path=starrocks >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_endpoint=minio:9000 >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_access_key=AAAAAAAAAAAAAAAAAAAA >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_secret_key=BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_use_instance_profile=false >> /opt/starrocks/fe/conf/fe.conf &&
      echo cloud_native_storage_type=S3 >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_use_aws_sdk_default_behavior=true >> /opt/starrocks/fe/conf/fe.conf &&
      bash /opt/starrocks/fe/bin/start_fe.sh"
    ports:
      - 8030:8030
      - 9020:9020
      - 9030:9030
    healthcheck:
      test: 'mysql -uroot -h10.5.0.2 -P 9030 -e "show frontends\G" |grep "Alive: true"'
      interval: 10s
      timeout: 5s
      retries: 3
    depends_on:
      - minio
    networks:
      network:
        ipv4_address: 10.5.0.2

  starrocks-cn:
    image: starrocks/cn-ubuntu:3.1-latest
    command:
      - /bin/bash
      - -c
      - |
        sleep 15s;
        mysql --connect-timeout 2 -h starrocks-fe -P9030 -uroot -e "ALTER SYSTEM ADD COMPUTE NODE \"starrocks-cn:9050\";"
        /opt/starrocks/cn/bin/start_cn.sh
    ports:
      - 8040:8040
    hostname: starrocks-cn
    container_name: starrocks-cn
    user: root
    volumes:
      - ./starrocks/cn/storage:/opt/starrocks/cn/storage
      - ./starrocks/cn/log:/opt/starrocks/cn/log
    depends_on:
      - starrocks-fe
      - minio
    healthcheck:
      test: 'mysql -uroot -h10.5.0.2 -P 9030 -e "SHOW COMPUTE NODES\G" |grep "Alive: true"'
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      network:
        ipv4_address: 10.5.0.3
networks:
  network:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
          gateway: 10.5.0.1
EOF


docker-compose up -d
docker ps
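
The FE takes a little while to bootstrap after `docker-compose up -d`, so SQL fired immediately may fail. A small wait-then-check sketch (the helper relies on bash's /dev/tcp; the SQL line just repeats the compose file's own healthcheck query):

```shell
# Block until a TCP port accepts connections, up to a timeout in seconds.
wait_for_port() {
  host=$1; port=$2; tries=${3:-30}
  while [ "$tries" -gt 0 ]; do
    # bash opens a TCP connection via the /dev/tcp pseudo-device
    if (exec 3<> "/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# wait_for_port 127.0.0.1 9030 60 && \
#   docker exec starrocks-fe mysql -h127.0.0.1 -P9030 -uroot \
#     -e 'SHOW COMPUTE NODES\G'
```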
