
Installing a 5-Node ClickHouse Cluster

This architecture uses five servers. Two of them host replicas of the data; the other three coordinate the replication of that data. In this example, we will create a database and a table that are replicated across the two data nodes using the ReplicatedMergeTree table engine.

Official documentation: https://clickhouse.com/docs/en/architecture/replication

Deployment environment

(Architecture diagram omitted: one shard with two replicas on clickhouse-01/02, coordinated by three ClickHouse Keeper nodes)

Node inventory:

Hostname               IP             OS            Spec               Description
clickhouse-01          192.168.72.51  Ubuntu 22.04  2C/4G/100G disk    ClickHouse server, client
clickhouse-02          192.168.72.52  Ubuntu 22.04  2C/4G/100G disk    ClickHouse server, client
clickhouse-keeper-01   192.168.72.53  Ubuntu 22.04  2C/4G/100G disk    ClickHouse Keeper
clickhouse-keeper-02   192.168.72.54  Ubuntu 22.04  2C/4G/100G disk    ClickHouse Keeper
clickhouse-keeper-03   192.168.72.55  Ubuntu 22.04  2C/4G/100G disk    ClickHouse Keeper

Note:

In production we strongly recommend dedicated hosts for ClickHouse Keeper. In test environments it is acceptable to run ClickHouse Server and ClickHouse Keeper combined on the same server; another basic example, "Scaling out", uses that approach. This example presents the recommended approach of separating Keeper from ClickHouse Server. The Keeper servers can be smaller: 4GB of RAM is generally enough for each Keeper server until your ClickHouse servers grow very large.

Set the hostname on each node (run the matching command on the corresponding node)

hostnamectl set-hostname clickhouse-01
hostnamectl set-hostname clickhouse-02
hostnamectl set-hostname clickhouse-keeper-01
hostnamectl set-hostname clickhouse-keeper-02
hostnamectl set-hostname clickhouse-keeper-03

Edit the /etc/hosts file on all nodes

cat >/etc/hosts<<EOF
192.168.72.51 clickhouse-01 clickhouse-01.example.com
192.168.72.52 clickhouse-02 clickhouse-02.example.com
192.168.72.53 clickhouse-keeper-01 clickhouse-keeper-01.example.com
192.168.72.54 clickhouse-keeper-02 clickhouse-keeper-02.example.com
192.168.72.55 clickhouse-keeper-03 clickhouse-keeper-03.example.com
EOF
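With /etc/hosts in place, every node should resolve every other node by name. A quick sanity check (a sketch; getent consults the normal NSS sources, which include /etc/hosts):

```shell
# Check that every host in the inventory resolves; prints one line per host.
for h in clickhouse-01 clickhouse-02 clickhouse-keeper-01 clickhouse-keeper-02 clickhouse-keeper-03; do
  if getent hosts "$h" > /dev/null; then
    echo "$h: OK"
  else
    echo "$h: NOT RESOLVABLE"
  fi
done
```

Run it on each node; any NOT RESOLVABLE line means the corresponding /etc/hosts entry is missing or mistyped.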

Install ClickHouse

Run on the clickhouse-01 and clickhouse-02 nodes

Install clickhouse-server and clickhouse-client on clickhouse-01 and clickhouse-02:

sudo apt-get install -y apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee \
    /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
sudo apt-get install -y clickhouse-server clickhouse-client

Run on the clickhouse-keeper-01 through -03 nodes

Install only clickhouse-keeper on the clickhouse-keeper-01 through -03 nodes:

sudo apt-get install -y apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee \
    /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
sudo apt-get install -y clickhouse-keeper

Create the clickhouse-keeper directories

mkdir -p /etc/clickhouse-keeper/config.d
mkdir -p /var/log/clickhouse-keeper
mkdir -p /var/lib/clickhouse-keeper/coordination/log
mkdir -p /var/lib/clickhouse-keeper/coordination/snapshots
mkdir -p /var/lib/clickhouse-keeper/cores
chown -R clickhouse:clickhouse /etc/clickhouse-keeper /var/log/clickhouse-keeper /var/lib/clickhouse-keeper

clickhouse-01 configuration

There are four configuration files for clickhouse-01 below. You can choose to combine them into a single file, but for clarity it may be simpler to look at them separately. As you read through the configuration files, you will see that most of the configuration is the same between clickhouse-01 and clickhouse-02; the differences are pointed out.

Network and logging configuration

These values can be customized as you wish. This example configuration gives you:

  • A debug log that is rotated three times at 1000M
  • The name displayed when you connect with clickhouse-client is cluster_1S_2R node 1
  • ClickHouse listening on ports 8123 (HTTP) and 9000 (native TCP) on all IPv4 addresses

/etc/clickhouse-server/config.d/network-and-logging.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/network-and-logging.xml
<clickhouse>
    <logger>
        <level>debug</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <display_name>cluster_1S_2R node 1</display_name>
    <listen_host>0.0.0.0</listen_host>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
</clickhouse>

Macro configuration

The shard and replica macros reduce the complexity of distributed DDL: the configured values are substituted into your DDL queries automatically, simplifying the DDL you write. The macros here specify each node's shard and replica identifiers.
In this 1-shard, 2-replica example, the replica macro is 01 on clickhouse-01 and 02 on clickhouse-02 (matching the files below). The shard macro is 01 on both clickhouse-01 and clickhouse-02, because there is only one shard.

/etc/clickhouse-server/config.d/macros.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/macros.xml
<clickhouse>
    <macros>
        <shard>01</shard>
        <replica>01</replica>
        <cluster>cluster_1S_2R</cluster>
    </macros>
</clickhouse>
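To see what the macros do for you: when a DDL statement contains {shard}, {replica}, or {cluster}, the server substitutes the values configured above before executing it. A stand-alone illustration of that substitution (illustrative only; the real substitution happens inside the server, not via sed):

```shell
# Emulate the server-side macro substitution on a DDL string.
substitute_macros() {
  # $1 = DDL text; remaining arguments = key=value macro pairs
  text=$1; shift
  for kv in "$@"; do
    key=${kv%%=*}
    val=${kv#*=}
    text=$(printf '%s' "$text" | sed "s/{$key}/$val/g")
  done
  printf '%s\n' "$text"
}

substitute_macros "CREATE TABLE db1.t ON CLUSTER '{cluster}' -- shard {shard}, replica {replica}" \
  shard=01 replica=01 cluster=cluster_1S_2R
# prints: CREATE TABLE db1.t ON CLUSTER 'cluster_1S_2R' -- shard 01, replica 01
```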

Replication and sharding configuration

Starting from the top:

  • The remote_servers section of the XML specifies each cluster in the environment. The attribute replace=true replaces the sample remote_servers in the default ClickHouse configuration with the remote_servers configuration specified in this file. Without this attribute, the remote servers in this file would be appended to the default sample list.
  • In this example there is one cluster, named cluster_1S_2R.
  • A secret is created for the cluster cluster_1S_2R with the value mysecretphrase. The secret is shared among all remote servers in the environment to ensure that the correct servers are joined together.
  • The cluster cluster_1S_2R has 1 shard and 2 replicas. Look at the architecture diagram at the beginning of this document and compare it with the shard definition in the XML below. The shard definition contains two replicas, with the host and port specified for each. One replica is stored on clickhouse-01 and the other on clickhouse-02.
  • internal_replication is set to true for the shard. Each shard can define the internal_replication parameter in the configuration file; when it is true, a write operation selects the first healthy replica and writes the data to it.

/etc/clickhouse-server/config.d/remote-servers.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/remote-servers.xml
<clickhouse>
    <remote_servers replace="true">
        <cluster_1S_2R>
            <secret>mysecretphrase</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>clickhouse-01</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>clickhouse-02</host>
                    <port>9000</port>
                </replica>
            </shard>
        </cluster_1S_2R>
    </remote_servers>
</clickhouse>

Configure the use of Keeper

This configuration file, use-keeper.xml, configures ClickHouse Server to use ClickHouse Keeper for coordinating replication and distributed DDL. It specifies that ClickHouse Server should use Keeper on nodes clickhouse-keeper-01 through -03 on port 9181, and the file is identical on clickhouse-01 and clickhouse-02.

/etc/clickhouse-server/config.d/use-keeper.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/use-keeper.xml
<clickhouse>
    <zookeeper>
        <!-- where are the ZK nodes -->
        <node>
            <host>clickhouse-keeper-01</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-02</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-03</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>

clickhouse-02 configuration

Because the configuration on clickhouse-01 and clickhouse-02 is very similar, only the differences are pointed out here.

Network and logging configuration

This file is the same on clickhouse-01 and clickhouse-02, except for display_name.

/etc/clickhouse-server/config.d/network-and-logging.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/network-and-logging.xml
<clickhouse>
    <logger>
        <level>debug</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <display_name>cluster_1S_2R node 2</display_name>
    <listen_host>0.0.0.0</listen_host>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
</clickhouse>

Macro configuration

The macro configuration differs between clickhouse-01 and clickhouse-02: replica is set to 02 on this node.

/etc/clickhouse-server/config.d/macros.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/macros.xml
<clickhouse>
    <macros>
        <shard>01</shard>
        <replica>02</replica>
        <cluster>cluster_1S_2R</cluster>
    </macros>
</clickhouse>

Replication and sharding configuration

This file is identical on clickhouse-01 and clickhouse-02.

/etc/clickhouse-server/config.d/remote-servers.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/remote-servers.xml
<clickhouse>
    <remote_servers replace="true">
        <cluster_1S_2R>
            <secret>mysecretphrase</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>clickhouse-01</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>clickhouse-02</host>
                    <port>9000</port>
                </replica>
            </shard>
        </cluster_1S_2R>
    </remote_servers>
</clickhouse>

Configure the use of Keeper

This file is identical on clickhouse-01 and clickhouse-02.

/etc/clickhouse-server/config.d/use-keeper.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/use-keeper.xml
<clickhouse>
    <zookeeper>
        <!-- where are the ZK nodes -->
        <node>
            <host>clickhouse-keeper-01</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-02</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-03</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>

clickhouse-keeper-01 configuration

Best practices

When configuring ClickHouse Keeper by editing its configuration file, you should:

  • Back up /etc/clickhouse-keeper/keeper_config.xml
  • Then edit the /etc/clickhouse-keeper/keeper_config.xml file

ClickHouse Keeper provides the coordination system for data replication and distributed DDL query execution, and is compatible with Apache ZooKeeper. This configuration enables ClickHouse Keeper on port 9181. The server_id setting (1 for this instance) is the only difference between the keeper_config.xml files on the three servers: clickhouse-keeper-02 has server_id set to 2, and clickhouse-keeper-03 has server_id set to 3. The raft_configuration section is identical on all three servers; note how each server_id corresponds to a server entry in the raft configuration.

Note

If a Keeper node is ever replaced or rebuilt, do not reuse an existing server_id. For example, if the Keeper node with server_id 2 is rebuilt, set its server_id to 4 or higher.

Back up the keeper_config.xml configuration on all Keeper nodes

# Back up the configuration
cp /etc/clickhouse-keeper/keeper_config.xml{,.bak}
# Clear the default configuration
echo > /etc/clickhouse-keeper/keeper_config.xml

/etc/clickhouse-keeper/keeper_config.xml on clickhouse-keeper-01

root@clickhouse-keeper-01:~# cat /etc/clickhouse-keeper/keeper_config.xml
<clickhouse>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>1</server_id>
        <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>clickhouse-keeper-01</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>clickhouse-keeper-02</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>clickhouse-keeper-03</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>
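Because the three keeper_config.xml files differ only in server_id, the repeated raft_configuration entries can be generated instead of hand-copied. A small sketch assuming the clickhouse-keeper-01 to -03 naming used here:

```shell
# Emit the three <server> entries for the raft_configuration section.
raft_servers() {
  for i in 1 2 3; do
    printf '<server>\n    <id>%s</id>\n    <hostname>clickhouse-keeper-%02d</hostname>\n    <port>9234</port>\n</server>\n' "$i" "$i"
  done
}
raft_servers
```

Redirect the output into the raft_configuration element of each node's file (indenting as needed); only the server_id element outside this section needs to be set per node.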

clickhouse-keeper-02 configuration

There is only one line of difference between clickhouse-keeper-01 and clickhouse-keeper-02: server_id is set to 2 on this node.

/etc/clickhouse-keeper/keeper_config.xml on clickhouse-keeper-02

root@clickhouse-keeper-02:~# cat /etc/clickhouse-keeper/keeper_config.xml
<clickhouse>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>2</server_id>
        <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>clickhouse-keeper-01</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>clickhouse-keeper-02</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>clickhouse-keeper-03</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>

clickhouse-keeper-03 configuration

There is only one line of difference between clickhouse-keeper-01 and clickhouse-keeper-03: server_id is set to 3 on this node.

/etc/clickhouse-keeper/keeper_config.xml on clickhouse-keeper-03

root@clickhouse-keeper-03:~# cat /etc/clickhouse-keeper/keeper_config.xml
<clickhouse>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>3</server_id>
        <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>clickhouse-keeper-01</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>clickhouse-keeper-02</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>clickhouse-keeper-03</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>

Start the services

Run on clickhouse-keeper-01 through -03 to enable and start the Keeper service:

systemctl enable --now clickhouse-keeper.service

Confirm the service status on clickhouse-keeper-01

root@clickhouse-keeper-01:~# systemctl status clickhouse-keeper.service 
● clickhouse-keeper.service - ClickHouse Keeper - zookeeper compatible distributed coordination server
     Loaded: loaded (/lib/systemd/system/clickhouse-keeper.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:44:26 CST; 3h 0min ago
   Main PID: 3460 (clickhouse-keep)
      Tasks: 41 (limit: 4556)
     Memory: 58.8M
        CPU: 1min 13.000s
     CGroup: /system.slice/clickhouse-keeper.service
             └─3460 /usr/bin/clickhouse-keeper --config=/etc/clickhouse-keeper/keeper_config.xml --pid-file=/run/clickhouse-keeper/clickhouse-keeper.pid

Oct 27 19:44:26 clickhouse-keeper-01 systemd[1]: Started ClickHouse Keeper - zookeeper compatible distributed coordination server.
Oct 27 19:44:26 clickhouse-keeper-01 clickhouse-keeper[3460]: Processing configuration file '/etc/clickhouse-keeper/keeper_config.xml'.
Oct 27 19:44:26 clickhouse-keeper-01 clickhouse-keeper[3460]: Logging trace to /var/log/clickhouse-keeper/clickhouse-keeper.log
Oct 27 19:44:26 clickhouse-keeper-01 clickhouse-keeper[3460]: Logging errors to /var/log/clickhouse-keeper/clickhouse-keeper.err.log

Confirm the service status on clickhouse-keeper-02

root@clickhouse-keeper-02:~# systemctl status clickhouse-keeper.service 
● clickhouse-keeper.service - ClickHouse Keeper - zookeeper compatible distributed coordination server
     Loaded: loaded (/lib/systemd/system/clickhouse-keeper.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:44:28 CST; 3h 0min ago
   Main PID: 3053 (clickhouse-keep)
      Tasks: 41 (limit: 4556)
     Memory: 44.7M
        CPU: 1min 557ms
     CGroup: /system.slice/clickhouse-keeper.service
             └─3053 /usr/bin/clickhouse-keeper --config=/etc/clickhouse-keeper/keeper_config.xml --pid-file=/run/clickhouse-keeper/clickhouse-keeper.pid

Oct 27 19:44:28 clickhouse-keeper-02 systemd[1]: Started ClickHouse Keeper - zookeeper compatible distributed coordination server.
Oct 27 19:44:28 clickhouse-keeper-02 clickhouse-keeper[3053]: Processing configuration file '/etc/clickhouse-keeper/keeper_config.xml'.
Oct 27 19:44:28 clickhouse-keeper-02 clickhouse-keeper[3053]: Logging trace to /var/log/clickhouse-keeper/clickhouse-keeper.log
Oct 27 19:44:28 clickhouse-keeper-02 clickhouse-keeper[3053]: Logging errors to /var/log/clickhouse-keeper/clickhouse-keeper.err.log

Confirm the service status on clickhouse-keeper-03

root@clickhouse-keeper-03:~# systemctl status clickhouse-keeper.service
● clickhouse-keeper.service - ClickHouse Keeper - zookeeper compatible distributed coordination server
     Loaded: loaded (/lib/systemd/system/clickhouse-keeper.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:44:30 CST; 3h 0min ago
   Main PID: 2991 (clickhouse-keep)
      Tasks: 41 (limit: 4556)
     Memory: 43.4M
        CPU: 1min 336ms
     CGroup: /system.slice/clickhouse-keeper.service
             └─2991 /usr/bin/clickhouse-keeper --config=/etc/clickhouse-keeper/keeper_config.xml --pid-file=/run/clickhouse-keeper/clickhouse-keeper.pid

Oct 27 19:44:30 clickhouse-keeper-03 systemd[1]: Started ClickHouse Keeper - zookeeper compatible distributed coordination server.
Oct 27 19:44:30 clickhouse-keeper-03 clickhouse-keeper[2991]: Processing configuration file '/etc/clickhouse-keeper/keeper_config.xml'.
Oct 27 19:44:30 clickhouse-keeper-03 clickhouse-keeper[2991]: Logging trace to /var/log/clickhouse-keeper/clickhouse-keeper.log
Oct 27 19:44:30 clickhouse-keeper-03 clickhouse-keeper[2991]: Logging errors to /var/log/clickhouse-keeper/clickhouse-keeper.err.log

Run on clickhouse-01 and clickhouse-02 to enable and start the ClickHouse server:

systemctl enable --now clickhouse-server.service
systemctl restart clickhouse-server.service

Confirm the service status on clickhouse-01

root@clickhouse-01:~# systemctl status clickhouse-server.service 
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
     Loaded: loaded (/lib/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:55:27 CST; 2h 51min ago
   Main PID: 3107 (clickhouse-serv)
      Tasks: 701 (limit: 4556)
     Memory: 802.6M
        CPU: 25min 4.495s
     CGroup: /system.slice/clickhouse-server.service
             ├─3104 clickhouse-watchdog "" "" "" "" "" "" "" --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
             └─3107 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid

Oct 27 19:55:26 clickhouse-01 systemd[1]: Starting ClickHouse Server (analytic DBMS for big data)...
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Processing configuration file '/etc/clickhouse-server/config.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/macros.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/network-and-logging.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/remote-servers.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/use-keeper.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Logging debug to /var/log/clickhouse-server/clickhouse-server.log
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Oct 27 19:55:26 clickhouse-01 systemd[1]: clickhouse-server.service: Supervising process 3107 which is not our child. We'll most likely not notice when it exits.
Oct 27 19:55:27 clickhouse-01 systemd[1]: Started ClickHouse Server (analytic DBMS for big data).
root@clickhouse-01:~# 

Confirm the service status on clickhouse-02

root@clickhouse-02:~# systemctl status clickhouse-server.service 
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
     Loaded: loaded (/lib/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:55:27 CST; 2h 51min ago
   Main PID: 3107 (clickhouse-serv)
      Tasks: 701 (limit: 4556)
     Memory: 759.0M
        CPU: 25min 6.801s
     CGroup: /system.slice/clickhouse-server.service
             ├─3104 clickhouse-watchdog "" "" "" "" "" "" "" --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
             └─3107 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid

Oct 27 19:55:26 clickhouse-02 systemd[1]: Starting ClickHouse Server (analytic DBMS for big data)...
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Processing configuration file '/etc/clickhouse-server/config.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/macros.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/network-and-logging.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/remote-servers.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/use-keeper.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Logging debug to /var/log/clickhouse-server/clickhouse-server.log
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Oct 27 19:55:26 clickhouse-02 systemd[1]: clickhouse-server.service: Supervising process 3107 which is not our child. We'll most likely not notice when it exits.
Oct 27 19:55:27 clickhouse-02 systemd[1]: Started ClickHouse Server (analytic DBMS for big data).
root@clickhouse-02:~# 

Test the cluster

To get experience with ReplicatedMergeTree and ClickHouse Keeper, you can run the following commands, which:

  • Create a database on the cluster configured above
  • Create a table in that database using the ReplicatedMergeTree table engine
  • Insert data on one node and query it on another node
  • Stop one ClickHouse server node
  • Insert more data on the running node
  • Restart the stopped node
  • Verify that the data is available when querying the restarted node

Verify that ClickHouse Keeper is running

The mntr command is used to verify that ClickHouse Keeper is running and to get state information about the relationship of the three Keeper nodes. In the configuration used in this example, the three nodes work together: they elect a leader, and the remaining nodes become followers. The mntr command returns information related to performance and to whether a particular node is a follower or the leader.

Tip

You may need to install netcat in order to send the mntr command to Keeper. See the nmap.org page for download information.

Run from a shell on clickhouse-keeper-01, clickhouse-keeper-02, and clickhouse-keeper-03:

echo mntr | nc localhost 9181

Response from a follower

zk_version  v23.3.1.2823-testing-46e85357ce2da2a99f56ee83a079e892d7ec3726
zk_avg_latency  0
zk_max_latency  0
zk_min_latency  0
zk_packets_received 0
zk_packets_sent 0
zk_num_alive_connections    0
zk_outstanding_requests 0
zk_server_state follower
zk_znode_count  6
zk_watch_count  0
zk_ephemerals_count 0
zk_approximate_data_size    1271
zk_key_arena_size   4096
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   46
zk_max_file_descriptor_count    18446744073709551615

Response from the leader

zk_version  v23.3.1.2823-testing-46e85357ce2da2a99f56ee83a079e892d7ec3726
zk_avg_latency  0
zk_max_latency  0
zk_min_latency  0
zk_packets_received 0
zk_packets_sent 0
zk_num_alive_connections    0
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count  6
zk_watch_count  0
zk_ephemerals_count 0
zk_approximate_data_size    1271
zk_key_arena_size   4096
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   48
zk_max_file_descriptor_count    18446744073709551615
zk_followers    2
zk_synced_followers 2
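When you check all three nodes, the zk_server_state line is the one to watch: exactly one node should report leader and the other two follower. A small helper that pulls that field out of mntr output (a sketch; it assumes the whitespace-separated key/value format shown above):

```shell
# Print the zk_server_state value from mntr output supplied on stdin.
keeper_role() {
  awk '$1 == "zk_server_state" { print $2 }'
}

# Against a live node: echo mntr | nc clickhouse-keeper-01 9181 | keeper_role
printf 'zk_version\tv23.3.1\nzk_server_state\tfollower\nzk_znode_count\t6\n' | keeper_role
# prints: follower
```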

Verify ClickHouse cluster functionality

In one shell, connect to node clickhouse-01 with clickhouse-client; in another shell, connect to node clickhouse-02 with clickhouse-client.

1. Create a database on the cluster configured above

Run on node clickhouse-01 or clickhouse-02:

CREATE DATABASE db1 ON CLUSTER cluster_1S_2R
┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ clickhouse-02 │ 9000 │      0 │       │                   1 │                0 │
│ clickhouse-01 │ 9000 │      0 │       │                   0 │                0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

2. Create a table in the database using the ReplicatedMergeTree table engine

Run on node clickhouse-01 or clickhouse-02:

CREATE TABLE db1.table1 ON CLUSTER cluster_1S_2R
(
    `id` UInt64,
    `column1` String
)
ENGINE = ReplicatedMergeTree
ORDER BY id
┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ clickhouse-02 │ 9000 │      0 │       │                   1 │                0 │
│ clickhouse-01 │ 9000 │      0 │       │                   0 │                0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

3. Insert data on one node and query it on another node

Run on node clickhouse-01:

INSERT INTO db1.table1 (id, column1) VALUES (1, 'abc');

4. Query the table on node clickhouse-02

Run on node clickhouse-02:

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘

5. Insert data on the other node and query it on node clickhouse-01

Run on node clickhouse-02:

INSERT INTO db1.table1 (id, column1) VALUES (2, 'def');

Run on node clickhouse-01:

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘
┌─id─┬─column1─┐
│  2 │ def     │
└────┴─────────┘

6. Stop one ClickHouse server node. Stop one of the ClickHouse server nodes with the same kind of OS command that you used to start it: if you started the node with systemctl start, use systemctl stop to stop it.

root@clickhouse-01:~# systemctl stop clickhouse-server.service

7. Insert more data on the running node

Run on the node that is still running:

INSERT INTO db1.table1 (id, column1) VALUES (3, 'ghi');

Select the data:

Run on the node that is still running:

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘
┌─id─┬─column1─┐
│  2 │ def     │
└────┴─────────┘
┌─id─┬─column1─┐
│  3 │ ghi     │
└────┴─────────┘

8. Restart the stopped node and query it

root@clickhouse-01:~# systemctl start clickhouse-server.service

Run on the restarted node:

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘
┌─id─┬─column1─┐
│  2 │ def     │
└────┴─────────┘
┌─id─┬─column1─┐
│  3 │ ghi     │
└────┴─────────┘
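As a final consistency check, you can dump the full result set from each replica to a file (for example with clickhouse-client --query "SELECT * FROM db1.table1 ORDER BY id" > replica1.tsv on each node) and diff the files. A sketch of the comparison step; the sample files below stand in for real dumps:

```shell
# Compare two saved result sets; identical files mean the replicas converged.
compare_replicas() {
  if diff -q "$1" "$2" > /dev/null; then
    echo "replicas consistent"
  else
    echo "replicas differ"
  fi
}

# Stand-in dumps holding the three rows inserted during the test.
printf '1\tabc\n2\tdef\n3\tghi\n' > /tmp/replica1.tsv
printf '1\tabc\n2\tdef\n3\tghi\n' > /tmp/replica2.tsv
compare_replicas /tmp/replica1.tsv /tmp/replica2.tsv
# prints: replicas consistent
```

Remember to sort the query (ORDER BY id) on both nodes so the dumps are comparable line by line.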
