
Database backup, master-slave, and cluster configuration

  • 1 MySQL
    • 1.1 Installing MySQL with Docker
    • 1.2 Master-slave replication
      • 1.2.1 Master node configuration
      • 1.2.2 Slave node configuration
      • 1.2.3 Creating the replication user
      • 1.2.4 Starting replication
      • 1.2.5 Verifying replication
    • 1.3 Master-slave switchover
      • 1.3.1 Set the master to read-only (on 192.168.1.151)
      • 1.3.2 Check that replication has caught up (on 192.168.1.152)
      • 1.3.3 Stop and reset the slave (on 192.168.1.152)
      • 1.3.4 Remove the read-only setting on the former slave (on 192.168.1.152)
      • 1.3.5 Switching roles
      • 1.3.6 Verification
  • 2 Redis
    • 2.1 Redis master-slave replication
    • 2.2 Redis Sentinel
    • 2.3 Redis Cluster
    • 2.4 Differences between replication, Sentinel, and Cluster
  • 3 MongoDB
    • 3.1 MongoDB master-slave cluster (not recommended)
    • 3.2 MongoDB replica set (Replica Set) cluster
      • 3.2.1 Building the cluster
      • 3.2.2 Testing
      • 3.2.3 Connecting to the replica set with Navicat Premium
      • 3.2.4 Connecting to the replica set from Golang
    • 3.3 MongoDB sharded cluster (not yet built successfully)

1 MySQL

Server configuration:

OS version   IP              MySQL version   Role
7.9.2009     192.168.1.151   8.0.21          master
7.9.2009     192.168.1.152   8.0.21          slave

1.1 Installing MySQL with Docker

Install MySQL on both 192.168.1.151 and 192.168.1.152.

  1. Create the mount directories
mkdir -p /opt/soft/mysql/{conf,data,log}
  2. Pull the image
docker pull mysql:8.0.21
  3. docker-compose.yaml (container name dc_mysql_master on the master, dc_mysql_slave on the slave)
version: '3'
services:
  mysql:
    image: mysql:8.0.21
    container_name: dc_mysql_master
    restart: always
    environment:
      TZ: Asia/Shanghai
      MYSQL_ROOT_PASSWORD: 123456
    ports:
       - 4306:3306
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mysql/data/:/var/lib/mysql/
      - /opt/soft/mysql/conf/my.cnf:/etc/mysql/my.cnf
      - /opt/soft/mysql/log/:/var/log/mysql/
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command:
      --default-authentication-plugin=mysql_native_password
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
  4. Create the configuration file:
vim /opt/soft/mysql/conf/my.cnf

my.cnf:

[client]
# default character set for the client: utf8mb4
default-character-set=utf8mb4
[mysql]
# default character set for the mysql client: utf8mb4
default-character-set=utf8mb4
[mysqld]
# avoid the MySQL 8.0 ONLY_FULL_GROUP_BY / GROUP BY issue
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
# directory allowed for import/export operations
secure_file_priv=/var/lib/mysql
# listen on all IPs
bind-address = 0.0.0.0
  5. Create the MySQL container
docker-compose up -d
  6. Open the port (4306)
# open port 4306
firewall-cmd --zone=public --add-port=4306/tcp --permanent

# reload the firewall
firewall-cmd --reload

# list open ports
firewall-cmd --list-port

1.2 Master-slave replication

1.2.1 Master node configuration

  1. Edit my.cnf and add the following under [mysqld]:
#==================== replication settings =========================
# node id; server_id must be unique for each MySQL instance
server_id=1
#[optional] file names for the binlog and the binlog index
log_bin=mysql-bin
log_bin_index=binlog.index
#[optional] enable the relay log
relay-log=mysql-relay
#[optional] maximum size of a single binlog file, default 1G
#max_binlog_size=500M
#[optional] binlog format: STATEMENT, ROW or MIXED
binlog_format=row
#[optional] 0 (default) = read-write (master), 1 = read-only (slave)
read-only=0
#[optional] how long to keep binlog files, in seconds (default: never delete)
#binlog_expire_logs_seconds=6000
#[optional] databases that should NOT be replicated
#binlog-ignore-db=test
#[optional] databases to replicate; everything is logged by default, e.g. binlog-do-db=atguigu_master_slave
#binlog-do-db=name_of_the_database_to_replicate
  2. Complete my.cnf
[client]
# default character set for the client: utf8mb4
default-character-set=utf8mb4
[mysql]
# default character set for the mysql client: utf8mb4
default-character-set=utf8mb4
[mysqld]
# avoid the MySQL 8.0 ONLY_FULL_GROUP_BY / GROUP BY issue
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
# directory allowed for import/export operations
secure_file_priv=/var/lib/mysql
# listen on all IPs
bind-address = 0.0.0.0

#==================== replication settings =========================
# node id; server_id must be unique for each MySQL instance
server_id=1
#[optional] file names for the binlog and the binlog index
log_bin=mysql-bin
log_bin_index=binlog.index
#[optional] enable the relay log
relay-log=mysql-relay
#[optional] maximum size of a single binlog file, default 1G
#max_binlog_size=500M
#[optional] binlog format: STATEMENT, ROW or MIXED
binlog_format=row
#[optional] 0 (default) = read-write (master), 1 = read-only (slave)
read-only=0
#[optional] how long to keep binlog files, in seconds (default: never delete)
#binlog_expire_logs_seconds=6000
#[optional] databases that should NOT be replicated
#binlog-ignore-db=test
#[optional] databases to replicate; everything is logged by default, e.g. binlog-do-db=atguigu_master_slave
#binlog-do-db=name_of_the_database_to_replicate
  3. Restart the database after changing the configuration.

1.2.2 Slave node configuration

Difference between the master and slave configurations: because we will demonstrate a master-slave switchover later, both nodes enable binlog and relay log in advance. The two configurations are therefore almost identical; the only options that differ are server_id and read-only.

  1. Edit my.cnf and add the following under [mysqld]:
#==================== replication settings =========================
# node id; server_id must be unique for each MySQL instance
server_id=2
#[optional] file names for the binlog and the binlog index
log_bin=mysql-log
log_bin_index=binlog.index
#[optional] enable the relay log
relay-log=mysql-relay
#[optional] maximum size of a single binlog file, default 1G
#max_binlog_size=500M
#[optional] binlog format: STATEMENT, ROW or MIXED
binlog_format=row
#[optional] 0 (default) = read-write (master), 1 = read-only (slave)
read-only=1
#[optional] how long to keep binlog files, in seconds (default: never delete)
#binlog_expire_logs_seconds=6000
#[optional] databases that should NOT be replicated
#binlog-ignore-db=test
#[optional] databases to replicate; everything is logged by default, e.g. binlog-do-db=atguigu_master_slave
#binlog-do-db=name_of_the_database_to_replicate
  2. Complete my.cnf
[client]
# default character set for the client: utf8mb4
default-character-set=utf8mb4
[mysql]
# default character set for the mysql client: utf8mb4
default-character-set=utf8mb4
[mysqld]
# avoid the MySQL 8.0 ONLY_FULL_GROUP_BY / GROUP BY issue
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
# directory allowed for import/export operations
secure_file_priv=/var/lib/mysql
# listen on all IPs
bind-address = 0.0.0.0

#==================== replication settings =========================
# node id; server_id must be unique for each MySQL instance
server_id=2
#[optional] file names for the binlog and the binlog index
log_bin=mysql-log
log_bin_index=binlog.index
#[optional] enable the relay log
relay-log=mysql-relay
#[optional] maximum size of a single binlog file, default 1G
#max_binlog_size=500M
#[optional] binlog format: STATEMENT, ROW or MIXED
binlog_format=row
#[optional] 0 (default) = read-write (master), 1 = read-only (slave)
read-only=1
#[optional] how long to keep binlog files, in seconds (default: never delete)
#binlog_expire_logs_seconds=6000
#[optional] databases that should NOT be replicated
#binlog-ignore-db=test
#[optional] databases to replicate; everything is logged by default, e.g. binlog-do-db=atguigu_master_slave
#binlog-do-db=name_of_the_database_to_replicate
  3. Restart the database after changing the configuration.

1.2.3 Creating the replication user

Run the following on both the master and the slave.
On the master:

# enter the container
docker exec -it dc_mysql_master /bin/sh

# log in
mysql -uroot -p

# create the slave1 user
CREATE USER 'slave1'@'%' IDENTIFIED BY '123456';

# grant the slave1 user the replication privilege
GRANT replication slave on *.* to 'slave1'@'%';

# flush privileges
flush privileges;

On the slave:

# enter the container
docker exec -it dc_mysql_slave /bin/sh

# log in
mysql -uroot -p

# create the slave1 user
CREATE USER 'slave1'@'%' IDENTIFIED BY '123456';

# grant the slave1 user the replication privilege
GRANT replication slave on *.* to 'slave1'@'%';

# flush privileges
flush privileges;

1.2.4 Starting replication

While replication is being set up, do not write any data, or the two nodes may end up inconsistent. Ideally, start replication right after the databases are installed and before they are used.

  1. Check the master's current binlog position (run on the master, 192.168.1.151):
# enter the container
docker exec -it dc_mysql_master /bin/sh

# log in
mysql -uroot -p

# check the binlog position
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000005 |      156 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
  2. Point the slave at the master (run on the slave, 192.168.1.152; note the port):
# enter the container
docker exec -it dc_mysql_slave /bin/sh

# log in
mysql -uroot -p

# set the master information on the slave
CHANGE MASTER TO MASTER_HOST='192.168.1.151', MASTER_PORT=4306, MASTER_USER='slave1', MASTER_PASSWORD='123456', MASTER_LOG_FILE='mysql-bin.000005', MASTER_LOG_POS=156;

# start replication on the slave
start slave;

# check the replication status
show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.151
                  Master_User: slave1
                  Master_Port: 4306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 1459
               Relay_Log_File: mysql-relay.000002
                Relay_Log_Pos: 1627
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1459
              Relay_Log_Space: 1832
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 54db2059-a589-11ef-a788-0242ac120002
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
       Master_public_key_path:
        Get_master_public_key: 0
            Network_Namespace:
1 row in set (0.00 sec)

Note: when checking replication on the slave with show slave status, Slave_IO_Running and Slave_SQL_Running should both be Yes and Last_Error should be empty; that means replication has started and is running normally.
  If either of them is No, check whether the firewall is blocking the port and whether the slave1 user was created successfully on the master (from the slave, run "mysql -h <master ip> -P 4306 -u slave1 -p123456" to see whether you can log in to the master).

docker exec -it dc_mysql_slave /bin/sh

mysql -h 192.168.1.151 -P 4306 -u slave1 -p123456;

If one of them is No, troubleshoot as follows:
1. Wait a moment first; after start slave the status does not always turn to Yes immediately.
2. Check whether the firewalls on the master and slave servers allow the connection.
3. Check whether the slave1 user was created successfully on the master (from the slave, run "mysql -h <master ip> -P 4306 -uslave1 -p123456" to see whether you can log in to the master).
4. If the cause is a wrong parameter in the earlier "change master to ..." command, run "stop slave;" on the slave, re-run "show master status;" on the master to get the latest binlog file and position, re-run "change master to ..." on the slave, and finally run "start slave;" on the slave again.

  3. Cancelling replication:
    • Stop the replication threads on the slave:
STOP SLAVE;

This stops the replication threads on the slave, i.e. both the I/O thread and the SQL thread.
    • Remove the replication configuration from the slave:
If you want to cancel replication completely and no longer use this server as a replica, remove the replication configuration. Run the following on the slave:

RESET SLAVE ALL;

This clears the slave's replication configuration (master host, credentials and log positions) and removes its relay logs. Note that RESET MASTER is a different command: it only purges the binary logs of the server it is run on, so if you still need those logs, back them up first.

1.2.5 Verifying replication

  1. On the master (192.168.1.151), create a database, create a table and insert rows; each operation is replicated to the slave in real time.
  2. Verify on the slave (192.168.1.152) that everything has been synchronized, for example as sketched below.

At this point, replication is up and running.
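A minimal verification sketch using the mysql client on each node; the database name repl_test is just a throwaway example, not part of the original setup:

-- on the master (192.168.1.151, port 4306): create a test database, table and row
CREATE DATABASE repl_test;
CREATE TABLE repl_test.t1 (id INT PRIMARY KEY, note VARCHAR(50));
INSERT INTO repl_test.t1 VALUES (1, 'written on the master');

-- on the slave (192.168.1.152, port 4306): the row should appear almost immediately
SELECT * FROM repl_test.t1;

-- clean up on the master when done (the DROP is replicated too)
DROP DATABASE repl_test;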

1.3 Master-slave switchover

Overview:
A switchover essentially swaps the configuration of the master and the slave:

  1. swap the read/write permissions of the two nodes;
  2. swap the replication configuration of the two nodes.

The prerequisite is a working master-slave setup, built as described in the previous two subsections.
Server configuration:

OS version   IP              MySQL version   Before switchover   After switchover
7.9.2009     192.168.1.151   8.0.21          master              slave
7.9.2009     192.168.1.152   8.0.21          slave               master

1.3.1 Set the master to read-only (on 192.168.1.151)

Put the master into read-only mode so that no writes happen during the switchover, which would otherwise leave the two nodes inconsistent afterwards.
Note: read-only mode set via SQL is temporary and is lost on restart. If it should survive a MySQL restart, put the read_only options into my.cnf as well (a sketch follows).
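A minimal my.cnf sketch of the persistent equivalent, assuming you really want the node to stay read-only across restarts (add under [mysqld]):

[mysqld]
read_only=1
super_read_only=1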

# enter the container
docker exec -it dc_mysql_master /bin/sh

# log in
mysql -uroot -p

# check the read-only settings
show VARIABLES like '%read_only%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_read_only      | OFF   |
| read_only             | OFF   |
| super_read_only       | OFF   |
| transaction_read_only | OFF   |
+-----------------------+-------+
4 rows in set (0.00 sec)


# enable global read-only for everyone (normal users and the root superuser can no longer write)
set global super_read_only='on';

# enable global read-only for normal users; in theory this is not needed once super_read_only is on
set global read_only='on';

# check the read-only settings
show VARIABLES like '%read_only%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_read_only      | OFF   |
| read_only             | ON    |
| super_read_only       | ON    |
| transaction_read_only | OFF   |
+-----------------------+-------+
4 rows in set (0.00 sec)

1.3.2 Check that replication has caught up (on 192.168.1.152)

Run "show slave status\G;" on the slave and check that the output matches the following:

  1. Slave_IO_Running: Yes
  2. Slave_SQL_Running: Yes
  3. Seconds_Behind_Master: 0
  4. Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
    Note: Slave_IO_Running and Slave_SQL_Running both being Yes means replication is healthy, and Seconds_Behind_Master being 0 means the slave currently has the same data as the master.

The concrete steps:

# enter the container
docker exec -it dc_mysql_slave /bin/sh

# log in
mysql -uroot -p

show slave status\G;

1.3.3 Stop and reset the slave (on 192.168.1.152)

# enter the container
docker exec -it dc_mysql_slave /bin/sh

# log in
mysql -uroot -p

# stop replication on the slave
stop slave;

# clear the slave's replication metadata and delete/reset its relay log files
reset slave all;

1.3.4 Remove the read-only setting on the former slave (on 192.168.1.152)

Note: read-only mode set via SQL is temporary and is lost on restart. If the change should survive a MySQL restart, either adjust the read_only options in my.cnf or simply remove them from my.cnf, since read-only is off by default.

# enter the container
docker exec -it dc_mysql_slave /bin/sh

# log in
mysql -uroot -p

# check the read-only settings
show VARIABLES like '%read_only%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_read_only      | OFF   |
| read_only             | ON    |
| super_read_only       | OFF   |
| transaction_read_only | OFF   |
+-----------------------+-------+
4 rows in set (0.00 sec)


# disable global read-only so the root superuser can write again
set global super_read_only='off';

# disable global read-only so normal users can write again
set global read_only='off';

# check the read-only settings
show VARIABLES like '%read_only%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_read_only      | OFF   |
| read_only             | OFF   |
| super_read_only       | OFF   |
| transaction_read_only | OFF   |
+-----------------------+-------+
4 rows in set (0.00 sec)

1.3.5 Switching roles

Do not perform any writes during the switchover, or the two nodes may end up inconsistent afterwards.

  1. Check the latest binlog file and position on the former slave (on 192.168.1.152).
# enter the container
docker exec -it dc_mysql_slave /bin/sh

# log in
mysql -uroot -p

show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-log.000001 |     3096 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
  2. Make the former master replicate from the former slave (on 192.168.1.151)
# enter the container
docker exec -it dc_mysql_master /bin/sh

# log in
mysql -uroot -p

# set the master information (note that the log file name differs from the one used in section 1.2, not just the index)
CHANGE MASTER TO MASTER_HOST='192.168.1.152', MASTER_PORT=4306, MASTER_USER='slave1', MASTER_PASSWORD='123456', MASTER_LOG_FILE='mysql-log.000001', MASTER_LOG_POS=3096;

# start replication
start slave;

# check the replication status
show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.152
                  Master_User: slave1
                  Master_Port: 4306
                Connect_Retry: 60
              Master_Log_File: mysql-log.000001
          Read_Master_Log_Pos: 3096
               Relay_Log_File: mysql-relay.000002
                Relay_Log_Pos: 324
        Relay_Master_Log_File: mysql-log.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 3096
              Relay_Log_Space: 529
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 2
                  Master_UUID: 220f1fd5-a620-11ef-a9f5-0242ac120002
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
       Master_public_key_path:
        Get_master_public_key: 0
            Network_Namespace:
1 row in set (0.00 sec)

Note: when checking replication with show slave status, Slave_IO_Running and Slave_SQL_Running should both be Yes and Last_Error should be empty, which means replication has started and is running normally.

1.3.6 Verification

  1. Insert rows on the new master (192.168.1.152).
  2. Check the table on the new slave (192.168.1.151); the rows inserted on the new master are replicated to the new slave automatically (see the sketch below).
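A quick sketch of that check, again using a hypothetical test table that is not part of the original setup:

-- on the new master (192.168.1.152, port 4306)
CREATE DATABASE IF NOT EXISTS switch_test;
CREATE TABLE switch_test.t1 (id INT PRIMARY KEY, note VARCHAR(50));
INSERT INTO switch_test.t1 VALUES (1, 'written after switchover');

-- on the new slave (192.168.1.151, port 4306): the row should be visible here
SELECT * FROM switch_test.t1;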

2 Redis

Redis master-slave replication combined with Sentinel is the recommended setup.

2.1 Redis master-slave replication

Server configuration:

OS version   IP              Redis version   Role      Port
7.9.2009     192.168.1.151   7.4.0           master    26379
7.9.2009     192.168.1.152   7.4.0           slave 1   26379
7.9.2009     192.168.1.153   7.4.0           slave 2   26379
  1. On each node, create the mount directories and set permissions
mkdir -p /opt/soft/redis/redis_server/{conf,data,log}
chmod 777 /opt/soft/redis/redis_server/data
chmod 777 /opt/soft/redis/redis_server/conf
chmod 777 /opt/soft/redis/redis_server/log
  2. Pull the image on each node
docker pull redis:7.4.0
  3. docker-compose.yaml on each node
    Create docker-compose.yaml under /opt/soft/redis/redis_server on each node
cd /opt/soft/redis/redis_server

vim docker-compose.yaml

docker-compose.yaml contents for each node:

  • Master node:
version: "3.1"
services:
  redis_master:
    container_name: redis_master
    restart: always
    image: redis:7.4.0
    ports:
      - 26379:6379
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_server/data:/data
      - /opt/soft/redis/redis_server/conf/redis.conf:/etc/redis/redis.conf
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command: redis-server
    networks:
      default:
  • Slave node 1:
version: "3.1"
services:
  redis_slave1:
    container_name: redis_slave1
    restart: always
    image: redis:7.4.0
    ports:
      - 26379:6379
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_server/data:/data
      - /opt/soft/redis/redis_server/conf/redis.conf:/etc/redis/redis.conf
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command:  redis-server --slaveof 192.168.1.151 26379 --slave-announce-ip 192.168.1.152 --slave-announce-port 26379
    networks:
      default:
  • Slave node 2:
version: "3.1"
services:
  redis_slave2:
    container_name: redis_slave2
    restart: always
    image: redis:7.4.0
    ports:
      - 26379:6379
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_server/data:/data
      - /opt/soft/redis/redis_server/conf/redis.conf:/etc/redis/redis.conf
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command:  redis-server --slaveof 192.168.1.151 26379 --slave-announce-ip 192.168.1.153 --slave-announce-port 26379
    networks:
      default:
  4. Create the configuration file on each node (the configuration is identical everywhere):
vim /opt/soft/redis/redis_server/conf/redis.conf

redis.conf:

# redis connection password
requirepass 123456

# Password used for replication authentication. When the master has a connection password (requirepass) set,
# replicas must set masterauth to the same password to connect to the master and synchronize data.
masterauth 123456

# enable AOF persistence
appendonly yes

# AOF fsync policy
# always: fsync after every write command; very, very slow but safest.
# everysec: fsync once per second; fast, but up to one second of data may be lost.
# no: let the OS decide when to flush; fastest, but least safe.
appendfsync everysec

# As writes accumulate the AOF file keeps growing, and at some point it needs to be rewritten.
# If set to yes, performance during an AOF rewrite is better but data may be lost during the rewrite.
# If set to no, no data is lost but performance during the rewrite may be lower. The default is no.
no-appendfsync-on-rewrite no

# AOF rewrite
# Growth percentage that triggers a rewrite, default 100,
# i.e. rewrite when the AOF file has doubled in size since the last rewrite.
# A value of 0 disables automatic rewrites.
auto-aof-rewrite-percentage 100

# AOF rewrite
# Minimum AOF file size for a rewrite to be triggered, default 64MB.
auto-aof-rewrite-min-size 64mb

# auto-aof-rewrite-percentage and auto-aof-rewrite-min-size are combined with AND:
# a rewrite is triggered only when both conditions are met.

# IP addresses the Redis server binds to; default 127.0.0.1, the local loopback address,
# which means only local clients can connect and remote connections are impossible.
# If bind is left empty, connections are accepted on all available network interfaces.
# bind 172.0.0.4 127.0.0.1
# bind 127.0.0.1 -::1

# Protected mode is yes by default, restricting access to localhost; set it to no to allow other hosts to connect.
protected-mode no

# listen for key expiration events
notify-keyspace-events Ex

# run the Redis server inside the container with the configuration file /etc/redis/redis.conf
redis-server /etc/redis/redis.conf
  5. Open the port (26379) on each node
# open port 26379
firewall-cmd --zone=public --add-port=26379/tcp --permanent
# reload the firewall
firewall-cmd --reload
# list open ports
firewall-cmd --list-port
  6. Create the redis-server container on each node (create the master first, then the slaves)
docker-compose up -d
  7. Test
    Write data on the master (192.168.1.151) and check that it appears on the slaves, for example as sketched below.
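A minimal check with redis-cli, assuming the password 123456 from redis.conf; the key name repl_test is just an example:

# on the master: write a test key
redis-cli -h 192.168.1.151 -p 26379 -a 123456 set repl_test "hello"

# on a slave: the key should be readable, while writes are rejected because replicas are read-only by default
redis-cli -h 192.168.1.152 -p 26379 -a 123456 get repl_test
redis-cli -h 192.168.1.152 -p 26379 -a 123456 set repl_test "x"   # expected: (error) READONLY ...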

2.2 Redis Sentinel

Sentinel nodes are special Redis nodes: they store no data and are only used for monitoring.
Note: if you see errors like the ones below, it may be a network issue; try increasing the sentinel down-after-milliseconds mymaster value in the sentinel configuration, e.g. to 60000.

1:X 20 Nov 2024 17:54:25.029 # +sdown sentinel 0a654e824df23df32c09d6830d7ac9ae3fa55bb6 192.168.1.152 36379 @ mymaster 192.168.1.151 26379
1:X 20 Nov 2024 17:54:26.637 # +sdown sentinel c620886db836f2515e4ede62b0f3a99c758dc045 192.168.1.153 36379 @ mymaster 192.168.1.151 26379

OS version   IP              Redis version   Node type   Port
7.9.2009     192.168.1.151   7.4.0           sentinel1   36379
7.9.2009     192.168.1.152   7.4.0           sentinel2   36379
7.9.2009     192.168.1.153   7.4.0           sentinel3   36379
  1. On each node, create the mount directories
mkdir -p /opt/soft/redis/redis_sentinel/{conf,data,log}
chmod 777 /opt/soft/redis/redis_sentinel/data
chmod 777 /opt/soft/redis/redis_sentinel/conf
chmod 777 /opt/soft/redis/redis_sentinel/log
  2. Pull the image on each node
docker pull redis:7.4.0
  3. docker-compose.yaml on each node (only the service and container names differ per node, e.g. redis_sentinel2 and redis_sentinel3 on nodes 2 and 3)
    Create docker-compose.yaml under /opt/soft/redis/redis_sentinel on each node
cd /opt/soft/redis/redis_sentinel

vim docker-compose.yaml

docker-compose.yaml contents for each node:

version: "3.1"
services:
  redis_sentinel1:
    container_name: redis_sentinel1
    restart: always
    image: redis:7.4.0
    ports:
      - 36379:36379
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_sentinel/data:/data
      - /opt/soft/redis/redis_sentinel/conf:/usr/local/etc/redis
      - /opt/soft/redis/redis_sentinel/log:/var/log
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    networks:
      - sentinel_network
    
networks:
  sentinel_network:
    driver: bridge
  4. Create the configuration file on each node:
vim /opt/soft/redis/redis_sentinel/conf/sentinel.conf

sentinel.conf:
Note: announce-ip differs per node and must match the IP of the node it runs on.

# sentinel port (mind the container-to-host port mapping)
port 36379
# whether to run as a daemon
daemonize no
# PID file
pidfile /var/run/redis-sentinel.pid
# log file name and path
logfile /var/log/redis-sentinel.log
# data directory
dir /data
# monitor the master named mymaster at 192.168.1.151:26379; the trailing 2 is the quorum,
# i.e. how many sentinels must agree the master is down before a failover is started
sentinel monitor mymaster 192.168.1.151 26379 2
# if mymaster does not answer pings within this many milliseconds, this sentinel marks it as down (default 30s).
# Because the sentinels run on different servers, a larger value is advisable.
sentinel down-after-milliseconds mymaster 10000
# after a new master is elected, how many slaves may resynchronize from it at the same time; 1 means one at a time
sentinel parallel-syncs mymaster 1
# failover timeout
sentinel failover-timeout mymaster 15000
# disallow changing notification-script and client-reconfig-script via SENTINEL SET
sentinel deny-scripts-reconfig yes
# password of the mymaster service
sentinel auth-pass mymaster 123456

# IP this sentinel announces to others (must match the IP of the current server)
sentinel announce-ip 192.168.1.151/152/153
# port this sentinel announces to others (must match port above)
sentinel announce-port 36379
  5. Open the port (36379) on each node
# open port 36379
firewall-cmd --zone=public --add-port=36379/tcp --permanent
# reload the firewall
firewall-cmd --reload
# list open ports
firewall-cmd --list-port
  6. Create the redis-sentinel container on each node
docker-compose up -d
  7. Test
    Stop the master (192.168.1.151) and check whether the sentinels detect the failure and fail over.
    Check the sentinel logs:
    [Figure: sentinel log output during the failover]
    The master changed from 192.168.1.151 to 192.168.1.153. The state can also be queried directly, as sketched below.
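A quick way to ask the sentinels who the current master is (no password is configured on the sentinels themselves in this setup):

# ask any sentinel for the address of the current master of "mymaster"
redis-cli -h 192.168.1.151 -p 36379 sentinel get-master-addr-by-name mymaster

# full state the sentinel keeps about the monitored master
redis-cli -h 192.168.1.151 -p 36379 sentinel master mymaster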

After the election, sentinel.conf is updated:
[Figure: sentinel.conf after the election]

2.3 Redis Cluster

A Redis Cluster requires 3 or more master nodes, otherwise cluster creation fails; and once fewer than half of the master nodes are alive, the whole cluster stops serving requests. So we need 3 masters, each with at least one replica, which means 6 nodes in total: 3 masters and 3 replicas.

Message shown when trying to create the cluster with too few nodes:

Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
*** ERROR: Invalid configuration for cluster creation.
*** Redis Cluster requires at least 3 master nodes.
*** This is not possible with 3 nodes and 2 replicas per node.
*** At least 9 nodes are required.

The port numbers must also not be too high:

1:M 21 Nov 2024 11:18:29.904 # Redis port number too high. Cluster communication port is 10,000 port numbers higher than your Redis port. Your Redis port number must be 55535 or less.

Server configuration: two Redis Cluster nodes per server (ports 16379 and 26379, cluster bus ports 16380 and 26380) on 192.168.1.151, 192.168.1.152 and 192.168.1.153.

  1. On each node, create the mount directories and set permissions
mkdir -p /opt/soft/redis/redis_cluster/node1/{conf,data,log}
chmod 777 /opt/soft/redis/redis_cluster/node1/data
chmod 777 /opt/soft/redis/redis_cluster/node1/conf
chmod 777 /opt/soft/redis/redis_cluster/node1/log

mkdir -p /opt/soft/redis/redis_cluster/node2/{conf,data,log}
chmod 777 /opt/soft/redis/redis_cluster/node2/data
chmod 777 /opt/soft/redis/redis_cluster/node2/conf
chmod 777 /opt/soft/redis/redis_cluster/node2/log
  2. Pull the image on each node
docker pull redis:7.4.0
  3. docker-compose.yaml on each node
    Create docker-compose.yaml under /opt/soft/redis/redis_cluster on each node
cd /opt/soft/redis/redis_cluster

vim docker-compose.yaml

docker-compose.yaml contents for each node (only the service and container names differ per node, e.g. redis_cluster_node2 and redis_cluster_node3 on nodes 2 and 3):

version: "3.1"
services:
  redis_cluster_node1:
    container_name: redis_cluster_node1
    restart: always
    image: redis:7.4.0
    ports:
      - 16379:16379
      - 16380:16380
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_cluster/node1/data:/data
      - /opt/soft/redis/redis_cluster/node1/conf:/etc/redis
      - /opt/soft/redis/redis_cluster/node1/log:/var/log
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command: redis-server /etc/redis/redis.conf
    networks:
      - redis_cluster
 
  redis_cluster_node2:
    container_name: redis_cluster_node2
    restart: always
    image: redis:7.4.0
    ports:
      - 26379:26379
      - 26380:26380
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_cluster/node2/data:/data
      - /opt/soft/redis/redis_cluster/node2/conf:/etc/redis
      - /opt/soft/redis/redis_cluster/node2/log:/var/log
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command: redis-server /etc/redis/redis.conf
    networks:
      - redis_cluster

networks:
  redis_cluster:
    driver: bridge
  4. Create the configuration files on each node
    node1:
vim /opt/soft/redis/redis_cluster/node1/conf/redis.conf

redis.conf
Note: cluster-announce-ip differs per node and must match the node the instance runs on.

# port of this cluster node (mind the container-to-host port mapping)
port 16379
# redis connection password
requirepass 123456
# password used for replication authentication; when the master has requirepass set,
# replicas must set masterauth to the same password to connect and synchronize data
masterauth 123456
# log file name and path
logfile /var/log/redis_cluster_node.log
# enable cluster mode
cluster-enabled yes
# the cluster bus port defaults to the service port + 10000 (e.g. 6379 + 10000 = 16379); keep it equal to cluster-announce-bus-port
cluster-port 16380
# cluster node state file
cluster-config-file nodes.conf
# cluster node timeout (better to use a generous value)
cluster-node-timeout 30000
# IP of this node (use the host's IP)
cluster-announce-ip 192.168.1.151
# port of this node (must match port above)
cluster-announce-port 16379
# cluster bus port
cluster-announce-bus-port 16380

node2:

vim /opt/soft/redis/redis_cluster/node2/conf/redis.conf

redis.conf
Note: cluster-announce-ip differs per server and must match the server the instance runs on.

# port of this cluster node (mind the container-to-host port mapping)
port 26379
# redis connection password
requirepass 123456
# password used for replication authentication; when the master has requirepass set,
# replicas must set masterauth to the same password to connect and synchronize data
masterauth 123456
# enable cluster mode
cluster-enabled yes
# the cluster bus port defaults to the service port + 10000; keep it equal to cluster-announce-bus-port
cluster-port 26380
# cluster node state file
cluster-config-file nodes.conf
# cluster node timeout (better to use a generous value)
cluster-node-timeout 30000
# IP of this node (use the host's IP)
cluster-announce-ip 192.168.1.151
# port of this node (must match port above)
cluster-announce-port 26379
# cluster bus port
cluster-announce-bus-port 26380

  5. Open the ports (16379, 16380, 26379, 26380)
# open ports 16379, 16380, 26379 and 26380
firewall-cmd --zone=public --add-port=16379/tcp --permanent
firewall-cmd --zone=public --add-port=16380/tcp --permanent
firewall-cmd --zone=public --add-port=26379/tcp --permanent
firewall-cmd --zone=public --add-port=26380/tcp --permanent
# reload the firewall
firewall-cmd --reload
# list open ports
firewall-cmd --list-port
  6. Create the redis-cluster node containers
docker-compose up -d

At this point the containers are still independent and do not yet form a cluster.

  7. Build the cluster
    Building the cluster takes three steps:
  • the nodes establish connections with each other via a handshake;
  • the hash slots are assigned;
  • master/replica relationships are assigned.
    There are two ways to build the cluster:
  • automatic creation;
  • manual creation. For simplicity, this article uses automatic creation.

Create the cluster:
Pick any node and enter its container.

docker exec -it redis_cluster_node1 /bin/sh

Run the following command to create the cluster (with only 6 nodes we get 3 masters and 3 replicas, one replica per master)

redis-cli -a 123456 --cluster create 192.168.1.151:16379 192.168.1.151:26379 192.168.1.152:16379 192.168.1.152:26379 192.168.1.153:16379 192.168.1.153:26379 --cluster-replicas 1

Cluster creation output:
[Figure: redis-cli --cluster create output]


  8. Inspect the cluster
    Log in to the cluster from the same node:
redis-cli -c -a 123456 -h 192.168.1.151 -p 16379
  • Cluster state:
# cluster state
cluster info
192.168.1.151:16379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:383
cluster_stats_messages_pong_sent:385
cluster_stats_messages_sent:768
cluster_stats_messages_ping_received:380
cluster_stats_messages_pong_received:383
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:768
total_cluster_links_buffer_limit_exceeded:0

  • Cluster nodes:
# list the cluster nodes
cluster nodes
192.168.1.151:16379> cluster nodes
c79015f531577f01c6ac896bdc02b3e129a37576 192.168.1.151:16379@16380 myself,master - 0 0 1 connected 0-5460
02467576a212ee0dc777a61446140ef862b3904d 192.168.1.153:16379@16380 master - 0 1732168100000 5 connected 10923-16383
dfc77d0f115517957b6c752664b7eb8de6fbbc7d 192.168.1.153:26379@26380 slave e26c87f64ad0aa8211a5a933fb4a7f02adcb9fa3 0 1732168101249 3 connected
a94284a82c2fb4bc0a6d285ac408a021ba43b500 192.168.1.152:26379@26380 slave c79015f531577f01c6ac896bdc02b3e129a37576 0 1732168101000 1 connected
e26c87f64ad0aa8211a5a933fb4a7f02adcb9fa3 192.168.1.152:16379@16380 master - 0 1732168102268 3 connected 5461-10922
994eb9ca9f4871ada78a5491d2f1ccf7486c01bb 192.168.1.151:26379@26380 slave 02467576a212ee0dc777a61446140ef862b3904d 0 1732168099194 5 connected

  • Databases on each cluster node:
[root@node01 log]# docker exec -it redis_cluster_node1 /bin/sh
# redis-cli -c -a 123456 -h 192.168.1.151 -p 16379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.1.151:16379> select 1
(error) ERR SELECT is not allowed in cluster mode
192.168.1.151:16379> select 2
(error) ERR SELECT is not allowed in cluster mode

As you can see, only database 0 is usable on each cluster node; SELECTing any other database index is not allowed, and the data set is the same no matter which node you connect through. This is also easy to see in Redis Desktop Manager.
[Figure: cluster view in Redis Desktop Manager]
At this point the Redis cluster is up and running. A short usage sketch follows.
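A minimal usage sketch against this cluster (the key names are just examples); the -c flag makes redis-cli follow MOVED redirects to the master that owns the key's slot:

docker exec -it redis_cluster_node1 redis-cli -c -a 123456 -h 192.168.1.151 -p 16379
# inside the prompt:
set user:1 "alice"        # may be redirected to another node before the OK
get user:1
cluster keyslot user:1    # shows which of the 16384 hash slots the key maps to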


2.4 Differences between replication, Sentinel, and Cluster

  1. Master-slave replication: read/write splitting and backups; one master can have multiple slaves.
  2. Sentinel: monitoring and automatic failover; when the sentinels detect that the master is down, they elect a new master from the slaves.
  3. Cluster: solves the capacity limit of a single Redis instance by distributing data across multiple machines according to fixed rules; memory and QPS are no longer bounded by a single machine, and you benefit from the high scalability of a distributed cluster.

For more demanding deployments, Cluster combined with the Sentinel-style failover mechanism is recommended for Redis high availability.

3 MongoDB

This section uses version 8.0.3, the latest version at the time of writing (November 21, 2024). MongoDB 8.0 differs considerably from earlier versions; everything below assumes 8.0 or later. MongoDB documentation: https://www.mongodb.com/zh-cn/docs/manual

3.1 MongoDB master-slave cluster (not recommended)

Why it is not recommended: no automatic failover, no data-consistency guarantees, and no flexible read load balancing.
Server configuration:

OS version   IP              MongoDB version   Node type   Port
7.9.2009     192.168.1.151   8.0.3             master      27017
7.9.2009     192.168.1.152   8.0.3             slave 1     27017
7.9.2009     192.168.1.153   8.0.3             slave 2     27017
  1. Create the mount directories and set permissions
mkdir -p /opt/soft/mongo/{conf,data,log}
chmod 777 /opt/soft/mongo/data
chmod 777 /opt/soft/mongo/conf
chmod 777 /opt/soft/mongo/log
  2. Pull the image
docker pull mongo:8.0.3
  3. docker-compose.yaml
    Create docker-compose.yaml under /opt/soft/mongo on each node
cd /opt/soft/mongo

vim docker-compose.yaml

docker-compose.yaml contents for each node (only the service and container names differ per node):

version: '3.1'
services:
  mongo_master:
    container_name: mongo_master
    restart: always
    image: mongo:8.0.3
    ports:
    - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mongo/data:/data/db
      - /opt/soft/mongo/conf:/data/configdb
      - /opt/soft/mongo/log:/data/log
      - /opt/soft/mongo/conf/keyfile.key:/data/configdb/keyfile.key
    command: --config /data/configdb/mongod.conf --keyFile /data/configdb/keyfile.key # configuration file
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    networks:
      default:
  4. Create the configuration file:
vim /opt/soft/mongo/conf/mongod.conf

mongod.conf:

systemLog:
  destination: file
  path: /data/log/mongod.log # log path
  logAppend: true

storage:
  dbPath: /data/db

net:
  bindIp: 0.0.0.0
  port: 27017 # port

replication:
  replSetName: rs0 # replica set name

# processManagement: # setting this makes the container exit immediately after docker exec -it mongodb1 bash
#   fork: true
  5. Create the keyfile for cluster authentication with openssl
    When building a replica set with authentication enabled, the keyFile parameter must be provided; the nodes authenticate to each other using this keyfile, otherwise startup fails. Looking at the Docker logs (docker logs <container id>) you would see this error at startup:
BadValue: security.keyFile is required when authorization is enabled with replica sets

Create it in the /opt/soft/mongo/conf directory:

# create the key
openssl rand -base64 756 > keyfile.key

# restrict the key's permissions
chmod 600 keyfile.key
# change the key's owner/group
chown 999:999 keyfile.key

Note: all nodes must use the same keyfile; create it once and copy it to the configuration directory of every node.
  6. Open the port (27017)

# open port 27017
firewall-cmd --zone=public --add-port=27017/tcp --permanent
# reload the firewall
firewall-cmd --reload
# list open ports
firewall-cmd --list-port
  7. Create the MongoDB containers
docker-compose up -d
  8. Initialize the replica set
  • 1 Enter the container on any node
docker exec -it mongo_master bash
  • 2 Find where the MongoDB shell binary lives (inside the container)
whereis mongosh
root@cf7b255556e8:/# whereis mongosh
mongosh: /usr/bin/mongosh
  • 3 Start the MongoDB shell
/usr/bin/mongosh
  • 4 Initialize the replica set from the shell
# authenticate
use admin;
db.auth("root", "123456");

# initialize the replica set
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "192.168.1.151:27017" },
    { _id: 1, host: "192.168.1.152:27017" },
    { _id: 2, host: "192.168.1.153:27017" }
  ]
});

# check the replica set status
rs.status();

# switch to the application database
use boatol;
# add a user with read/write access to that database
db.createUser(
  {
    user: "admin",
    pwd: "123456",
    roles: [ { role: "readWrite", db: "boatol" } ]
  }
);
  • 5 Full session and output
# enter the container on any node
[root@node01 mongo]# docker exec -it mongo_master bash

# find the mongosh binary
root@cf7b255556e8:/# whereis mongosh
mongosh: /usr/bin/mongosh

# start the MongoDB shell
root@cf7b255556e8:/# /usr/bin/mongosh
Current Mongosh Log ID: 673ef8760d928d1173c1c18b
Connecting to:          mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.3
Using MongoDB:          8.0.3
Using Mongosh:          2.3.3

For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/


To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

# authenticate
test> use admin;
switched to db admin
admin> db.auth("root", "yQcsaZBerNyccT1C");
{ ok: 1 }

# initialize the replica set
admin> rs.initiate({
...   _id: "rs0",
...   members: [
...     { _id: 0, host: "192.168.1.151:27017" },
...     { _id: 1, host: "192.168.1.152:27017" },
...     { _id: 2, host: "192.168.1.153:27017" }
...   ]
... });
{ ok: 1 }

# check the replica set status
rs0 [direct: other] admin> rs.status();
{
  set: 'rs0',
  date: ISODate('2024-11-21T09:09:41.060Z'),
  myState: 2,
  term: Long('0'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
    lastCommittedWallTime: ISODate('2024-11-21T09:09:34.702Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
    appliedOpTime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
    durableOpTime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
    writtenOpTime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
    lastAppliedWallTime: ISODate('2024-11-21T09:09:34.702Z'),
    lastDurableWallTime: ISODate('2024-11-21T09:09:34.702Z'),
    lastWrittenWallTime: ISODate('2024-11-21T09:09:34.702Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1732180174, i: 1 }),
  members: [
    {
      _id: 0,
      name: '192.168.1.151:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 408,
      optime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeDate: ISODate('2024-11-21T09:09:34.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeWrittenDate: ISODate('2024-11-21T09:09:34.000Z'),
      lastAppliedWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastDurableWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastWrittenWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 0,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: '192.168.1.152:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 6,
      optime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeWritten: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeDate: ISODate('2024-11-21T09:09:34.000Z'),
      optimeDurableDate: ISODate('2024-11-21T09:09:34.000Z'),
      optimeWrittenDate: ISODate('2024-11-21T09:09:34.000Z'),
      lastAppliedWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastDurableWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastWrittenWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastHeartbeat: ISODate('2024-11-21T09:09:40.918Z'),
      lastHeartbeatRecv: ISODate('2024-11-21T09:09:40.643Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 0
    },
    {
      _id: 2,
      name: '192.168.1.153:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 6,
      optime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeWritten: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeDate: ISODate('2024-11-21T09:09:34.000Z'),
      optimeDurableDate: ISODate('2024-11-21T09:09:34.000Z'),
      optimeWrittenDate: ISODate('2024-11-21T09:09:34.000Z'),
      lastAppliedWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastDurableWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastWrittenWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastHeartbeat: ISODate('2024-11-21T09:09:40.923Z'),
      lastHeartbeatRecv: ISODate('2024-11-21T09:09:40.679Z'),
      pingMs: Long('1'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 0
    }
  ],
  ok: 1
}

# switch to the application database
rs0 [direct: other] admin> use boatol;
switched to db boatol

# add a user with read/write access to that database
rs0 [direct: secondary] boatol> db.createUser(
...   {
...     user: "admin",
...     pwd: "123456",
...     roles: [ { role: "readWrite", db: "boatol" } ]
...   }
... );
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732180196, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('nB4GEHSgQJm2DmmGDS3BUXjvLuI=', 0),
      keyId: Long('7439657245354229766')
    }
  },
  operationTime: Timestamp({ t: 1732180196, i: 1 })
}
rs0 [direct: primary] boatol>
  9. Test
    Create a collection on the primary (192.168.1.151) and check on 192.168.1.152 and 192.168.1.153 that it shows up (see the sketch after this list).
    [Figure: collection replicated to the other nodes]
  10. Summary (problems with master-slave replication)
  • Single point of failure: there is no automatic failover when the master fails.
  • Limited data volume: slaves are normally not writable, which limits how the data set can grow.
  • Lag and synchronization issues: slaves can fall behind the master, so reads may see stale data.
  • Low resource utilization: extra resources are needed just to run the slaves.

Advantages of a replica set

  • Automatic failover.
  • Read/write splitting, giving better read scalability.
  • Members can be configured as arbiters, which only vote on which node becomes primary.
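A small sketch of the read check with mongosh; directConnection and the collection name demo are illustrative assumptions, not part of the original setup:

# connect directly to one of the secondaries (e.g. 192.168.1.152) as root
mongosh "mongodb://root:123456@192.168.1.152:27017/admin?directConnection=true"

# inside the shell: allow reads on this secondary, then look at the data written on the primary
db.getMongo().setReadPref("secondaryPreferred")
use boatol
db.demo.find()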

3.2 MongoDB replica set (Replica Set) cluster

Composition: a replica set has no fixed primary; the cluster elects one, and when it goes down a new primary is elected from the remaining members. There is always one primary plus one or more secondary (backup) nodes, and failover happens automatically, which makes replica sets very practical in production.

Features:

  1. The primary handles read and write operations.
  2. Secondaries replicate data from the primary to stay in sync, and can serve reads.
  3. When the primary fails, an election is held automatically and a new primary is chosen from the secondaries, keeping the system highly available.

Advantages:

  1. Data redundancy, protecting against data loss.
  2. Read/write splitting, reducing the read load on the primary.

3.2.1 Building the cluster

Official documentation: https://www.mongodb.com/zh-cn/docs/rapid/administration/replica-set-deployment

Server configuration:

OS version   IP              MongoDB version   Node type     Port
7.9.2009     192.168.1.151   8.0.3             primary       27017
7.9.2009     192.168.1.151   8.0.3             secondary 1   37017
7.9.2009     192.168.1.151   8.0.3             secondary 2   47017
7.9.2009     192.168.1.151   8.0.3             arbiter       17017

All of the following is done on the 192.168.1.151 server.

  1. Create the mount directories and set permissions
# primary
mkdir -p /opt/soft/mongo/mongo_master/{conf,data,log}
chmod 777 /opt/soft/mongo/mongo_master/data
chmod 777 /opt/soft/mongo/mongo_master/conf
chmod 777 /opt/soft/mongo/mongo_master/log

# secondary 1
mkdir -p /opt/soft/mongo/mongo_slave1/{conf,data,log}
chmod 777 /opt/soft/mongo/mongo_slave1/data
chmod 777 /opt/soft/mongo/mongo_slave1/conf
chmod 777 /opt/soft/mongo/mongo_slave1/log

# secondary 2
mkdir -p /opt/soft/mongo/mongo_slave2/{conf,data,log}
chmod 777 /opt/soft/mongo/mongo_slave2/data
chmod 777 /opt/soft/mongo/mongo_slave2/conf
chmod 777 /opt/soft/mongo/mongo_slave2/log

# arbiter
mkdir -p /opt/soft/mongo/mongo_arbiter/{conf,data,log}
chmod 777 /opt/soft/mongo/mongo_arbiter/data
chmod 777 /opt/soft/mongo/mongo_arbiter/conf
chmod 777 /opt/soft/mongo/mongo_arbiter/log
  2. Pull the image
docker pull mongo:8.0.3
  3. docker-compose.yaml
    Create docker-compose.yaml under /opt/soft/mongo
cd /opt/soft/mongo

vim docker-compose.yaml

docker-compose.yaml contents (note the time zone):

version: '3.1'
services:
  mongo_master:
    container_name: mongo_master
    restart: always
    image: mongo:8.0.3
    ports:
    - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mongo/mongo_master/data:/data/db
      - /opt/soft/mongo/mongo_master/conf:/data/configdb
      - /opt/soft/mongo/mongo_master/log:/data/log
      - /opt/soft/mongo/keyfile.key:/data/configdb/keyfile.key
    command: --config /data/configdb/mongod.conf
    environment:
      TZ: Asia/Shanghai  # set the time zone to Asia/Shanghai
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    networks:
      - mongo
      
  mongo_slave1:
    container_name: mongo_slave1
    restart: always
    image: mongo:8.0.3
    ports:
    - 37017:27017
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mongo/mongo_slave1/data:/data/db
      - /opt/soft/mongo/mongo_slave1/conf:/data/configdb
      - /opt/soft/mongo/mongo_slave1/log:/data/log
      - /opt/soft/mongo/keyfile.key:/data/configdb/keyfile.key
    command: --config /data/configdb/mongod.conf
    environment:
      TZ: Asia/Shanghai  # set the time zone to Asia/Shanghai
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    networks:
      - mongo
      
  mongo_slave2:
    container_name: mongo_slave2
    restart: always
    image: mongo:8.0.3
    ports:
    - 47017:27017
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mongo/mongo_slave2/data:/data/db
      - /opt/soft/mongo/mongo_slave2/conf:/data/configdb
      - /opt/soft/mongo/mongo_slave2/log:/data/log
      - /opt/soft/mongo/keyfile.key:/data/configdb/keyfile.key
    command: --config /data/configdb/mongod.conf
    environment:
      TZ: Asia/Shanghai  # set the time zone to Asia/Shanghai
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    networks:
      - mongo
      
  mongo_arbiter:
    container_name: mongo_arbiter
    restart: always
    image: mongo:8.0.3
    ports:
    - 17017:27017
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mongo/mongo_arbiter/data:/data/db
      - /opt/soft/mongo/mongo_arbiter/conf:/data/configdb
      - /opt/soft/mongo/mongo_arbiter/log:/data/log
      - /opt/soft/mongo/keyfile.key:/data/configdb/keyfile.key
    command: --config /data/configdb/mongod.conf
    environment:
      TZ: Asia/Shanghai  # set the time zone to Asia/Shanghai
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    networks:
      - mongo

networks:
  mongo:
    driver: bridge
  4. Create the keyfile for cluster authentication with openssl
    With authentication enabled, the keyFile parameter must be provided; the nodes authenticate to each other using this keyfile, otherwise startup fails. Looking at the Docker logs (docker logs <container id>) you would see this error at startup:
BadValue: security.keyFile is required when authorization is enabled with replica sets

Work in the /opt/soft/mongo directory:

# create the key
openssl rand -base64 756 > keyfile.key
# restrict the key's permissions
chmod 600 keyfile.key
# change the key's owner/group
chown 999:999 keyfile.key

Note: every node must use the same keyfile.
  5. Create the configuration file for each node (all nodes, including the arbiter, use the same configuration):

vim /opt/soft/mongo/mongo_master/conf/mongod.conf

mongod.conf (each node uses the same port inside its container)

# security / authentication
security:
  keyFile: /data/configdb/keyfile.key
  authorization: enabled

# system log
systemLog:
  destination: file
  path: /data/log/mongod.log # log path
  logAppend: true

# data storage location
storage:
  dbPath: /data/db

# network
net:
  bindIp: 0.0.0.0
  port: 27017 # port

# replica set name
replication:
  replSetName: rs # name of the replica set

# processManagement: # setting this makes the container exit immediately after docker exec -it mongodb1 bash
#   fork: true

Copy it to the configuration directories of secondary 1, secondary 2 and the arbiter:

cp /opt/soft/mongo/mongo_master/conf/mongod.conf /opt/soft/mongo/mongo_slave1/conf/
cp /opt/soft/mongo/mongo_master/conf/mongod.conf /opt/soft/mongo/mongo_slave2/conf/
cp /opt/soft/mongo/mongo_master/conf/mongod.conf /opt/soft/mongo/mongo_arbiter/conf/
  6. Open the ports (17017, 27017, 37017, 47017)
# open ports 17017, 27017, 37017 and 47017
firewall-cmd --zone=public --add-port=17017/tcp --permanent
firewall-cmd --zone=public --add-port=27017/tcp --permanent
firewall-cmd --zone=public --add-port=37017/tcp --permanent
firewall-cmd --zone=public --add-port=47017/tcp --permanent
# reload the firewall
firewall-cmd --reload
# list open ports
firewall-cmd --list-port
  7. Create the mongo containers
docker-compose up -d
  8. Enter the primary's container and initialize the replica set
# enter the container
docker exec -it mongo_master bash

# (do not run) start/replace a config server
# ./usr/bin/mongod --configsvr --replSet configReplSet --bind_ip 192.168.1.151:27017;

# open the database shell to initialize the replica set; the default port is 27017
./usr/bin/mongosh --port 27017
  9. Initialize the replica set

    Note: for MongoDB deployed with Docker, do not initialize the replica set on the primary with the default configuration, i.e. plain rs.initiate();. With the default configuration, the member's address in the replica set's members list ends up as host: '57ea58dc33df:27017', i.e. the container ID plus port rather than a hostname or IP plus port.

Clients connecting to the replica set then see that member advertised with a container ID they cannot resolve; unless the client is on the same Docker network, it cannot determine the member's role (primary, secondary or arbiter) and cannot use the replica set.

For example, a Golang client connecting to the replica set reports:

server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 192.168.1.151:37017, Type: RSSecondary, Average RTT: 1208216 }, { Addr: 192.168.1.151:47017, Type: RSSecondary, Average RTT: 1641358 }, { Addr: 57ea58dc33df:27017, Type: Unknown, Last error: dial tcp: lookup 57ea58dc33df: no such host }, { Addr: 192.168.1.151:17017, Type: RSArbiter, Average RTT: 2135596 }, ] }

The recommended way is:

rs.initiate({
  _id: "rs",
  members: [
    { _id: 0, host: "192.168.1.151:27017" }
  ]
});

The concrete steps:

# authenticate
use admin;
db.auth("root", "123456");

# (do not use) initializing the replica set with the default configuration
# rs.initiate();

# _id: "rs" is the primary key under which the replica set configuration is stored; by default it is the replica set's name.
# Prefer hostnames where available, so the configuration does not have to change when IP addresses change. Starting with MongoDB 5.0, nodes configured only with an IP address may fail startup validation and not start.
rs.initiate({
  _id: "rs",
  members: [
    { _id: 0, host: "192.168.1.151:27017" }
  ]
});

# (do not run yet) add the secondaries to the replica set on the primary
# rs.add({ host: "192.168.1.151:37017"});
# rs.add({ host: "192.168.1.151:47017"});

# (do not run yet) check the state of the replica set members
# rs.status();

# (do not run) remove a node that is being replaced from the replica set
#rs.remove("192.168.1.151:27017");
#rs.remove("192.168.1.151:37017");
#rs.remove("192.168.1.151:47017");

# adding a user, example
# db.createUser(
#  {
#    user: "myTester",
#    pwd:  passwordPrompt(),   // or cleartext password
#    roles: [ { role: "readWrite", db: "test" },
#             { role: "read", db: "reporting" } ]
#  }
#);
# passwordPrompt() prompts for the password. The password can also be given as a plain string, but passwordPrompt() is recommended so the password is neither shown on screen nor leaked into the shell history.

# (do not run) what was actually used
# db.createUser(
#   {
#     user: "admin",
#     pwd:  passwordPrompt(),   // or cleartext password
#     roles: [ { role: "root", db: "admin" },
#              { role: "read", db: "test" } ]
#   }
# );
  10. Check the replica set configuration
# authenticate first (if not already logged in)
use admin;
db.auth("root", "123456");

rs.config();

# rs.conf(configuration);
# rs.config() is an alias of that method.
# configuration: optional; if omitted, the current configuration is used.

rs [direct: secondary] admin> rs.config();
{
  _id: 'rs',
  version: 1,
  term: 1,
  members: [
    {
      _id: 0,
      host: '192.168.1.151:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    }
  ],
  protocolVersion: Long('1'),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId('674eb939cbdc59c4b7bbb026')
  }
}

Explanation:

  • _id: "rs" is the primary key under which the replica set configuration is stored; by default it is the replica set's name.
  • members: the array of replica set members; at this point there is only one: "host" : "192.168.1.151:27017". It is not an arbiter ("arbiterOnly" : false) and its priority (weight) is "priority" : 1.
  • settings: the replica set's parameter settings.
  11. Check the replica set status
# authenticate first (if not already logged in)
use admin;
db.auth("root", "123456");

rs.status();

rs [direct: primary] admin> rs.status();
{
  set: 'rs',
  date: ISODate('2024-12-03T07:56:03.247Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 1,
  writeMajorityCount: 1,
  votingMembersCount: 1,
  writableVotingMembersCount: 1,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-12-03T07:55:54.247Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-12-03T07:55:54.247Z'),
    lastDurableWallTime: ISODate('2024-12-03T07:55:54.247Z'),
    lastWrittenWallTime: ISODate('2024-12-03T07:55:54.247Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1733212524, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-12-03T07:54:34.147Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-12-03T07:54:34.218Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-12-03T07:54:34.274Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.1.151:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 326,
      optime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T07:55:54.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-12-03T07:55:54.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T07:55:54.247Z'),
      lastDurableWallTime: ISODate('2024-12-03T07:55:54.247Z'),
      lastWrittenWallTime: ISODate('2024-12-03T07:55:54.247Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1733212474, i: 2 }),
      electionDate: ISODate('2024-12-03T07:54:34.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733212554, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('xlhiAyf3P4i7iORTjEDG8iEIJhU=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733212554, i: 1 })
}

Explanation:

  • set: "rs" is the replica set name.
  • myState: 1 means the state is normal.
  • members: the array of replica set members; at this point there is only one: "name" : "192.168.1.151:27017", whose role is "stateStr" : "PRIMARY" and which is healthy: "health" : 1.
  12. Add the secondaries
    On the primary, add the other members to the replica set.
# authenticate first (if not already logged in)
use admin;
db.auth("root", "123456");

# add secondary 1 and secondary 2 to the replica set
rs.add({ host: "192.168.1.151:37017"});
rs.add({ host: "192.168.1.151:47017"});

# add secondary 1
rs [direct: primary] admin> rs.add({ host: "192.168.1.151:37017"});
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733212800, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('q6PXGMdN9gqxcY21GayEj3J1opw=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733212800, i: 1 })
}

# add secondary 2
rs [direct: primary] admin> rs.add({ host: "192.168.1.151:47017"});
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733212808, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('vIil05ua/cb3Rd3s/jXTCKfgnCY=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733212808, i: 1 })
}
  13. Check the replica set status again
# authenticate first (if not already logged in)
use admin;
db.auth("root", "123456");

rs.status();

rs [direct: primary] admin> rs.status();
{
  set: 'rs',
  date: ISODate('2024-12-03T08:00:59.731Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-12-03T08:00:54.269Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-12-03T08:00:54.269Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:00:54.269Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:00:54.269Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1733212824, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-12-03T07:54:34.147Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-12-03T07:54:34.218Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-12-03T07:54:34.274Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.1.151:27017',
      health: 1, # health
      state: 1,
      stateStr: 'PRIMARY', # node role
      uptime: 622,
      optime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:00:54.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-12-03T08:00:54.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1733212474, i: 2 }),
      electionDate: ISODate('2024-12-03T07:54:34.000Z'),
      configVersion: 5,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: '192.168.1.151:37017',
      health: 1, # health
      state: 2,
      stateStr: 'SECONDARY', # node role
      uptime: 59,
      optime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:00:54.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:00:54.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:00:54.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastHeartbeat: ISODate('2024-12-03T08:00:58.771Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:00:58.768Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.1.151:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 5,
      configTerm: 1
    },
    {
      _id: 2,
      name: '192.168.1.151:47017',
      health: 1, # health status
      state: 2,
      stateStr: 'SECONDARY', # node role
      uptime: 51,
      optime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:00:54.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:00:54.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:00:54.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastHeartbeat: ISODate('2024-12-03T08:00:58.771Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:00:59.243Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.1.151:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 5,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733212854, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('pDGjP0uTw7qsQYYwfVZF/3r71hs=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733212854, i: 1 })
}
  1. Add the arbiter node
    Adding the arbiter node may hang with no response, or return the following error:
"errmsg" : "Reconfig attempted to install a config that would change the implicit default write concern. Use the setDefaultRWConcern command to set a cluster-wide write concern and try the reconfig again."

The fix is to set a cluster-wide default write concern on the primary (w: 2 means a write is only acknowledged once two data-bearing members have it):

# Authenticate (if not already logged in)
use admin;
db.auth("root", "123456");

# Set the cluster-wide default write concern
db.adminCommand({
  "setDefaultRWConcern" : 1,
  "defaultWriteConcern" : {
    "w" : 2
  }
});
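
To confirm the setting took effect, the default can be read back on the primary (a minimal check that only queries the current value):

# Read back the cluster-wide default read/write concern
db.adminCommand({ "getDefaultRWConcern" : 1 });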

Then run the command that adds the arbiter node:

# Authenticate (if not already logged in)
use admin;
db.auth("root", "123456");

# Add the arbiter node
rs.addArb("192.168.1.151:17017");

Once it has been added, check the replica set status again: an extra ARBITER (arbiter) node now appears.

# Authenticate (if not already logged in)
use admin;
db.auth("root", "123456");

rs.status();

rs [direct: primary] admin> rs.status();
{
  set: 'rs',
  date: ISODate('2024-12-03T08:03:34.865Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 3,
  writeMajorityCount: 3,
  votingMembersCount: 4,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-12-03T08:03:25.246Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-12-03T08:03:25.246Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:03:25.246Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:03:25.246Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1733213005, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-12-03T07:54:34.147Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-12-03T07:54:34.218Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-12-03T07:54:34.274Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.1.151:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 777,
      optime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:03:25.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-12-03T08:03:25.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1733212474, i: 2 }),
      electionDate: ISODate('2024-12-03T07:54:34.000Z'),
      configVersion: 6,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: '192.168.1.151:37017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 214,
      optime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:03:25.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:03:25.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:03:25.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastHeartbeat: ISODate('2024-12-03T08:03:33.293Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:03:33.343Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.1.151:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    },
    {
      _id: 2,
      name: '192.168.1.151:47017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 206,
      optime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:03:25.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:03:25.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:03:25.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastHeartbeat: ISODate('2024-12-03T08:03:33.293Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:03:33.284Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.1.151:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    },
    {
      _id: 3,
      name: '192.168.1.151:17017',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 9,
      lastHeartbeat: ISODate('2024-12-03T08:03:33.556Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:03:33.556Z'),
      pingMs: Long('1'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733213005, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('yythHsQYHEnQfwDiPHqITajYyF0=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733213005, i: 1 })
}

3.2.2 Testing

  1. Secondary node failure test
    Stop the 37017 secondary: the primary and the arbiter begin reporting heartbeat failures for 37017. Because the primary is still alive, no election is triggered. Now write some data on the primary (see the sketch below).

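A minimal sketch of this step, assuming the container names from the setup above and a hypothetical testdb.user collection:

# Stop the 37017 secondary to simulate the failure
docker stop mongo_slave1

# Write a test document on the primary while the secondary is down
docker exec -it mongo_master bash
./usr/bin/mongosh --port 27017
use admin;
db.auth("root", "123456");
use testdb;
db.user.insertOne({ name: "failover-test" });
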
Enter the primary node's container and check the replica set status:

# Enter the container
docker exec -it mongo_master bash

# Open a mongosh shell (the node listens on the default port 27017 inside the container)
./usr/bin/mongosh --port 27017

# Authenticate (if not already logged in)
use admin;
db.auth("root", "123456");

# Check the replica set status
rs.status();

rs [direct: primary] admin> rs.status();
{
  set: 'rs',
  date: ISODate('2024-12-03T08:07:43.663Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 3,
  writeMajorityCount: 3,
  votingMembersCount: 4,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1733213244, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-12-03T08:07:24.335Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1733213244, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-12-03T08:07:34.339Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:07:34.339Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:07:34.339Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1733213244, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-12-03T07:54:34.147Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-12-03T07:54:34.218Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-12-03T07:54:34.274Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.1.151:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 1026,
      optime: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:07:34.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-12-03T08:07:34.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1733212474, i: 2 }),
      electionDate: ISODate('2024-12-03T07:54:34.000Z'),
      configVersion: 6,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: '192.168.1.151:37017',
      health: 0,
      state: 8,
      stateStr: '(not reachable/healthy)',
      uptime: 0,
      optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeWritten: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
      optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
      optimeWrittenDate: ISODate('1970-01-01T00:00:00.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:07:24.335Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:07:24.335Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:07:24.335Z'),
      lastHeartbeat: ISODate('2024-12-03T08:07:41.782Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:07:27.761Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: 'Error connecting to 192.168.1.151:37017 :: caused by :: onInvoke :: caused by :: Connection refused',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    },
    {
      _id: 2,
      name: '192.168.1.151:47017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 455,
      optime: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:07:34.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:07:34.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:07:34.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      lastHeartbeat: ISODate('2024-12-03T08:07:41.777Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:07:41.719Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.1.151:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    },
    {
      _id: 3,
      name: '192.168.1.151:17017',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 258,
      lastHeartbeat: ISODate('2024-12-03T08:07:42.086Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:07:42.095Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733213254, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('Bp2qfzf57TFa56i5x5a6j97llvQ=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733213254, i: 1 })
}

Restart the secondary: the operations performed on the primary while it was down are replayed to it automatically, keeping the data consistent. A quick way to verify this is sketched below.
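
A minimal verification sketch, assuming the same container names and the hypothetical testdb.user collection used above:

# Bring the 37017 secondary back online
docker start mongo_slave1

# Connect to the restarted secondary and look for the document written on the primary
docker exec -it mongo_slave1 bash
./usr/bin/mongosh --port 27017
use admin;
db.auth("root", "123456");
# Allow reads on this secondary for the current connection
db.getMongo().setReadPref("secondaryPreferred");
use testdb;
db.user.find();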

  1. Primary node failure test
    Stop the primary: the secondaries and the arbiter begin reporting heartbeat failures for 27017, and once the failures last longer than 10 seconds there is no primary left, so an election is started automatically (a sketch of this step follows the item).

    After stopping the primary, enter the slave1 container and check the replica set status: 27017 is shown as down and 47017 has been elected as the new primary, so data can only be written through the 47017 node.
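
To trigger this scenario, simply stop the primary's container (a minimal sketch, using the container name from the setup above):

# Stop the primary to simulate its failure; after about 10 s the remaining members hold an election
docker stop mongo_master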

# Enter the container
docker exec -it mongo_slave1 bash

# Open a mongosh shell (the node listens on the default port 27017 inside the container)
./usr/bin/mongosh --port 27017

# Authenticate (if not already logged in)
use admin;
db.auth("root", "123456");

# Check the replica set status
rs.status();

  members: [
    {
      _id: 0,
      name: '192.168.1.151:27017',
      health: 0,
      state: 8,
      stateStr: '(not reachable/healthy)',
      uptime: 0,
      optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeWritten: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
      optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
      optimeWrittenDate: ISODate('1970-01-01T00:00:00.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:10:14.356Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:10:14.356Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:10:14.356Z'),
      lastHeartbeat: ISODate('2024-12-03T08:10:31.467Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:10:26.673Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: 'Error connecting to 192.168.1.151:27017 :: caused by :: onInvoke :: caused by :: Connection refused',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    },
    {
      _id: 1,
      name: '192.168.1.151:37017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 79,
      optime: { ts: Timestamp({ t: 1733213426, i: 2 }), t: Long('3') },
      optimeDate: ISODate('2024-12-03T08:10:26.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1733213426, i: 2 }), t: Long('3') },
      optimeWrittenDate: ISODate('2024-12-03T08:10:26.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:10:26.929Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:10:26.929Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:10:26.929Z'),
      syncSourceHost: '192.168.1.151:47017',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 6,
      configTerm: 3,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 2,
      name: '192.168.1.151:47017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 76,
      optime: { ts: Timestamp({ t: 1733213426, i: 2 }), t: Long('3') },
      optimeDurable: { ts: Timestamp({ t: 1733213426, i: 2 }), t: Long('3') },
      optimeWritten: { ts: Timestamp({ t: 1733213426, i: 2 }), t: Long('3') },
      optimeDate: ISODate('2024-12-03T08:10:26.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:10:26.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:10:26.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:10:26.929Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:10:26.929Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:10:26.929Z'),
      lastHeartbeat: ISODate('2024-12-03T08:10:31.465Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:10:30.957Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1733213426, i: 1 }),
      electionDate: ISODate('2024-12-03T08:10:26.000Z'),
      configVersion: 6,
      configTerm: 3
    },
    {
      _id: 3,
      name: '192.168.1.151:17017',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 76,
      lastHeartbeat: ISODate('2024-12-03T08:10:31.460Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:10:30.963Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 6,
      configTerm: 3
    }
  ]

Restart the 27017 node (sketch below), then enter the slave1 container again and check the replica set status: the restarted 27017 node has rejoined as a secondary, and data can still only be written through the 47017 node.
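
A minimal sketch, again using the container name from the setup above:

# Restart the old primary; it rejoins the replica set as a secondary
docker start mongo_master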

# Enter the container
docker exec -it mongo_slave1 bash

# Open a mongosh shell (the node listens on the default port 27017 inside the container)
./usr/bin/mongosh --port 27017

# Authenticate (if not already logged in)
use admin;
db.auth("root", "123456");

# Check the replica set status
rs.status();

members: [
    {
      _id: 0,
      name: '192.168.1.151:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 10,
      optime: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
      optimeDurable: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
      optimeWritten: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
      optimeDate: ISODate('2024-12-03T08:12:26.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:12:26.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:12:26.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:12:36.945Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:12:36.945Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:12:36.945Z'),
      lastHeartbeat: ISODate('2024-12-03T08:12:36.898Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:12:36.347Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.1.151:37017',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 6,
      configTerm: 3
    },
    {
      _id: 1,
      name: '192.168.1.151:37017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 204,
      optime: { ts: Timestamp({ t: 1733213556, i: 1 }), t: Long('3') },
      optimeDate: ISODate('2024-12-03T08:12:36.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1733213556, i: 1 }), t: Long('3') },
      optimeWrittenDate: ISODate('2024-12-03T08:12:36.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:12:36.945Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:12:36.945Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:12:36.945Z'),
      syncSourceHost: '192.168.1.151:47017',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 6,
      configTerm: 3,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 2,
      name: '192.168.1.151:47017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 200,
      optime: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
      optimeDurable: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
      optimeWritten: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
      optimeDate: ISODate('2024-12-03T08:12:26.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:12:26.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:12:26.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:12:26.945Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:12:26.945Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:12:26.945Z'),
      lastHeartbeat: ISODate('2024-12-03T08:12:35.744Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:12:35.191Z'),
      pingMs: Long('1'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1733213426, i: 1 }),
      electionDate: ISODate('2024-12-03T08:10:26.000Z'),
      configVersion: 6,
      configTerm: 3
    },
    {
      _id: 3,
      name: '192.168.1.151:17017',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 200,
      lastHeartbeat: ISODate('2024-12-03T08:12:35.694Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:12:35.190Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 6,
      configTerm: 3
    }
  ]

3.2.3 Connecting to the MongoDB replica set with Navicat Premium

Configure the connection as shown below:

  1. Connection: Replica Set
  2. Members: the addresses of all replica set nodes
  3. Read preference: Primary
  4. Replica set: (optional)
  5. Authentication: Password
    (screenshot: Navicat replica set connection settings)

3.2.4 Connecting to the MongoDB replica set from Golang

To connect only to the primary node of the replica set, the URI must include ?connect=direct. Full configuration:

Uri: mongodb://192.168.1.151:27017/?connect=direct # when MongoDB runs as a replica set, connecting to a single node only requires adding /?connect=direct

To connect to the whole replica set:

Uri: mongodb://192.168.1.151:27017,192.168.1.151:37017,192.168.1.151:47017/?replicaSet=rs
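
A minimal Go sketch of connecting to the replica set is shown below. It assumes the official go.mongodb.org/mongo-driver package (v1 API), the root/123456 account created during setup, and a hypothetical testdb.user collection; adjust the names to your environment.

package main

import (
	"context"
	"fmt"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/readpref"
)

func main() {
	// Replica set URI from above; credentials are the assumed root/123456 account.
	// For a primary-only direct connection, use the ?connect=direct URI instead.
	uri := "mongodb://root:123456@192.168.1.151:27017,192.168.1.151:37017,192.168.1.151:47017/?replicaSet=rs&authSource=admin"

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		panic(err)
	}
	defer client.Disconnect(context.Background())

	// Ping the primary to confirm the replica set connection works.
	if err := client.Ping(ctx, readpref.Primary()); err != nil {
		panic(err)
	}

	// Hypothetical test collection: the write goes to the primary and is
	// replicated to the secondaries automatically.
	coll := client.Database("testdb").Collection("user")
	res, err := coll.InsertOne(ctx, map[string]interface{}{"name": "replica-set-test"})
	if err != nil {
		panic(err)
	}
	fmt.Println("inserted id:", res.InsertedID)
}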

3.3 MongoDB sharded cluster (not yet set up successfully)

