Hive Integration with MySQL
Installing MySQL
1) Upload the MySQL installation package and the MySQL JDBC driver jar
mysql-5.7.28-1.el7.x86_64.rpm-bundle.tar
mysql-connector-java-5.1.37.jar
2) Extract the MySQL installation package
[lzl@hadoop12 software]$ mkdir mysql_lib
[lzl@hadoop12 software]$ tar -xf mysql-5.7.28-1.el7.x86_64.rpm-bundle.tar -C mysql_lib/
3) Remove the mariadb packages that ship with the system
[lzl@hadoop12 ~]$ sudo rpm -qa | grep mariadb | xargs sudo rpm -e --nodeps
4) Install the MySQL dependencies
[lzl@hadoop12 software]$ cd mysql_lib
[lzl@hadoop12 mysql_lib]$ sudo rpm -ivh mysql-community-common-5.7.28-1.el7.x86_64.rpm
[lzl@hadoop12 mysql_lib]$ sudo rpm -ivh mysql-community-libs-5.7.28-1.el7.x86_64.rpm
[lzl@hadoop12 mysql_lib]$ sudo rpm -ivh mysql-community-libs-compat-5.7.28-1.el7.x86_64.rpm
5) Install mysql-client
[lzl@hadoop12 mysql_lib]$ sudo rpm -ivh mysql-community-client-5.7.28-1.el7.x86_64.rpm
6) Install mysql-server
[lzl@hadoop12 mysql_lib]$ sudo rpm -ivh mysql-community-server-5.7.28-1.el7.x86_64.rpm
Note: if you see the following error
warning: 05_mysql-community-server-5.7.16-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
error: Failed dependencies:
libaio.so.1()(64bit) is needed by mysql-community-server-5.7.16-1.el7.x86_64
Fix: install the missing libaio dependency
[lzl@hadoop12 software]$ sudo yum -y install libaio
7) Start MySQL
[lzl@hadoop12 software]$ sudo systemctl start mysqld
8) Look up the temporary MySQL root password
[lzl@hadoop12 software]$ sudo cat /var/log/mysqld.log | grep password
Configuring MySQL
The goal of this configuration is to let the root user, with a password, log in to the MySQL database from any host. A quick verification example follows the numbered steps below.
1) Log in to MySQL with the temporary password found above (if this fails, wrap the password in single quotes)
[lzl@hadoop12 software]$ mysql -uroot -p'password'
2) Set a complex password (the default MySQL password policy requires one)
mysql> set password=password("Qs23=zs32");
3) Relax the MySQL password policy
mysql> set global validate_password_policy=0;
mysql> set global validate_password_length=4;
4) Set a simple, easy-to-remember password
mysql> set password=password("123456");
5) Switch to the mysql database
mysql> use mysql
6) Query the user table
mysql> select user, host from user;
7) Update the user table, setting the Host column for root to %
mysql> update user set host="%" where user="root";
8) Flush the privileges
mysql> flush privileges;
9) Exit
mysql> quit;
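To confirm that remote root login now works, you can optionally log in by hostname instead of through the local socket and re-check the user table. This check is an addition here, with the hadoop12 hostname assumed from the prompts above:
[lzl@hadoop12 software]$ mysql -h hadoop12 -uroot -p123456 -e "select user, host from mysql.user;"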
Notes on Uninstalling MySQL
If MySQL needs to be removed and reinstalled, for example because the installation failed, the following steps can be used.
- Clear out the existing data
  - Find where MySQL stores its data via /etc/my.cnf:
    [lzl@hadoop12 software]$ sudo cat /etc/my.cnf
    [mysqld]
    datadir=/var/lib/mysql
  - Go to /var/lib/mysql and delete the data (root privileges are required):
    [lzl@hadoop12 mysql]$ su - root
    [root@hadoop12 ~]# cd /var/lib/mysql
    [root@hadoop12 mysql]# rm -rf *
- Uninstall the MySQL-related packages
  - List the installed MySQL-related packages:
    [lzl@hadoop12 software]$ sudo rpm -qa | grep -i -E mysql
    Sample output:
    mysql-community-libs-5.7.16-1.el7.x86_64
    mysql-community-client-5.7.16-1.el7.x86_64
    mysql-community-common-5.7.16-1.el7.x86_64
    mysql-community-libs-compat-5.7.16-1.el7.x86_64
    mysql-community-server-5.7.16-1.el7.x86_64
  - Uninstall them all in one command:
    [lzl@hadoop12 software]$ rpm -qa | grep -i -E mysql\|mariadb | xargs -n1 sudo rpm -e --nodeps
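As an optional follow-up check (not part of the original steps), re-running the package query after the uninstall should return no output:
[lzl@hadoop12 software]$ rpm -qa | grep -i -E mysql\|mariadb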
Configuring Hive to Store Its Metadata in MySQL
Configure the metastore to use MySQL
- Create the Hive metastore database
  - Log in to MySQL:
    [lzl@hadoop12 software]$ mysql -uroot -p123456
  - Create the metastore database:
    mysql> create database metastore;
    mysql> quit;
- Copy the MySQL JDBC driver into Hive's lib directory:
  [lzl@hadoop12 software]$ cp /opt/software/mysql-connector-java-5.1.37.jar $HIVE_HOME/lib
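As a quick optional check (not in the original steps), confirm the driver jar is now present in Hive's lib directory:
[lzl@hadoop12 software]$ ls $HIVE_HOME/lib | grep mysql-connector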
- Create a hive-site.xml file in the $HIVE_HOME/conf directory:
  [lzl@hadoop12 software]$ vim $HIVE_HOME/conf/hive-site.xml
  Add the following content:
  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
      <!-- JDBC connection URL -->
      <property>
          <name>javax.jdo.option.ConnectionURL</name>
          <value>jdbc:mysql://hadoop12:3306/metastore?useSSL=false</value>
      </property>
      <!-- JDBC connection driver -->
      <property>
          <name>javax.jdo.option.ConnectionDriverName</name>
          <value>com.mysql.jdbc.Driver</value>
      </property>
      <!-- JDBC connection username -->
      <property>
          <name>javax.jdo.option.ConnectionUserName</name>
          <value>root</value>
      </property>
      <!-- JDBC connection password -->
      <property>
          <name>javax.jdo.option.ConnectionPassword</name>
          <value>123456</value>
      </property>
      <!-- Hive's default working directory on HDFS -->
      <property>
          <name>hive.metastore.warehouse.dir</name>
          <value>/user/hive/warehouse</value>
      </property>
  </configuration>
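Before initializing the schema, it can help to confirm that the MySQL instance named in the JDBC URL is reachable with the configured credentials. This optional sanity check is an addition here:
[lzl@hadoop12 software]$ mysql -h hadoop12 -P 3306 -uroot -p123456 -e "show databases like 'metastore';"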
- Initialize the Hive metastore schema (this switches metadata storage over to MySQL):
  [lzl@hadoop12 hive]$ bin/schematool -dbType mysql -initSchema -verbose
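If the initialization succeeds, schematool should be able to report the schema version it just created; the -info option is shown here as an optional check, not part of the original steps:
[lzl@hadoop12 hive]$ bin/schematool -dbType mysql -info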
Verify that the metastore configuration works
- Start Hive again:
  [lzl@hadoop12 hive]$ bin/hive
- Use Hive:
  hive> show databases;
  hive> show tables;
  hive> create table stu(id int, name string);
  hive> insert into stu values(1,"ss");
  hive> select * from stu;
- Open a second Xshell window and start another Hive session. Both windows can operate on Hive at the same time without errors, because the MySQL-backed metastore supports concurrent connections (the default embedded Derby metastore allows only one session at a time):
  hive> show databases;
  hive> show tables;
  hive> select * from stu;
View the metadata in MySQL
- Log in to MySQL:
  [lzl@hadoop12 hive]$ mysql -uroot -p123456
- Look at the metastore database:
  mysql> show databases;
  mysql> use metastore;
  mysql> show tables;
- Database information stored in the metastore (the DBS table):
  mysql> select * from DBS;
  +-------+-----------------------+------------------------------------------+---------+------------+------------+-----------+
  | DB_ID | DESC                  | DB_LOCATION_URI                          | NAME    | OWNER_NAME | OWNER_TYPE | CTLG_NAME |
  +-------+-----------------------+------------------------------------------+---------+------------+------------+-----------+
  |     1 | Default Hive database | hdfs://hadoop12:8020/user/hive/warehouse | default | public     | ROLE       | hive      |
  +-------+-----------------------+------------------------------------------+---------+------------+------------+-----------+
- Table information stored in the metastore (the TBLS table):
  mysql> select * from TBLS;
  +--------+-------------+-------+------------------+---------+------------+-----------+-------+----------+---------------+
  | TBL_ID | CREATE_TIME | DB_ID | LAST_ACCESS_TIME | OWNER   | OWNER_TYPE | RETENTION | SD_ID | TBL_NAME | TBL_TYPE      |
  +--------+-------------+-------+------------------+---------+------------+-----------+-------+----------+---------------+
  |      1 |  1656318303 |     1 |                0 | lzl     | USER       |         0 |     1 | stu      | MANAGED_TABLE |
  +--------+-------------+-------+------------------+---------+------------+-----------+-------+----------+---------------+
- Column information for the tables stored in the metastore (the COLUMNS_V2 table):
  mysql> select * from COLUMNS_V2;
  +-------+----------+---------+------------+-------------+-------------+--------+
  | CS_ID | CAT_NAME | DB_NAME | TABLE_NAME | COLUMN_NAME | COLUMN_TYPE | TBL_ID |
  +-------+----------+---------+------------+-------------+-------------+--------+
  |     1 | hive     | default | stu        | id          | int         |      1 |
  |     2 | hive     | default | stu        | name        | string      |      1 |
  +-------+----------+---------+------------+-------------+-------------+--------+
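The metastore also records where each table's data lives on HDFS. As an illustrative query (assuming the standard metastore schema, in which TBLS.SD_ID references the SDS table), the storage location of stu can be looked up like this:
mysql> select t.TBL_NAME, s.LOCATION from TBLS t join SDS s on t.SD_ID = s.SD_ID;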