VI. Hive installation and configuration (operation six)

MySQL installation

Download the MySQL repository package from the official website (for yum installation):

wget http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm

If wget is not available, install it first:

yum -y install wget

Install the repository package:

rpm -ivh mysql-community-release-el7-5.noarch.rpm

Install the MySQL server:

yum install mysql-community-server

Restart the mysqld service:

service mysqld restart
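
To confirm the service is running, check its status (same SysV service tooling as above):

service mysqld status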

Log in to MySQL:

mysql -u root

Set the password of the root user to root:

mysql> set password for 'root'@'localhost' = password('root');

Remote connection settings:
Grant all privileges on all tables in all databases to the root user connecting from any IP address:

mysql> grant all privileges on *.* to 'root'@'%' identified by 'root';

Refresh the privileges:

mysql> flush privileges;

If you use a new user instead of root, create the user first:

mysql> create user 'username'@'%' identified by 'password';
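
To verify remote access, try connecting from another machine; the host below is a placeholder for your MySQL server's address:

mysql -h <server-ip> -u root -p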

Before reinstalling MySQL, you must first remove the old installation completely.
Check whether MySQL is installed:

rpm -qa | grep -i mysql

Uninstall each MySQL package in turn:

rpm -e --nodeps <package-name>

Find the remaining MySQL directories or files:

find / -name mysql
whereis mysql

Delete each directory found:

rm -rf <directory>

Delete the MySQL configuration files:

/usr/my.cnf and /root/.mysql_secret.
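
A minimal cleanup sketch combining the steps above (run as root, and review what find reports before deleting anything):

for pkg in $(rpm -qa | grep -i mysql); do rpm -e --nodeps "$pkg"; done
find / -name mysql 2>/dev/null
rm -f /usr/my.cnf /root/.mysql_secret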
Hive installation and configuration

Download Hive 2.3.5 with wget:

wget http://mirror.bit.edu.cn/apache/hive/hive-2.3.5/apache-hive-2.3.5-bin.tar.gz

Extract the archive (here to /usr/local, which matches the HIVE_HOME used below):

tar -zxvf apache-hive-2.3.5-bin.tar.gz -C /usr/local

Rename the directory to hive:

mv apache-hive-2.3.5-bin hive

Modify the environment variables:

vi /etc/profile

export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin

Run source /etc/profile for the changes to take effect.
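
A quick check that the variables took effect:

echo $HIVE_HOME
hive --version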

Configure hive-env.sh:
Rename the template, then set the installation path of Hadoop and the path of Hive's conf directory:

cp hive-env.sh.template hive-env.sh

In hive-env.sh:

HADOOP_HOME=/opt/module/hadoop-2.7.7
export HIVE_CONF_DIR=/usr/local/hive/conf

Configure hive-site.xml:
Rename the template:

cp hive-default.xml.template hive-site.xml

Add the following properties:

<property>
  <!-- JDBC connection URL of the MySQL metastore database -->
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://bigdata131:3306/hivedb?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <!-- JDBC driver class: com.mysql.jdbc.Driver for MySQL Connector/J 5, com.mysql.cj.jdbc.Driver from Connector/J 6 on -->
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <!-- MySQL user name -->
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <!-- MySQL connection password; use your own -->
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root</value>
  <description>password to use against metastore database</description>
</property>
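
Note that Hive does not ship with the MySQL JDBC driver, so copy the Connector/J jar into Hive's lib directory before initializing the metastore; the jar name below is an example, use the version you downloaded:

cp mysql-connector-java-5.1.47.jar /usr/local/hive/lib/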
Start Hive

Start Hadoop:

start-all.sh

Initialize the metastore schema:

schematool -dbType mysql -initSchema
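
To verify the initialization, schematool can report the current schema version:

schematool -dbType mysql -info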

Start Hive:

hive

The hive> prompt indicates that you are in the Hive shell.
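
A quick smoke test at the prompt (lists the built-in default database):

hive> show databases;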

Hive application example: wordcount

Create the data source file and upload it to the /user/input directory of HDFS.
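
A minimal example, assuming a hypothetical sample file named words.txt:

echo "hello world hello hive" > words.txt
hdfs dfs -mkdir -p /user/input
hdfs dfs -put words.txt /user/input/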
Create the data source table t1:

create table t1 (line string);

Load the data:

load data inpath '/user/input' overwrite into table t1;

Write a HiveQL statement that implements the wordcount algorithm, creating table wct1 to hold the result (split breaks each line into an array of words, and explode flattens that array into one row per word):

create table wct1 as select word, count(1) as count from (select explode(split(line, ' ')) as word from t1) w group by word order by word;

View the wordcount result:

select * from wct1;
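
With the hypothetical words.txt above, the result would be:

hello   2
hive    1
world   1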

