    Installing Hive with MySQL
    BigData/Hive 2013. 12. 10. 21:52

    Hive with MySQL


    Download and install Hive

    $ su - hadoop

    $ wget http://apache.mirror.cdnetworks.com/hive/hive-0.12.0/hive-0.12.0-bin.tar.gz


    Extract the archive and copy it into the Hadoop directory (the destination can be wherever you prefer)

    $ tar xvzf hive-0.12.0*.tar.gz && cp -R hive-0.12.0-bin /usr/local/hadoop/hive
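
    A quick sanity check (assuming the paths used above): listing the copied directory should show bin, conf, and lib among others.

    $ ls /usr/local/hadoop/hive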


    Set HIVE_HOME and PATH in your profile

    $ vi .profile


    Add the following:

    # hive home

    export HIVE_HOME=/usr/local/hadoop/hive


    # path 

    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin


    Reload .profile

    $ source ~/.profile
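
    To check that the new variables are picked up (a quick sanity check, not in the original post):

    $ echo $HIVE_HOME
    /usr/local/hadoop/hive

    $ which hive
    /usr/local/hadoop/hive/bin/hive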


    Start-up test

    - Start your own Hadoop cluster first

    $ $HADOOP_HOME/bin/hadoop dfs -mkdir /tmp

    $ $HADOOP_HOME/bin/hadoop dfs -mkdir /home /home/hive /home/hive/warehouse

    $ $HADOOP_HOME/bin/hadoop dfs -chmod g+w   /tmp

    $ $HADOOP_HOME/bin/hadoop dfs -chmod g+w   /home /home/hive /home/hive/warehouse
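
    To confirm the directories and permissions were created as expected (an optional check, using the same deprecated hadoop dfs syntax as the commands above):

    $ $HADOOP_HOME/bin/hadoop dfs -ls /
    $ $HADOOP_HOME/bin/hadoop dfs -ls /home/hive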


    Run Hive

    $ hive

    hive> show databases;

    OK

    default

    Time taken: 0.056 seconds, Fetched: 1 row(s)

    hive>  set -v;


    If you get output like this, Hive itself is up and running.



    Install MySQL

    $ sudo apt-get update && sudo apt-get install mysql-server
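
    On Ubuntu the MySQL server is started automatically after installation; if you want to make sure it is running (an optional check, not in the original post):

    $ sudo service mysql status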


    Create a MySQL user for Hive

    $ mysql -u root -p

    mysql> grant all privileges on *.* to hive@localhost identified by 'hive' with grant option;  

    mysql> exit;
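
    If you want to double-check the new account, you can log back in as root and list its grants (optional):

    mysql> show grants for 'hive'@'localhost';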


    $ mysql -u hive -p

    mysql> create database hive;

    Query OK, 1 row affected (0.00 sec)


    mysql> use hive;

    Database changed




    Download the MySQL connector library for Hive

    $ wget http://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.27.tar.gz && tar xvzf mysql*.gz


    Copy it into $HIVE_HOME/lib

    $ cp mysql-connector-java-5.1.27/*.jar $HIVE_HOME/lib
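
    A quick way to confirm the connector JAR is now on Hive's classpath (optional):

    $ ls $HIVE_HOME/lib | grep mysql-connector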




    Configure the MySQL connection in Hive

    hive-site.xml

    $ vi $HIVE_HOME/conf/hive-site.xml


    Insert the content below:

    <configuration>

       <property>

         <name>hive.metastore.local</name>

         <value>true</value>

       </property>


       <property>

         <name>javax.jdo.option.ConnectionURL</name>

         <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>

       </property>


       <property>

         <name>javax.jdo.option.ConnectionDriverName</name>

         <value>com.mysql.jdbc.Driver</value>

       </property>


       <property>

         <name>javax.jdo.option.ConnectionUserName</name>

         <value>hive</value>

       </property>


       <property>

         <name>javax.jdo.option.ConnectionPassword</name>

         <value>hive</value>

       </property>

    </configuration>
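
    One optional addition (not part of the original setup): the directories created earlier use /home/hive/warehouse, while Hive's default warehouse location is /user/hive/warehouse. If you want Hive to use the former, add a property like this inside the <configuration> block as well:

       <property>
         <name>hive.metastore.warehouse.dir</name>
         <value>/home/hive/warehouse</value>
       </property>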




    Final test

    $ hive

    hive> create table hive01(seq int, crt_dt date);

    OK

    Time taken: 0.031 seconds

    hive> show tables;

    OK

    hive01

    Time taken: 0.02 seconds, Fetched: 1 row(s)

    hive> select * from hive01;

    OK

    Time taken: 0.188 seconds

    hive> 
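
    If you also want to see the table hold some data, one way (an extra example, not in the original post; the file name and rows are made up) is to load a small local file. Hive 0.12 has no INSERT ... VALUES, and hive01 was created without a ROW FORMAT clause, so its field delimiter is the default Ctrl-A (\001):

    $ printf '1\x012013-12-10\n2\x012013-12-11\n' > /tmp/hive01.txt

    $ hive
    hive> load data local inpath '/tmp/hive01.txt' into table hive01;
    hive> select * from hive01;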



    How to check that the metadata was actually stored in MySQL

    hadoop@sidcode-worlds:~$ mysql -u hive -p
    Enter password: 
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 60
    Server version: 5.5.34-0ubuntu0.13.10.1 (Ubuntu)

    Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | hive               |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    5 rows in set (0.00 sec)

    mysql> use hive;
    Database changed
    mysql> show tables;
    +---------------------------+
    | Tables_in_hive            |
    +---------------------------+
    | BUCKETING_COLS            |
    | CDS                       |
    | COLUMNS_V2                |
    | DATABASE_PARAMS           |
    | DBS                       |
    | IDXS                      |
    | INDEX_PARAMS              |
    | PARTITIONS                |
    | PARTITION_KEYS            |
    | PARTITION_KEY_VALS        |
    | PARTITION_PARAMS          |
    | PART_COL_PRIVS            |
    | PART_PRIVS                |
    | SDS                       |
    | SD_PARAMS                 |
    | SEQUENCE_TABLE            |
    | SERDES                    |
    | SERDE_PARAMS              |
    | SKEWED_COL_NAMES          |
    | SKEWED_COL_VALUE_LOC_MAP  |
    | SKEWED_STRING_LIST        |
    | SKEWED_STRING_LIST_VALUES |
    | SKEWED_VALUES             |
    | SORT_COLS                 |
    | TABLE_PARAMS              |
    | TAB_COL_STATS             |
    | TBLS                      |
    | TBL_COL_PRIVS             |
    | TBL_PRIVS                 |
    | VERSION                   |
    +---------------------------+
    30 rows in set (0.00 sec)

    mysql> SELECT * FROM hive.TBLS;
    +--------+-------------+-------+------------------+--------+-----------+-------+----------+---------------+--------------------+--------------------+
    | TBL_ID | CREATE_TIME | DB_ID | LAST_ACCESS_TIME | OWNER  | RETENTION | SD_ID | TBL_NAME | TBL_TYPE      | VIEW_EXPANDED_TEXT | VIEW_ORIGINAL_TEXT |
    +--------+-------------+-------+------------------+--------+-----------+-------+----------+---------------+--------------------+--------------------+
    |      2 |  1386679369 |     1 |                0 | hadoop |         0 |     2 | hive01   | MANAGED_TABLE | NULL               | NULL               |
    +--------+-------------+-------+------------------+--------+-----------+-------+----------+---------------+--------------------+--------------------+
    1 row in set (0.00 sec)

    mysql> 

    Note the relevant columns:
    OWNER: hadoop,
    TBL_NAME: hive01

    This confirms that the table we just created in Hive was stored correctly in the MySQL metastore. That's it!
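
    As one more optional check (not in the original post), the DBS table records where each Hive database stores its data; the column names below assume the standard Hive 0.12 metastore schema:

    mysql> SELECT DB_ID, NAME, DB_LOCATION_URI FROM hive.DBS;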






