Integrating Hive with HDFS
2018-11-13 14:56:27


# by coco
# 2014-07-25


Hands-on Hive exercises (four goals):
1. Import an ordinary HDFS file into Hive so we can query it.
2. Import HBase tables into Hive so we can query them.
3. Import MySQL tables into Hive so we can query them.
4. Export Hive query and analysis results back to MySQL for display in web pages.


This article covers the first goal: importing an ordinary HDFS file into Hive so we can query it. The query results can then be saved to any of three places:
1. Into a new table (created first with CREATE TABLE)
2. Into the local file system
3. Into HDFS




Now let's test this step by step.
1. Import an ordinary file into Hive.
Create an ordinary file under /root:
[root@db96 ~]# cat hello.txt
# This is a text txt
# by coco
# 2014-07-18
Import the file into Hive:
hive> create table pokes(foo int,bar string);
OK
Time taken: 0.068 seconds
hive> load data local inpath '/root/hello.txt' overwrite into table pokes;
Copying data from file:/root/hello.txt
Copying file: file:/root/hello.txt
Loading data to table default.pokes
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted hdfs://db96:9000/user/hive/warehouse/pokes
Table default.pokes stats: [numFiles=1, numRows=0, totalSize=59, rawDataSize=0]
OK
Time taken: 0.683 seconds
hive> select * from pokes;
OK
NULL NULL
NULL NULL
NULL NULL
NULL NULL
NULL NULL
Time taken: 0.237 seconds, Fetched: 5 row(s)
hive>
hive> load data local inpath '/root/hello.txt' overwrite into table test;
Copying data from file:/root/hello.txt
Copying file: file:/root/hello.txt
Loading data to table default.test
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted hdfs://db96:9000/hive/warehousedir/test
Failed with exception Unable to alter table.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
hive> show tables;
OK
hivetest
pokes
test
test3
Time taken: 1.045 seconds, Fetched: 4 row(s)
hive> select * from test;
OK
# This is a text txt NULL
# by coco NULL
# 2014-07-18 NULL
NULL
hello world!! NULL
Time taken: 1.089 seconds, Fetched: 5 row(s)
The load above appears to succeed, yet every column comes back NULL. That is because no field delimiter was specified when pokes was created, so the file's lines cannot be split into the declared columns. The test table, by contrast, was defined with fields terminated by '\t' and lines terminated by '\n'; even though its load reported an error, the data was still inserted.
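The NULL rows can be reproduced outside Hive. With no ROW FORMAT clause, Hive falls back to its default SerDe with Ctrl-A (`\x01`) as the field delimiter; hello.txt contains no `\x01` bytes, so the whole line lands in the first column, and casting text such as `# This is a text txt` to INT yields NULL. A minimal Python sketch of that behavior (the helper name `parse_row` is mine, not a Hive API):

```python
def parse_row(line, delimiter="\x01", types=("int", "string")):
    """Split a raw line the way Hive's default SerDe does, then coerce
    each field to the declared column type, using None (rendered as
    NULL) when the field is missing or the cast fails."""
    fields = line.rstrip("\n").split(delimiter)
    row = []
    for i, typ in enumerate(types):
        raw = fields[i] if i < len(fields) else None
        if raw is None:
            row.append(None)
        elif typ == "int":
            try:
                row.append(int(raw))
            except ValueError:
                row.append(None)  # Hive returns NULL for a bad cast
        else:
            row.append(raw)
    return row

# No \x01 in the line: foo gets the whole line (not a valid int -> None)
# and bar gets nothing (-> None), i.e. the "NULL NULL" rows above.
print(parse_row("# This is a text txt"))
```

A properly delimited line such as `"1\x01hello"` would parse to `[1, "hello"]`, which is why the delimiter clause matters.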
The correct import syntax is:
create table aaa(time string,myname string,yourname string) row format delimited
fields terminated by '\t' lines terminated by '\n' stored as textfile
hive> create table aaa(time string,myname string,yourname string) row format delimited
> fields terminated by '\t' lines terminated by '\n' stored as textfile;
OK
Time taken: 1.011 seconds
hive> load data local inpath '/root/aaaa.txt' overwrite
> into table aaa;
Copying data from file:/root/aaaa.txt
Copying file: file:/root/aaaa.txt
Loading data to table default.aaa
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted hdfs://db96:9000/hive/warehousedir/aaa
[Warning] could not update stats.
OK
Time taken: 2.686 seconds
hive> select * from aaa;
OK
20140723,yting,xmei NULL NULL
Time taken: 0.054 seconds, Fetched: 1 row(s)
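The single row above shows the same kind of mismatch from the other direction: aaaa.txt separates its values with commas, but table aaa splits fields on tabs, so the entire line `20140723,yting,xmei` falls into the first column and the remaining two columns come back NULL. A small sketch (the file contents are taken from the transcript; `split_fields` is a name of my own):

```python
def split_fields(line, n_cols, delimiter="\t"):
    """Split a line on the table's field delimiter and pad missing
    trailing columns with None, which Hive renders as NULL."""
    fields = line.rstrip("\n").split(delimiter)
    return (fields + [None] * n_cols)[:n_cols]

# The file uses commas but the table expects tabs:
# everything ends up in the first column, the rest are NULL.
print(split_fields("20140723,yting,xmei", 3))

# A tab-delimited line would populate all three columns.
print(split_fields("20140723\tyting\txmei", 3))
```

To get three populated columns, either the file must use tabs or the table's `fields terminated by` clause must be `','`.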


2. Export the query results.
Export the data in a Hive table and save it as plain text.
First run the query whose results we want:
hive> select time from aaa;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1405999790746_0002, Tracking URL = http://db96:8088/proxy/application_1405999790746_0002/
Kill Command = /usr/local/hadoop//bin/hadoop job -kill job_1405999790746_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2014-07-23 16:28:51,690 Stage-1 map = 0%, reduce = 0%
2014-07-23 16:29:02,457 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.33 sec
MapReduce Total cumulative CPU time: 1 seconds 330 msec
Ended Job = job_1405999790746_0002
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 1.33 sec HDFS Read: 221 HDFS Write: 20 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 330 msec
OK
20140723,yting,xmei
Time taken: 26.281 seconds, Fetched: 1 row(s)
Write the query results to a local directory:
hive> insert overwrite local directory '/tmp' select a.time from aaa a;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1405999790746_0004, Tracking URL = http://db96:8088/proxy/application_1405999790746_0004/
Kill Command = /usr/local/hadoop//bin/hadoop job -kill job_1405999790746_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2014-07-23 16:34:28,474 Stage-1 map = 0%, reduce = 0%
2014-07-23 16:34:35,128 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.27 sec
MapReduce Total cumulative CPU time: 1 seconds 270 msec
Ended Job = job_1405999790746_0004
Copying data to local directory /tmp
Copying data to local directory /tmp
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 1.27 sec HDFS Read: 221 HDFS Write: 20 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 270 msec
OK
Time taken: 21.943 seconds
Under /tmp there is indeed a file, 000000_0, and its contents are exactly what the query returned:
[root@db96 tmp]# ll
total 4
-rw-r--r-- 1 root root 20 Jul 23 16:34 000000_0
[root@db96 tmp]# vim 000000_0


20140723,yting,xmei
~
Often we run a SELECT in Hive and want to save the final result to a new table, to the local file system, or to HDFS. Hive provides convenient keywords for each case:
1. Put the SELECT results into a new table (created first with CREATE TABLE):
insert overwrite table test select uid,name from test2;
2. Put the SELECT results into the local file system:
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/reg_3' SELECT a.* FROM events a;
3. Put the SELECT results into HDFS:
INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT a.* FROM invites a WHERE a.ds='<DATE>';
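For the local-directory case, the on-disk layout Hive produces is what the earlier transcript showed: one output file per task, named 000000_0 and so on, with fields joined by the default Ctrl-A delimiter. A rough Python imitation of that layout (the function name is mine; in a real export the files are written by the MapReduce tasks, not a single writer):

```python
import os
import tempfile

def insert_overwrite_local_directory(rows, directory, delimiter="\x01"):
    """Mimic INSERT OVERWRITE LOCAL DIRECTORY for a single task:
    write one file named 000000_0, one row per line, with the
    fields of each row joined by the default Ctrl-A delimiter."""
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, "000000_0")
    with open(path, "w") as f:
        for row in rows:
            f.write(delimiter.join(str(v) for v in row) + "\n")
    return path

# The single-column result from the transcript produces a file whose
# only line is the value itself, matching what vim showed in /tmp.
out_dir = tempfile.mkdtemp()
path = insert_overwrite_local_directory([("20140723,yting,xmei",)], out_dir)
print(open(path).read())
```

With multiple columns, each line of 000000_0 would contain the column values separated by the (invisible) `\x01` byte, which is why such files look like one run-together field when opened in an editor.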


With that, we have imported an ordinary local text file into Hive, queried it, and exported the query results to three different destinations.


A complete example of saving a query result into a new table:
hive> select a.time from aaa a;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1405999790746_0005, Tracking URL = http://db96:8088/proxy/application_1405999790746_0005/
Kill Command = /usr/local/hadoop//bin/hadoop job -kill job_1405999790746_0005
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2014-07-23 16:47:42,295 Stage-1 map = 0%, reduce = 0%
2014-07-23 16:47:49,567 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.19 sec
MapReduce Total cumulative CPU time: 1 seconds 190 msec
Ended Job = job_1405999790746_0005
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 1.19 sec HDFS Read: 221 HDFS Write: 20 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 190 msec
OK
a.time
20140723,yting,xmei
Time taken: 21.155 seconds, Fetched: 1 row(s)
hive> create table jieguo(content string);
OK
Time taken: 2.424 seconds
hive> insert overwrite table jieguo
> select a.time from aaa a;
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1405999790746_0006, Tracking URL = http://db96:8088/proxy/application_1405999790746_0006/
Kill Command = /usr/local/hadoop//bin/hadoop job -kill job_1405999790746_0006
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2014-07-23 16:49:50,689 Stage-1 map = 0%, reduce = 0%
2014-07-23 16:49:57,329 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.3 sec
MapReduce Total cumulative CPU time: 1 seconds 300 msec
Ended Job = job_1405999790746_0006
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://db96:9000/hive/scratchdir/hive_2014-07-23_16-49-36_884_4745480606977792448-1/-ext-10000
Loading data to table default.jieguo
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted hdfs://db96:9000/hive/warehousedir/jieguo
Failed with exception Unable to alter table.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 1.3 sec HDFS Read: 221 HDFS Write: 90 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 300 msec
hive> show tables;
OK
tab_name
aaa
hello
hivetest
jieguo
pokes
test
test3
Time taken: 1.03 seconds, Fetched: 7 row(s)
hive> select * from jieguo;
OK
jieguo.content
20140723,yting,xmei
Time taken: 1.043 seconds, Fetched: 1 row(s)