= 0%, Cumulative CPU 0.89 sec
2013-08-05 17:22:12,685 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 0.89 sec
2013-08-05 17:22:13,691 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.61 sec
2013-08-05 17:22:14,696 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.61 sec
2013-08-05 17:22:15,704 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.61 sec
MapReduce Total cumulative CPU time: 2 seconds 610 msec
Ended Job = job_201307151509_15438
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 2.61 sec HDFS Read: 458 HDFS Write: 112 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 610 msec
OK
1 121212 030729
2 131313 030729
3 141414 030729
4 151515 030729
5 161616 030729
6 171717 030729
8 191919 030729
Time taken: 18.044 seconds
Result: in strict mode, sort by executes normally even without a LIMIT clause; sort by is largely unaffected by hive.mapred.mode=strict.
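For contrast, a minimal sketch (assuming the same tb_in_base table used throughout this section): in strict mode Hive rejects an ORDER BY that has no LIMIT, while SORT BY runs with no LIMIT at all:

```sql
-- Assumes the tb_in_base table from the earlier examples.
SET hive.mapred.mode=strict;

-- ORDER BY without LIMIT is rejected in strict mode
-- (the exact error text varies by Hive version):
SELECT id, devid, job_time FROM tb_in_base ORDER BY devid;

-- SORT BY needs no LIMIT; it only sorts within each reducer:
SELECT id, devid, job_time FROM tb_in_base SORT BY devid;
```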
---set mapred.reduce.tasks=2, using sort by
hive> set mapred.reduce.tasks=2;
hive> insert overwrite local directory '/tmp/hivetest/sortby' select id,devid,job_time from tb_in_base where job_time=030729 sort by devid; // note: writes the query results to the specified local directory
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 2 // note: two reducers are used
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=&lt;number&gt;
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=&lt;number&gt;
In order to set a constant number of reducers:
set mapred.reduce.tasks=&lt;number&gt;
Starting Job = job_201307151509_15466, Tracking URL = http://mwtec-50:50030/jobdetails.jsp?jobid=job_201307151509_15466
Kill Command = /home/hadoop/hadoop-0.20.2/bin/hadoop job -Dmapred.job.tracker=mwtec-50:9002 -kill job_201307151509_15466
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 2
2013-08-05 18:03:48,298 Stage-1 map = 0%, reduce = 0%
2013-08-05 18:03:50,307 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.33 sec
2013-08-05 18:03:51,312 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.33 sec
2013-08-05 18:03:52,317 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.33 sec
2013-08-05 18:03:53,322 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.33 sec
2013-08-05 18:03:54,327 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.33 sec
2013-08-05 18:03:55,333 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.33 sec
2013-08-05 18:03:56,338 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.33 sec
2013-08-05 18:03:57,343 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 1.33 sec
2013-08-05 18:03:58,351 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.57 sec
2013-08-05 18:03:59,356 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.57 sec
2013-08-05 18:04:00,362 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.57 sec
MapReduce Total cumulative CPU time: 4 seconds 570 msec
Ended Job = job_201307151509_15466
Copying data to local directory /tmp/hivetest/sortby
Copying data to local directory /tmp/hivetest/sortby
7 Rows loaded to /tmp/hivetest/sortby
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 2 Cumulative CPU: 4.57 sec HDFS Read: 458 HDFS Write: 112 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 570 msec
OK
Time taken: 16.712 seconds
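Because each of the two reducers sorts its own slice of the data independently, the two output files are each sorted by devid, but their concatenation is not globally ordered. A hedged sketch of the common follow-ups (same table and filter as above; the output path /tmp/hivetest/distby is hypothetical): DISTRIBUTE BY controls which reducer each key goes to, and ORDER BY produces one globally sorted result at the cost of a single reducer:

```sql
-- Route all rows with the same devid to the same reducer,
-- then sort within each reducer:
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/hivetest/distby'
SELECT id, devid, job_time FROM tb_in_base
WHERE job_time=030729
DISTRIBUTE BY devid SORT BY devid;

-- For a single globally sorted output, use ORDER BY instead;
-- in strict mode this requires a LIMIT:
SELECT id, devid, job_time FROM tb_in_base
WHERE job_time=030729
ORDER BY devid LIMIT 100;
```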
View the query results under /tmp/hivetest/sortby: