Steps for Handling Exceptions When Uploading Files to HDFS in Hadoop
Copyright notice: this is an original post; do not repost without the author's permission. Source: https://blog.csdn.net/wanyeye/article/details/12219085

The Hadoop environment setup mainly followed these two blog posts:

http://blog.csdn.net/hitwengqi/article/details/8008203

http://www.cnblogs.com/tippoint/archive/2012/10/23/2735532.html

My environment:

VM 9.0

Ubuntu 12.04

Hadoop 0.20.203.0

Eclipse helios-sr2-linux-gtk.xxx

Eclipse workspace: /home/hadoop/workspace

Eclipse install directory: /opt/eclipse

Hadoop install directory: /usr/local/hadoop

Following those two articles, the setup itself went fairly smoothly. A few errors and exceptions did come up along the way, but the two posts linked above cover how to resolve them.

When I got to the classic wordcount example, though, I can give no specific reference link: I consulted a great many blog posts at the time, and not one of the methods they described resolved the exceptions I was hitting.

Regrettably, I did not record the exact text of the many errors and exceptions, so I will summarize them loosely as "exceptions when uploading files to HDFS in Hadoop".

In the steps below I try to show the operations in full, pasted almost verbatim from the terminal. It may look messy, but it is the authentic record.

Here are the steps, explained in as much detail as I can manage.

hadoop@ubuntu:~$ ls

Desktop    Downloads         Music     Public     Videos
Documents  examples.desktop  Pictures  Templates  workspace

hadoop@ubuntu:~$ cd /usr

hadoop@ubuntu:/usr$ cd local

hadoop@ubuntu:/usr/local$ ls

bin  games   hadoop-0.20.203.0rc1.tar.gz  lib  sbin  src
etc  hadoop  include                      man  share

hadoop@ubuntu:/usr/local$ cd hadoop/

hadoop@ubuntu:/usr/local/hadoop$ ls

bin          docs                            LICENSE.txt
build.xml    hadoop-ant-0.20.203.0.jar       logs
c++          hadoop-core-0.20.203.0.jar      NOTICE.txt
CHANGES.txt  hadoop-examples-0.20.203.0.jar  README.txt
conf         hadoop-test-0.20.203.0.jar      src
contrib      hadoop-tools-0.20.203.0.jar     webapps
data1        ivy                             word.txt
data2        ivy.xml                         word.txt~
datalog1     lib
datalog2     librecordio


hadoop@ubuntu:/usr/local/hadoop$ cd data1

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

current  detach  storage  tmp

hadoop@ubuntu:/usr/local/hadoop/data1$ chmod 777 current

hadoop@ubuntu:/usr/local/hadoop/data1$ rm -rf current

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

detach  storage  tmp

hadoop@ubuntu:/usr/local/hadoop/data1$ chmod 777 tmp

hadoop@ubuntu:/usr/local/hadoop/data1$ rm -rf tmp

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

detach  storage

hadoop@ubuntu:/usr/local/hadoop/data1$ chmod 777 detach

hadoop@ubuntu:/usr/local/hadoop/data1$ rm -rf detach

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

storage

hadoop@ubuntu:/usr/local/hadoop/data1$ chmod 777 storage

hadoop@ubuntu:/usr/local/hadoop/data1$ rm -rf storage

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

hadoop@ubuntu:/usr/local/hadoop/data1$ cd..

cd..: command not found

hadoop@ubuntu:/usr/local/hadoop/data1$ cd ..

hadoop@ubuntu:/usr/local/hadoop$ ls

bin          docs                            LICENSE.txt
build.xml    hadoop-ant-0.20.203.0.jar       logs
c++          hadoop-core-0.20.203.0.jar      NOTICE.txt
CHANGES.txt  hadoop-examples-0.20.203.0.jar  README.txt
conf         hadoop-test-0.20.203.0.jar      src
contrib      hadoop-tools-0.20.203.0.jar     webapps
data1        ivy                             word.txt
data2        ivy.xml                         word.txt~
datalog1     lib
datalog2     librecordio

hadoop@ubuntu:/usr/local/hadoop$ cd data2

hadoop@ubuntu:/usr/local/hadoop/data2$ ls

current  detach  storage  tmp

hadoop@ubuntu:/usr/local/hadoop/data2$ chmod 777 current detach

hadoop@ubuntu:/usr/local/hadoop/data2$ rm -rf current detach

hadoop@ubuntu:/usr/local/hadoop/data2$ ls

storage  tmp

hadoop@ubuntu:/usr/local/hadoop/data2$ chmod 777 storage tmp/

hadoop@ubuntu:/usr/local/hadoop/data2$ ls

storage  tmp

hadoop@ubuntu:/usr/local/hadoop/data2$ rm -rf storage tmp/

hadoop@ubuntu:/usr/local/hadoop/data2$ ls

hadoop@ubuntu:/usr/local/hadoop/data2$ cd ..

hadoop@ubuntu:/usr/local/hadoop$ ls

bin          docs                            LICENSE.txt
build.xml    hadoop-ant-0.20.203.0.jar       logs
c++          hadoop-core-0.20.203.0.jar      NOTICE.txt
CHANGES.txt  hadoop-examples-0.20.203.0.jar  README.txt
conf         hadoop-test-0.20.203.0.jar      src
contrib      hadoop-tools-0.20.203.0.jar     webapps
data1        ivy                             word.txt
data2        ivy.xml                         word.txt~
datalog1     lib
datalog2     librecordio

hadoop@ubuntu:/usr/local/hadoop$ chmod 777 datalog1

hadoop@ubuntu:/usr/local/hadoop$ rm -rf datalog1

hadoop@ubuntu:/usr/local/hadoop$ ls

bin          docs                            librecordio
build.xml    hadoop-ant-0.20.203.0.jar       LICENSE.txt
c++          hadoop-core-0.20.203.0.jar      logs
CHANGES.txt  hadoop-examples-0.20.203.0.jar  NOTICE.txt
conf         hadoop-test-0.20.203.0.jar      README.txt
contrib      hadoop-tools-0.20.203.0.jar     src
data1        ivy                             webapps
data2        ivy.xml                         word.txt
datalog2     lib                             word.txt~

hadoop@ubuntu:/usr/local/hadoop$ chmod 777 datalog2

hadoop@ubuntu:/usr/local/hadoop$ rm -rf datalog2/

hadoop@ubuntu:/usr/local/hadoop$ ls

bin          data2                           ivy          README.txt
build.xml    docs                            ivy.xml      src
c++          hadoop-ant-0.20.203.0.jar       lib          webapps
CHANGES.txt  hadoop-core-0.20.203.0.jar      librecordio  word.txt
conf         hadoop-examples-0.20.203.0.jar  LICENSE.txt  word.txt~
contrib      hadoop-test-0.20.203.0.jar      logs
data1        hadoop-tools-0.20.203.0.jar     NOTICE.txt


Messy, right?

All of the operations above had a single purpose: to reformat HDFS, because uploading word.txt kept failing (again, I did not record the exact exception message). Reformatting requires deleting the files and directories generated by the previous format, so everything above is just deletion. The procedure is clumsy, I know; I was a newbie, looking each command up on Baidu as I went.
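Looking back, that whole dance collapses into a couple of commands. The chmod 777 before each rm -rf was unnecessary: deleting a file needs write permission on its parent directory, and the hadoop user already owned everything here. A minimal sketch, assuming data1, data2, datalog1 and datalog2 really are the directories your configuration points at:

bin/stop-all.sh   # stop the daemons first, so nothing recreates files mid-delete

rm -rf /usr/local/hadoop/data1 /usr/local/hadoop/data2 /usr/local/hadoop/datalog1 /usr/local/hadoop/datalog2   # -r recurses, -f skips prompts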

With the cleanup done, reformat the filesystem:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop nanmenode -format

Exception in thread "main" java.lang.NoClassDefFoundError: nanmenode
Caused by: java.lang.ClassNotFoundException: nanmenode
	at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
Could not find the main class: nanmenode. Program will exit.

hadoop@ubuntu:/usr/local/hadoop$ ls

bin          data2                           ivy          README.txt
build.xml    docs                            ivy.xml      src
c++          hadoop-ant-0.20.203.0.jar       lib          webapps
CHANGES.txt  hadoop-core-0.20.203.0.jar      librecordio  word.txt
conf         hadoop-examples-0.20.203.0.jar  LICENSE.txt  word.txt~
contrib      hadoop-test-0.20.203.0.jar      logs
data1        hadoop-tools-0.20.203.0.jar     NOTICE.txt

hadoop@ubuntu:/usr/local/hadoop$ cd data1

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

hadoop@ubuntu:/usr/local/hadoop/data1$ cd ..

hadoop@ubuntu:/usr/local/hadoop$ mkdir datalog1

hadoop@ubuntu:/usr/local/hadoop$ ls

bin          docs                            librecordio
build.xml    hadoop-ant-0.20.203.0.jar       LICENSE.txt
c++          hadoop-core-0.20.203.0.jar      logs
CHANGES.txt  hadoop-examples-0.20.203.0.jar  NOTICE.txt
conf         hadoop-test-0.20.203.0.jar      README.txt
contrib      hadoop-tools-0.20.203.0.jar     src
data1        ivy                             webapps
data2        ivy.xml                         word.txt
datalog1     lib                             word.txt~

hadoop@ubuntu:/usr/local/hadoop$ mkdir datalog2

hadoop@ubuntu:/usr/local/hadoop$ ls

bin          docs                            LICENSE.txt
build.xml    hadoop-ant-0.20.203.0.jar       logs
c++          hadoop-core-0.20.203.0.jar      NOTICE.txt
CHANGES.txt  hadoop-examples-0.20.203.0.jar  README.txt
conf         hadoop-test-0.20.203.0.jar      src
contrib      hadoop-tools-0.20.203.0.jar     webapps
data1        ivy                             word.txt
data2        ivy.xml                         word.txt~
datalog1     lib
datalog2     librecordio

The exception above had two causes. The NoClassDefFoundError itself came from a typo: I typed nanmenode instead of namenode, so Hadoop tried to run a class by that name. On top of that, the default directories configured in the XML file (see the two blog posts linked at the top for the details) no longer existed after all the deleting, so I recreated them before formatting again.
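Which XML file? In Hadoop 0.20 the namenode and datanode directories are set in conf/hdfs-site.xml. Judging from the directories the format targets below, my configuration presumably contained something along these lines (a sketch reconstructed from the session, not a copy of the actual file):

<property>
  <name>dfs.name.dir</name>
  <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/usr/local/hadoop/data1,/usr/local/hadoop/data2</value>
</property>

With the directories back in place, and the command spelled correctly this time, the format runs cleanly: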

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop namenode -format

13/09/10 16:45:02 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
Re-format filesystem in /usr/local/hadoop/datalog1 (Y or N) Y
Re-format filesystem in /usr/local/hadoop/datalog2 (Y or N) Y
13/09/10 16:45:07 INFO util.GSet: VM type       = 32-bit
13/09/10 16:45:07 INFO util.GSet: 2% max memory = 19.33375 MB
13/09/10 16:45:07 INFO util.GSet: capacity      = 2^22 = 4194304 entries
13/09/10 16:45:07 INFO util.GSet: recommended=4194304, actual=4194304
13/09/10 16:45:08 INFO namenode.FSNamesystem: fsOwner=hadoop
13/09/10 16:45:08 INFO namenode.FSNamesystem: supergroup=supergroup
13/09/10 16:45:08 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/09/10 16:45:08 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/09/10 16:45:08 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/09/10 16:45:08 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/09/10 16:45:08 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/09/10 16:45:08 INFO common.Storage: Storage directory /usr/local/hadoop/datalog1 has been successfully formatted.
13/09/10 16:45:08 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/09/10 16:45:08 INFO common.Storage: Storage directory /usr/local/hadoop/datalog2 has been successfully formatted.
13/09/10 16:45:08 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/

After formatting completes (note the uppercase Y at the two prompts; the format prompt in this version of Hadoop accepts only a capital Y), start the daemons:

hadoop@ubuntu:/usr/local/hadoop$ bin/start-all.sh

starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-ubuntu.out
127.0.0.1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-ubuntu.out
127.0.0.1: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-ubuntu.out
starting jobtracker, logging to /usr/local/hadoop/logs/hadoop-hadoop-jobtracker-ubuntu.out
127.0.0.1: starting tasktracker, logging to /usr/local/hadoop/logs/hadoop-hadoop-tasktracker-ubuntu.out

Check that all the processes started normally:

hadoop@ubuntu:/usr/local/hadoop$ jps

3317 DataNode

3593 JobTracker

3521 SecondaryNameNode

3107 NameNode

3833 TaskTracker

3872 Jps
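All five daemons are present (plus Jps itself), so the cluster is up. If one goes missing after a re-format, it is usually the DataNode: data directories left over from an earlier format keep a stale namespaceID, and the DataNode then refuses to start with an "Incompatible namespaceIDs" error, which is exactly what all the deleting above guards against. The place to look is the logs directory printed by start-all.sh; each .out file has a .log sibling with the detailed log. A sketch, using the paths from my machine:

tail -n 50 /usr/local/hadoop/logs/hadoop-hadoop-datanode-ubuntu.log

grep -i namespaceid /usr/local/hadoop/logs/hadoop-hadoop-datanode-ubuntu.log   # look for "Incompatible namespaceIDs"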

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop dfsadmin -report

Configured Capacity: 40668069888 (37.88 GB)

Present Capacity: 28074295311 (26.15 GB)

DFS Remaining: 28074246144 (26.15 GB)

DFS Used: 49167 (48.01 KB)

DFS Used%: 0%

Under replicated blocks: 1

Blocks with corrupt replicas: 0

Missing blocks: 0

-------------------------------------------------

Datanodes available: 1 (1 total, 0 dead)

Name: 127.0.0.1:50010

Decommission Status : Normal

Configured Capacity: 40668069888 (37.88 GB)

DFS Used: 49167 (48.01 KB)

Non DFS Used: 12593774577 (11.73 GB)

DFS Remaining: 28074246144 (26.15 GB)

DFS Used%: 0%

DFS Remaining%: 69.03%

Last contact: Tue Sep 10 16:46:29 PDT 2013
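The report shows one live datanode with about 26 GB free, so uploads have room. One more check worth doing before a -put, since a freshly started namenode rejects writes while it is still in safe mode:

bin/hadoop dfsadmin -safemode get   # should print: Safe mode is OFF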

hadoop@ubuntu:/usr/local/hadoop$ ls

bin          docs                            LICENSE.txt
build.xml    hadoop-ant-0.20.203.0.jar       logs
c++          hadoop-core-0.20.203.0.jar      NOTICE.txt
CHANGES.txt  hadoop-examples-0.20.203.0.jar  README.txt
conf         hadoop-test-0.20.203.0.jar      src
contrib      hadoop-tools-0.20.203.0.jar     webapps
data1        ivy                             word.txt
data2        ivy.xml                         word.txt~
datalog1     lib
datalog2     librecordio

List all files and directories on HDFS:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -lsr /

drwxr-xr-x   - hadoop supergroup          0 2013-09-10 16:45 /tmp
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 16:45 /tmp/hadoop-hadoop
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 16:45 /tmp/hadoop-hadoop/mapred
drwx------   - hadoop supergroup          0 2013-09-10 16:46 /tmp/hadoop-hadoop/mapred/system
-rw-------   2 hadoop supergroup          4 2013-09-10 16:46 /tmp/hadoop-hadoop/

Create a wordcount directory and upload the word.txt file. This time it worked; no exception or error was thrown:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -mkdir /tmp/wordcount

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -put word.txt /tmp/wordcount/

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -lsr /

drwxr-xr-x   - hadoop supergroup          0 2013-09-10 16:50 /tmp
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 16:45 /tmp/hadoop-hadoop
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 16:45 /tmp/hadoop-hadoop/mapred
drwx------   - hadoop supergroup          0 2013-09-10 16:46 /tmp/hadoop-hadoop/mapred/system
-rw-------   2 hadoop supergroup          4 2013-09-10 16:46 /tmp/hadoop-hadoop/mapred/system/jobtracker.info
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 16:50 /tmp/wordcount
-rw-r--r--   2 hadoop supergroup         83 2013-09-10 16:50 /tmp/wordcount/word.txt

View the contents of word.txt. The first attempt reported that the file does not exist, because I typed the wrong extension:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -text /tmp/wordcount/word.text

text: File does not exist: /tmp/wordcount/word.text

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -text /tmp/wordcount/word.txt

java c++ python c

java c++ javascript

helloword hadoop

mapreduce java hadoop hbase

hadoop@ubuntu:/usr/local/hadoop$

Run wordcount to see the result, using the example jar bundled with Hadoop:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar hadoop-examples-0.20.203.0.jar wordcount /tmp/wordcount/word.txt /tmp/outpu1

13/09/10 17:05:16 INFO input.FileInputFormat: Total input paths to process : 1
13/09/10 17:05:16 INFO mapred.JobClient: Running job: job_201309101645_0002
13/09/10 17:05:17 INFO mapred.JobClient:  map 0% reduce 0%
13/09/10 17:05:32 INFO mapred.JobClient:  map 100% reduce 0%
13/09/10 17:05:43 INFO mapred.JobClient:  map 100% reduce 100%
13/09/10 17:05:49 INFO mapred.JobClient: Job complete: job_201309101645_0002
13/09/10 17:05:49 INFO mapred.JobClient: Counters: 25
13/09/10 17:05:49 INFO mapred.JobClient:   Job Counters
13/09/10 17:05:49 INFO mapred.JobClient:     Launched reduce tasks=1
13/09/10 17:05:49 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=13099
13/09/10 17:05:49 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/09/10 17:05:49 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/09/10 17:05:49 INFO mapred.JobClient:     Launched map tasks=1
13/09/10 17:05:49 INFO mapred.JobClient:     Data-local map tasks=1
13/09/10 17:05:49 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=10098
13/09/10 17:05:49 INFO mapred.JobClient:   File Output Format Counters
13/09/10 17:05:49 INFO mapred.JobClient:     Bytes Written=80
13/09/10 17:05:49 INFO mapred.JobClient:   FileSystemCounters
13/09/10 17:05:49 INFO mapred.JobClient:     FILE_BYTES_READ=122
13/09/10 17:05:49 INFO mapred.JobClient:     HDFS_BYTES_READ=192
13/09/10 17:05:49 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=42599
13/09/10 17:05:49 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=80
13/09/10 17:05:49 INFO mapred.JobClient:   File Input Format Counters
13/09/10 17:05:49 INFO mapred.JobClient:     Bytes Read=83
13/09/10 17:05:49 INFO mapred.JobClient:   Map-Reduce Framework
13/09/10 17:05:49 INFO mapred.JobClient:     Reduce input groups=9
13/09/10 17:05:49 INFO mapred.JobClient:     Map output materialized bytes=122
13/09/10 17:05:49 INFO mapred.JobClient:     Combine output records=9
13/09/10 17:05:49 INFO mapred.JobClient:     Map input records=4
13/09/10 17:05:49 INFO mapred.JobClient:     Reduce shuffle bytes=122
13/09/10 17:05:49 INFO mapred.JobClient:     Reduce output records=9
13/09/10 17:05:49 INFO mapred.JobClient:     Spilled Records=18
13/09/10 17:05:49 INFO mapred.JobClient:     Map output bytes=135
13/09/10 17:05:49 INFO mapred.JobClient:     Combine input records=13
13/09/10 17:05:49 INFO mapred.JobClient:     Map output records=13
13/09/10 17:05:49 INFO mapred.JobClient:     SPLIT_RAW_BYTES=109
13/09/10 17:05:49 INFO mapred.JobClient:     Reduce input records=9

hadoop@ubuntu:/usr/local/hadoop$

Check the result in a browser: open http://localhost:50070, go back to the root directory, browse to the output directory, and click part-r-00000.

[Screenshot of the result omitted.]

Viewing the result from the command line:

hadoop@ubuntu:/usr/local/hadoop$

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -lsr /

drwxr-xr-x   - hadoop supergroup          0 2013-09-10 17:05 /tmp
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 16:45 /tmp/hadoop-hadoop
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 17:04 /tmp/hadoop-hadoop/mapred
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 17:04 /tmp/hadoop-hadoop/mapred/staging
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 17:04 /tmp/hadoop-hadoop/mapred/staging/hadoop
drwx------   - hadoop supergroup          0 2013-09-10 17:05 /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging
drwx------   - hadoop supergroup          0 2013-09-10 17:04 /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201309101645_0001
-rw-r--r--  10 hadoop supergroup     142469 2013-09-10 17:04 /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201309101645_0001/job.jar
drwx------   - hadoop supergroup          0 2013-09-10 17:05 /tmp/hadoop-hadoop/mapred/system
-rw-------   2 hadoop supergroup          4 2013-09-10 16:46 /tmp/hadoop-hadoop/mapred/system/jobtracker.info
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 17:05 /tmp/outpu1
-rw-r--r--   2 hadoop supergroup          0 2013-09-10 17:05 /tmp/outpu1/_SUCCESS
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 17:05 /tmp/outpu1/_logs
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 17:05 /tmp/outpu1/_logs/history
-rw-r--r--   2 hadoop supergroup      10396 2013-09-10 17:05 /tmp/outpu1/_logs/history/job_201309101645_0002_1378857916792_hadoop_word+count
-rw-r--r--   2 hadoop supergroup      19969 2013-09-10 17:05 /tmp/outpu1/_logs/history/job_201309101645_0002_conf.xml
-rw-r--r--   2 hadoop supergroup         80 2013-09-10 17:05 /tmp/outpu1/part-r-00000
drwxr-xr-x   - hadoop supergroup          0 2013-09-10 16:50 /tmp/wordcount
-rw-r--r--   2 hadoop supergroup         83 2013-09-10 16:50 /tmp/wordcount/word.txt

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -text /tmp/output1/part-r-00000

text: File does not exist: /tmp/output1/part-r-00000

// I typed the directory name wrong here: it is outpu1, not output1

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -cat /tmp/outpu1/part-r-00000

c 1

c++ 2

hadoop 2

hbase 1

helloword 1

java 3

javascript 1

mapreduce 1

python 1

hadoop@ubuntu:/usr/local/hadoop$
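To wrap up, the whole successful path condensed from the session above (the same commands, with the typos fixed; /tmp/outpu1 keeps the directory name actually used in this post):

bin/hadoop namenode -format   # answer Y, uppercase, at each prompt

bin/start-all.sh

jps   # expect NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker

bin/hadoop fs -mkdir /tmp/wordcount

bin/hadoop fs -put word.txt /tmp/wordcount/

bin/hadoop jar hadoop-examples-0.20.203.0.jar wordcount /tmp/wordcount/word.txt /tmp/outpu1

bin/hadoop fs -cat /tmp/outpu1/part-r-00000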



Note:

The example jar bundled with Hadoop 0.20.203 seems to be problematic (the jar is not packaged completely). Download a replacement yourself; it can be found online.

