HDFS Programming: Common HDFS Shell Command-Line Operations
Published: 2019-02-18 00:19:45

In the /usr/local/hadoop/etc/hadoop directory:

Help commands

1. hdfs dfs, run with no options, lists the common HDFS commands

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs 
Usage: hadoop fs [generic options]
	[-appendToFile <localsrc> ... <dst>]
	[-cat [-ignoreCrc] <src> ...]
	[-checksum <src> ...]
	[-chgrp [-R] GROUP PATH...]
	[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
	[-chown [-R] [OWNER][:[GROUP]] PATH...]
	[-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
	[-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
	[-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
	[-createSnapshot <snapshotDir> [<snapshotName>]]
	[-deleteSnapshot <snapshotDir> <snapshotName>]
	[-df [-h] [<path> ...]]
	[-du [-s] [-h] [-v] [-x] <path> ...]
	[-expunge]
	[-find <path> ... <expression> ...]
	[-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-getfacl [-R] <path>]
	[-getfattr [-R] {-n name | -d} [-e en] <path>]
	[-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
	[-help [cmd ...]]
	[-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
	[-mkdir [-p] <path> ...]
	[-moveFromLocal <localsrc> ... <dst>]
	[-moveToLocal <src> <localdst>]
	[-mv <src> ... <dst>]
	[-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
	[-renameSnapshot <snapshotDir> <oldName> <newName>]
	[-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
	[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
	[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
	[-setfattr {-n name [-v value] | -x name} <path>]
	[-setrep [-R] [-w] <rep> <path> ...]
	[-stat [format] <path> ...]
	[-tail [-f] <file>]
	[-test -[defsz] <path>]
	[-text [-ignoreCrc] <src> ...]
	[-touchz <path> ...]
	[-truncate [-w] <length> <path> ...]
	[-usage [cmd ...]]

Generic options supported are:
-conf <configuration file>        specify an application configuration file
-D <property=value>               define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>  specify a ResourceManager
-files <file1,...>                specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>               specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>          specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

2. usage shows the usage of a command; for example, the usage of ls:

hdfs dfs -usage ls

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -usage ls
Usage: hadoop fs [generic options] -ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]

3. help shows the detailed help for a command; for example, the detailed help for ls:

hdfs dfs -help ls

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -help ls
-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...] :
  List the contents that match the specified file pattern. If path is not
  specified, the contents of /user/<currentUser> will be listed. For a directory a
  list of its direct children is returned (unless -d option is specified).
  
  Directory entries are of the form:
  	permissions - userId groupId sizeOfDirectory(in bytes)
  modificationDate(yyyy-MM-dd HH:mm) directoryName
  
  and file entries are of the form:
  	permissions numberOfReplicas userId groupId sizeOfFile(in bytes)
  modificationDate(yyyy-MM-dd HH:mm) fileName
  
    -C  Display the paths of files and directories only.
    -d  Directories are listed as plain files.
    -h  Formats the sizes of files in a human-readable fashion
        rather than a number of bytes.
    -q  Print ? instead of non-printable characters.
    -R  Recursively list the contents of directories.
    -t  Sort files by modification time (most recent first).
    -S  Sort files by size.
    -r  Reverse the order of the sort.
    -u  Use time of last access instead of modification for
        display and sorting.
    -e  Display the erasure coding policy of files and directories.

Viewing commands

4. ls lists files or directories

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -ls /
Found 21 items
-rw-r--r--   2 hadoop supergroup         39 2018-03-21 17:17 /aaaa
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 17:24 /ffk
-rw-r--r--   2 hadoop supergroup        312 2018-03-21 17:10 /hhh
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 09:31 /hltest
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 10:09 /hltestout
drwxr-xr-x   - hadoop supergroup          0 2018-03-21 17:15 /home
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:01 /input
drwxr-xr-x   - hadoop supergroup          0 2018-03-08 22:26 /ning
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:17 /output
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:22 /output2
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:24 /output3
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:26 /output4
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:27 /output5
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 18:34 /result
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 21:51 /test
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 20:48 /testxuning
drwx------   - hadoop supergroup          0 2018-03-07 19:00 /tmp
drwxr-xr-x   - hadoop supergroup          0 2018-03-09 21:04 /xu
drwxr-xr-x   - hadoop supergroup          0 2018-03-10 21:45 /xun
drwxr-xr-x   - hadoop supergroup          0 2018-03-10 21:50 /xunin
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 18:33 /xuning

The -R option also lists the contents of subdirectories recursively: hdfs dfs -ls -R /

5. cat displays the contents of a file

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -cat /aaaa
/home/hadoop/eclipse/committers-oxygen

File and directory commands

6. touchz creates an empty (zero-length) file; if a non-empty file with the specified name already exists, an error is returned

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -touchz /mal
hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -ls /
Found 22 items
-rw-r--r--   2 hadoop supergroup         39 2018-03-21 17:17 /aaaa
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 17:24 /ffk
-rw-r--r--   2 hadoop supergroup        312 2018-03-21 17:10 /hhh
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 09:31 /hltest
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 10:09 /hltestout
drwxr-xr-x   - hadoop supergroup          0 2018-03-21 17:15 /home
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:01 /input
-rw-r--r--   2 hadoop supergroup          0 2018-03-21 19:38 /mal

7. appendToFile appends the contents of a local file to an existing HDFS file

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -cat /mal
hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -appendToFile /home/hadoop/桌面/aaaa /mal
hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -cat /mal
/home/hadoop/eclipse/committers-oxygen

8. put uploads files from the local file system to HDFS

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -put /home/hadoop/桌面/325 /shiyan
hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -ls /
Found 23 items
-rw-r--r--   2 hadoop supergroup         39 2018-03-21 17:17 /aaaa
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 17:24 /ffk
-rw-r--r--   2 hadoop supergroup        312 2018-03-21 17:10 /hhh
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 09:31 /hltest
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 10:09 /hltestout
drwxr-xr-x   - hadoop supergroup          0 2018-03-21 17:15 /home
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:01 /input
-rw-r--r--   2 hadoop supergroup         39 2018-03-25 15:38 /mal
drwxr-xr-x   - hadoop supergroup          0 2018-03-08 22:26 /ning
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:17 /output
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:22 /output2
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:24 /output3
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:26 /output4
drwxr-xr-x   - hadoop supergroup          0 2018-03-07 15:27 /output5
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 18:34 /result
-rw-r--r--   2 hadoop supergroup         39 2018-03-25 15:51 /shiyan
hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -cat /shiyan
/home/hadoop/eclipse/committers-oxygen

The -f option: overwrite the destination file if it already exists

The -p option: preserve the source file's access and modification times, owner, group, and permissions
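A brief sketch of these two options, reusing the local file and the /shiyan destination from the transcript above (this assumes both still exist):

```shell
# -f: overwrite the existing HDFS file /shiyan
hdfs dfs -put -f /home/hadoop/桌面/325 /shiyan
# -p: additionally preserve the local file's timestamps, owner, group, and permissions
hdfs dfs -put -f -p /home/hadoop/桌面/325 /shiyan
```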

9. get downloads files from HDFS to the local file system; as the usage listing above shows, its -f option overwrites an existing local file

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -get /shiyan ~
hadoop@master:/usr/local/hadoop/etc/hadoop$ cd
hadoop@master:~$ ls
cqq                examples.desktop  hl               wordcount  模板  未命名文件夹  音乐
eclipse            fk2               mapred-site.xml  xuning     视频  文档          桌面
eclipse-workspace  hadoop            shiyan           公共的     图片  下载

10. getmerge merges the files under a specified HDFS directory into a single file and downloads it to the local file system; the source files are retained

The -nl option: append a newline after each file
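A minimal sketch, assuming an HDFS directory /hltest containing small text files (it appears in the listings above):

```shell
# Merge every file under /hltest into one local file; the HDFS sources are untouched
hdfs dfs -getmerge -nl /hltest ~/hltest-merged.txt
cat ~/hltest-merged.txt
```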

11. copyFromLocal

Uploads files from the local file system to HDFS; behaves the same as put

12. copyToLocal

Downloads files from HDFS to the local file system; behaves the same as get

13. moveFromLocal

Behaves the same as put, except the local file is deleted after a successful upload
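The three commands can be contrasted in one hedged sketch (~/notes.txt is a hypothetical local file used only for illustration):

```shell
hdfs dfs -copyFromLocal ~/notes.txt /notes.txt      # same effect as -put
hdfs dfs -copyToLocal /notes.txt ~/notes.copy.txt   # same effect as -get
hdfs dfs -moveFromLocal ~/notes.txt /notes2.txt     # uploads, then deletes ~/notes.txt
```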

14. cp copies files

The -f option: overwrite the destination file if it already exists
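A brief sketch using the /aaaa file from the listings above (/aaaa2 is a hypothetical destination):

```shell
hdfs dfs -cp /aaaa /aaaa2      # fails if /aaaa2 already exists
hdfs dfs -cp -f /aaaa /aaaa2   # -f silently overwrites /aaaa2
```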

15. mkdir creates directories

The -p option: recursively create any missing parent directories
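A short sketch with a hypothetical path /demo:

```shell
hdfs dfs -mkdir /demo            # the parent directory must already exist
hdfs dfs -mkdir -p /demo/a/b/c   # -p creates the missing intermediate directories
```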

16. rm deletes files

The -r option: recursive deletion; can delete non-empty directories
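A short sketch (paths are hypothetical):

```shell
hdfs dfs -rm /demo/a/b/c/file.txt   # delete a single file
hdfs dfs -rm -r /demo               # -r removes the directory and everything in it
```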

17. rmdir deletes empty directories

The --ignore-fail-on-non-empty option: suppress the error message when the directory to delete is not empty

(With this option, using the command on a non-empty directory prints no error message, but the directory is not actually deleted.)
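A sketch of both behaviours (/empty is a hypothetical path; /hltest is the non-empty directory from the listings above):

```shell
hdfs dfs -mkdir /empty
hdfs dfs -rmdir /empty                                # succeeds: the directory is empty
hdfs dfs -rmdir --ignore-fail-on-non-empty /hltest    # prints no error, but /hltest survives
```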

18. setrep changes the replication factor of a file

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -setrep 2 /aaaa
Replication 2 set: /aaaa

The -w option: the command waits until the replication adjustment has completed

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -setrep -w 2 /aaaa
Replication 2 set: /aaaa
Waiting for /aaaa ... done

19. expunge empties the trash
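A sketch of how expunge pairs with trash-enabled deletes (this assumes fs.trash.interval is set to a positive value on the cluster; otherwise -rm deletes immediately):

```shell
hdfs dfs -rm /shiyan   # with trash enabled, the file is moved into .Trash rather than deleted
hdfs dfs -expunge      # permanently removes trash contents older than the retention interval
```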

20. chgrp changes the group of a file

[hadoop@localhost hadoop-2.5.2]$ hdfs dfs -chgrp test /output
[hadoop@localhost hadoop-2.5.2]$ hdfs dfs -ls -R /
drwxr-xr-x   - hadoop supergroup          0 2015-03-27 19:19 /input
-rw-r--r--   1 hadoop supergroup         14 2015-03-27 19:19 /input/input1.txt
-rw-r--r--   1 hadoop supergroup         32 2015-03-27 19:19 /input/input2.txt
-rw-r--r--   1 hadoop supergroup          0 2015-04-02 08:43 /input.zip
-rw-r--r--   1 hadoop supergroup        184 2015-03-31 08:14 /input1.zip
drwxr-xr-x   - hadoop test                0 2015-04-02 08:34 /output                     -- group changed (the test group was never created, yet the command succeeds)
-rwxrwxrwx   1 hadoop hadoops            28 2015-03-31 08:59 /output/input1.txt    -- the group of files under the directory is unchanged

The -R option: recursive; for a directory, recursively changes the files and subdirectories beneath it

21. chmod changes file permissions

Background on file permissions:

d: indicates a directory (a directory is itself a special kind of file)

-: indicates a regular file

r (read): read the file's contents; list the directory's contents

w (write): create or modify the file's contents; delete or move files within the directory

x (execute): execute the file; enter the directory

-: the corresponding permission is not granted

Read each rwx triad as binary digits: 1 where the permission is present, 0 where it is absent. For example, rwx r-x r-- becomes 111 101 100, and converting each group of three bits gives 754.
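The conversion can be sketched as a small helper function (perm_to_octal is a name invented here for illustration):

```shell
# Convert a 9-character symbolic permission string such as "rwxr-xr--"
# to its octal form: within each triad, r -> 4, w -> 2, x -> 1, "-" -> 0.
perm_to_octal() {
  sym=$1
  octal=""
  for start in 1 4 7; do
    triad=$(printf '%s' "$sym" | cut -c"$start"-"$((start + 2))")
    value=0
    case $triad in r??) value=$((value + 4)) ;; esac
    case $triad in ?w?) value=$((value + 2)) ;; esac
    case $triad in ??x) value=$((value + 1)) ;; esac
    octal="$octal$value"
  done
  printf '%s\n' "$octal"
}

perm_to_octal "rwxr-xr--"   # prints 754
```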

-rwx------: numeric form 700
-rwxr--r--: numeric form 744
-rw-rw-r-x: numeric form 665
drwx--x--x: numeric form 711

drwx------: numeric form 700

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -ls /
Found 23 items
-rw-r--r--   2 hadoop supergroup         39 2018-03-21 17:17 /aaaa
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 17:24 /ffk
-rw-r--r--   2 hadoop supergroup        312 2018-03-21 17:10 /hhh
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 09:31 /hltest
hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -chmod 754 /hltest
hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -ls /
Found 23 items
-rw-r--r--   2 hadoop supergroup         39 2018-03-21 17:17 /aaaa
drwxr-xr-x   - hadoop supergroup          0 2018-03-11 17:24 /ffk
-rw-r--r--   2 hadoop supergroup        312 2018-03-21 17:10 /hhh
drwxr-xr--   - hadoop supergroup          0 2018-03-11 09:31 /hltest

22. getfacl displays access control lists (ACLs)

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -getfacl /hltest
# file: /hltest
# owner: hadoop
# group: supergroup
user::rwx
group::r-x
other::r--

The -R option: display recursively

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -getfacl -R /hltest
# file: /hltest
# owner: hadoop
# group: supergroup
user::rwx
group::r-x
other::r--

# file: /hltest/file1
# owner: hadoop
# group: supergroup
user::rw-
group::r--
other::r--

# file: /hltest/file2
# owner: hadoop
# group: supergroup
user::rw-
group::r--
other::r--

23. setfacl sets access control lists
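A hedged sketch (the user alice is hypothetical; ACL operations require dfs.namenode.acls.enabled=true on the NameNode):

```shell
hdfs dfs -setfacl -m user:alice:rwx /hltest   # -m adds or modifies an ACL entry
hdfs dfs -getfacl /hltest                     # the new user:alice:rwx entry should appear
hdfs dfs -setfacl -x user:alice /hltest       # -x removes the named entry
hdfs dfs -setfacl -b /hltest                  # -b strips all extended ACL entries
```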

Statistics commands

24. count displays, for the specified file or directory, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, and PATHNAME: the number of subdirectories (including the directory itself when the path is a directory), the number of files, the number of bytes used, and the file or directory name.

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -count /
          75         2008           37952956 /

25. du displays file sizes; if a directory is specified, the size of each file and subdirectory in it is listed (the second column is the space consumed across all replicas)

39        78        /aaaa
0         0         /ffk
312       624       /hhh
140       280       /hltest
0         0         /hltestout
34929793  69859586  /home
87        174       /input
39        78        /mal
207       414       /ning
93        186       /output
0         0         /output2
93        186       /output3
0         0         /output4
93        186       /output5
2004      4008      /result
39        78        /shiyan
2314      4628      /test
96        192       /testxuning
3017342   6244108   /tmp
92        184       /xu
85        170       /xun
0         0         /xunin
88        176       /xuning

26. df shows the disk space usage of the file system

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -df /
Filesystem                  Size       Used     Available  Use%
hdfs://master:9000  820654354432  103129088  753473880064    0%

27. stat displays file statistics

Format specifiers: %b - file size in bytes; %g - group name of the owner; %n - file name; %o - block size; %r - replication factor; %u - user name of the owner; %y - modification time

hadoop@master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -stat %b,%g,%n,%o,%r,,%u,%y /hltest
0,supergroup,hltest,0,0,,hadoop,2018-03-11 01:31:47

Essentially all of the commands above were actually tried out; a few rarely used commands are not listed. This is intended as a personal study reference.
