Mounting HDFS to a Local Directory via NFSv3 -- Part 2: Troubleshooting hdfs-nfs Gateway Errors
2018-11-24 08:22:19 | Views: 128
Tags: NFSv3, mount, HDFS, local directory, hdfs-nfs gateway, troubleshooting, errors



4.8 Summary - Main Command List

The basic sequence is as follows:
cd /home/hdfs/hadoop-2.7.1
sbin/stop-dfs.sh
service nfs stop
service rpcbind stop
sbin/hadoop-daemon.sh --script /home/hdfs/hadoop-2.7.1/bin/hdfs start portmap
sbin/hadoop-daemon.sh --script /home/hdfs/hadoop-2.7.1/bin/hdfs start nfs3
rpcinfo -p ip-172-30-0-129
showmount -e ip-172-30-0-129
sbin/start-dfs.sh
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync ip-172-30-0-129:/ /mnt/hdfs
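Before the final mount in the sequence above, it is worth confirming that the gateway's mountd and nfs programs are actually registered with the portmapper. A minimal sketch of that check, parsing `rpcinfo -p` output; the sample output below is canned for illustration (a live check would use out=$(rpcinfo -p ip-172-30-0-129)):

```shell
# has_rpc_program: return success if the named RPC program (last column of
# `rpcinfo -p` output) appears in the given text.
has_rpc_program() {
    printf '%s\n' "$1" | awk -v prog="$2" '$NF == prog { found = 1 } END { exit !found }'
}

# Canned rpcinfo output; port numbers are illustrative.
out="   program vers proto   port  service
    100005    3   tcp   4242  mountd
    100003    3   tcp   2049  nfs"

if has_rpc_program "$out" nfs && has_rpc_program "$out" mountd; then
    status="gateway registered"
else
    status="gateway missing"
fi
echo "$status"
```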




4.9 Main Error List
Error 1:
[root@ip-172-30-0-129 hadoop-2.7.1]# mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync ip-172-30-0-129:/ /mnt/hdfs
mount.nfs: mounting ip-172-30-0-129:/ failed, reason given by server: No such file or directory
Cause: the HDFS service was not started.
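A quick way to confirm this cause is to check whether the NameNode process is running on the gateway host, e.g. by looking at `jps` output. A small sketch; the jps output below is canned (a live check would use jps_out=$(jps)):

```shell
# is_namenode_up: return success if a NameNode process appears in jps output.
is_namenode_up() {
    printf '%s\n' "$1" | grep -q 'NameNode'
}

# Canned output showing only the gateway daemons running, no NameNode.
jps_out="12345 Nfs3
12346 Portmap
12347 Jps"

if is_namenode_up "$jps_out"; then
    advice="HDFS is up"
else
    advice="start HDFS first: sbin/start-dfs.sh"
fi
echo "$advice"
```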


Error 2:
[root@ip-172-30-0-129 hadoop-2.7.1]# mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync ip-172-30-0-129:/ /mnt/hdfs
mount.nfs: mount system call failed
Here portmap and the NFS gateway may both have started successfully, but the gateway cannot connect to the HDFS service; likely causes are a permissions problem or a configuration error.
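When the mount system call fails like this, the gateway log usually names the actual cause. A sketch that scans log text for ERROR lines; the sample line is taken from the article's error 5 below, and a live check would read the real log (something like logs/hadoop-*-nfs3-*.log, path depends on the install):

```shell
# scan_errors: print any ERROR lines from the given log text.
scan_errors() {
    printf '%s\n' "$1" | grep 'ERROR'
}

# Canned log line for illustration.
log_text="2016-01-23 04:27:16,788 ERROR org.apache.hadoop.hdfs.nfs.mount.RpcProgramMountd: Can't get handle for export:/"

errors=$(scan_errors "$log_text")
echo "$errors"
```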


Error 3:
[root@ip-172-30-0-129 hadoop-2.7.1]# mount -t nfs -o proto=tcp,nolock,noacl,sync 172.30.0.129:/ /mnt/hdfs
mount.nfs: access denied by server while mounting 172.30.0.129:/
Check the configuration: the properties are split across two XML files, so make sure each property is placed in the correct file.
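A simple way to catch this misplacement is to grep each file for the keys it is supposed to hold (proxyuser keys in core-site.xml, nfs.* keys in hdfs-site.xml, as in section 4.10 below). A sketch using a minimal sample file created in /tmp for illustration; the real files live under the Hadoop install's configuration directory:

```shell
# Create a minimal sample core-site.xml for illustration only.
cat > /tmp/core-site.sample.xml <<'EOF'
<configuration>
  <property><name>hadoop.proxyuser.nfsserver.hosts</name><value>*</value></property>
</configuration>
EOF

# has_key: return success if the named property appears in the given file.
has_key() {
    grep -q "<name>$2</name>" "$1"
}

if has_key /tmp/core-site.sample.xml hadoop.proxyuser.nfsserver.hosts; then
    check="proxyuser key is in core-site.xml"
else
    check="proxyuser key missing"
fi
echo "$check"
```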


Error 4:
[root@ip-172-30-0-129 hadoop-2.7.1]# showmount -e 172.30.0.129
clnt_create: RPC: Program not registered
This usually means the gateway's mountd program is not registered with the portmapper: either the Hadoop portmap and nfs3 daemons were not started (in that order), or the system rpcbind/nfs services are still running and conflicting with them.


Error 5:
If you check hadoop-***-nfs3-***.log and see a message like:
2016-01-23 04:27:16,788 ERROR org.apache.hadoop.hdfs.nfs.mount.RpcProgramMountd: Can't get handle for export:/
java.net.ConnectException: Call From ip-172-30-0-129/172.30.0.129 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
This indicates that the NFS gateway should be started as the user set in the configuration (not root).
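A sketch of that fix as a pre-start check: compare the current user with the user named in the proxyuser configuration before launching nfs3. Both values are canned here for illustration ('hdfs' is an assumed user name; a live check would use current_user=$(id -un)):

```shell
# Assumed values; replace with your configured proxy user and a live lookup.
expected_user=hdfs
current_user=root

if [ "$current_user" != "$expected_user" ]; then
    hint="start nfs3 as $expected_user, not $current_user"
else
    hint="user ok"
fi
echo "$hint"
```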


Other things to check:
a.) SELinux should be disabled: sestatus -v
b.) Firewall: make sure the relevant ports are open: service iptables status
c.) NFS configuration in /etc/sysconfig/nfs: check that versions 3 and 4 are allowed
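Check a) can be scripted by parsing `sestatus` output. A sketch; the output below is canned (a live check would use sestatus_out=$(sestatus -v)):

```shell
# selinux_disabled: return success if sestatus output reports "disabled".
selinux_disabled() {
    printf '%s\n' "$1" | grep -q '^SELinux status:[[:space:]]*disabled'
}

# Canned sestatus output line for illustration.
sestatus_out="SELinux status:                 disabled"

if selinux_disabled "$sestatus_out"; then
    sel="selinux off"
else
    sel="selinux on - disable it before mounting"
fi
echo "$sel"
```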




4.10 Minimal Configuration
In core-site.xml:
<property>
<name>hadoop.proxyuser.nfsserver.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.nfsserver.hosts</name>
<value>*</value>
</property>

In hdfs-site.xml:
<property>
<name>nfs.dump.dir</name>
<value>/tmp/.hdfs-nfs</value>
</property>
<property>
<name>nfs.rtmax</name>
<value>1048576</value>
<description>This is the maximum size in bytes of a READ request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's rsize(add rsize= # of bytes to the mount directive).</description>
</property>
<property>
<name>nfs.wtmax</name>
<value>65536</value>
<description>This is the maximum size in bytes of a WRITE request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's wsize(add wsize= # of bytes to the mount directive).</description>
</property>
<property>
<name>nfs.exports.allowed.hosts</name>
<value>* rw</value>
<description>Allow all hosts read-write access to the files</description>
</property>
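As the property descriptions above note, if nfs.rtmax or nfs.wtmax are changed, the mount command should pass matching rsize and wsize options. A small helper to build the option string, with values mirroring the configuration above:

```shell
# build_mount_opts: assemble NFSv3 mount options with explicit rsize/wsize.
build_mount_opts() {
    echo "vers=3,proto=tcp,nolock,noacl,sync,rsize=$1,wsize=$2"
}

opts=$(build_mount_opts 1048576 65536)
echo "$opts"

# The resulting mount command would then be:
# mount -t nfs -o "$opts" ip-172-30-0-129:/ /mnt/hdfs
```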
