Hadoop 2.8.1: error when running the HDFS command appendToFile
2019-03-27 12:13:27

The error is as follows:
WARN hdfs.DataStreamer: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.33.102:50010,DS-e11bb105-fbb8-4b44-8759-cac614ddceb0,DISK], DatanodeInfoWithStorage[192.168.33.103:50010,DS-721ca7d3-d6a2-4a95-8156-dcbc3bbf7316,DISK]], original=[DatanodeInfoWithStorage[192.168.33.103:50010,DS-721ca7d3-d6a2-4a95-8156-dcbc3bbf7316,DISK], DatanodeInfoWithStorage[192.168.33.102:50010,DS-e11bb105-fbb8-4b44-8759-cac614ddceb0,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1228)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1298)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1473)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1387)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:662)

Solution:
This error typically shows up on small clusters: under the DEFAULT replacement policy, when a datanode in the write pipeline fails during an append, the client tries to swap in a replacement datanode, and if the cluster has no spare datanodes left (for example, only two or three nodes in total), the append fails. Edit hdfs-site.xml and add or modify the following two properties:
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>never</value>
</property>
After adding or modifying these properties, retry the command.
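Since both of these are client-side settings, an alternative worth knowing (a sketch with hypothetical file paths, not from the original post) is to pass them per-invocation as generic `-D` options, which avoids editing hdfs-site.xml at all:

```shell
# Override the client-side replacement policy just for this append.
# /tmp/local.txt and /user/hadoop/target.txt are placeholder paths.
hdfs dfs \
  -D dfs.client.block.write.replace-datanode-on-failure.enable=true \
  -D dfs.client.block.write.replace-datanode-on-failure.policy=never \
  -appendToFile /tmp/local.txt /user/hadoop/target.txt
```

This only changes behavior for the single command, which can be preferable to disabling pipeline recovery cluster-wide.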
