// Timeouts for communicating with DataNode for streaming writes/reads
public static int READ_TIMEOUT = 60 * 1000; // this is the value that was exceeded
public static int READ_TIMEOUT_EXTENSION = 3 * 1000;
public static int WRITE_TIMEOUT = 8 * 60 * 1000;
public static int WRITE_TIMEOUT_EXTENSION = 5 * 1000; //for write pipeline
Log:
11/10/12 10:50:44 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_8540857362443890085_4343699470 java.net.SocketTimeoutException: 66000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/172.*.*.*:14707 remote=/*.*.*.24:80010]
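The 66000 ms in the exception is not arbitrary: it looks like the base READ_TIMEOUT plus one READ_TIMEOUT_EXTENSION per datanode in the write pipeline. A rough sketch of that arithmetic, assuming a two-datanode pipeline (the pipeline length is not shown in the log, so this is a guess):

// Hypothetical back-of-the-envelope check, not part of the original post.
// Assumes the client scales the read timeout by the number of datanodes
// in the write pipeline; 2 is an assumed value for this particular log line.
int numDataNodesInPipeline = 2;
int effectiveReadTimeout = READ_TIMEOUT                            // 60 * 1000
        + READ_TIMEOUT_EXTENSION * numDataNodesInPipeline;         // + 3 * 1000 * 2
// effectiveReadTimeout == 66000, matching the "66000 millis timeout" in the log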
Cause and fix:
The error turned out to be caused by this timeout, so add the following configuration to hadoop-site.xml:
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>3000000</value>
</property>
<property>
  <name>dfs.socket.timeout</name>
  <value>3000000</value>
</property>
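If you would rather raise the timeouts for a single client than edit the cluster configuration, the same keys can also be set programmatically on a Configuration object before the FileSystem is created. A minimal sketch, using the same 3000000 ms values as above (TimeoutConfigExample is just an illustrative class name, not part of Hadoop):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class TimeoutConfigExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Same keys and values (in milliseconds) as the hadoop-site.xml snippet above.
        conf.setInt("dfs.socket.timeout", 3000000);
        conf.setInt("dfs.datanode.socket.write.timeout", 3000000);
        // Any DFS client created from this conf will use the longer timeouts.
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Connected to " + fs.getUri());
    }
}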
Workaround 1: Start from scratch
I can testify that the following steps solve this error, but the side effects won't make you happy (me neither). The crude workaround I have found is to:
1. stop the cluster
2. delete the data directory on the problematic datanode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
3. reformat the namenode (NOTE: all HDFS data is lost during this process!)
4. restart the cluster
If deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be ok during the initial setup/testing), you might give the second approach a try.
Workaround 2: Updating namespaceID of problematic datanodes
Big thanks to Jared Stehler for the following suggestion. I have not tested it myself yet, but feel free to try it out and send me your feedback. This workaround is "minimally invasive" as you only have to edit one file on the problematic datanodes:
1. stop the datanode
2. edit the value of namespaceID in <dfs.data.dir>/current/VERSION to match the value of the current namenode (see the sketch at the end of this section)
3. restart the datanode
If you followed the instructions in my tutorials, the full path of the relevant file is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION (background: dfs.data.dir is by default set to ${hadoop.tmp.dir}/dfs/data, and we set hadoop.tmp.dir to /usr/local/hadoop-datastore/hadoop-hadoop).
If you wonder what the contents of VERSION look like, here's one of mine:
#contents of <dfs.data.dir>/current/VERSION
namespaceID=393514426
storageID=DS-1706792599-10.10.10.1-50010-1204306713481
cTime=1215607609074
storageType=DATA_NODE
layoutVersion=-13
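For completeness, here is a minimal sketch that automates step 2 of Workaround 2. FixNamespaceId is a hypothetical helper, not an official Hadoop tool; it assumes a reasonably recent JDK on the datanode, and the datanode must still be stopped before running it (step 1) and restarted afterwards (step 3):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class FixNamespaceId {
    public static void main(String[] args) throws IOException {
        // Both arguments are placeholders: pass the real path of the datanode's
        // VERSION file and the namespaceID reported by the current namenode.
        Path versionFile = Paths.get(args[0]);  // e.g. <dfs.data.dir>/current/VERSION
        String namenodeNamespaceId = args[1];   // e.g. 393514426

        // Rewrite only the namespaceID line; all other lines are kept untouched.
        List<String> patched = Files.readAllLines(versionFile, StandardCharsets.UTF_8)
                .stream()
                .map(line -> line.startsWith("namespaceID=")
                        ? "namespaceID=" + namenodeNamespaceId
                        : line)
                .collect(Collectors.toList());

        Files.write(versionFile, patched, StandardCharsets.UTF_8);
    }
}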