
hdfs dfs -put fails with "There are 0 datanode(s) running and no node(s) are excluded in this operation"
2018-11-13 14:11:56

Copyright notice: this is the author's original article; do not reproduce without permission. Original: https://blog.csdn.net/u010719917/article/details/73807147
Error: "There are 0 datanode(s) running and no node(s) are excluded in this operation."

$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
$ bin/hdfs dfs -put etc/hadoop input

The last command fails with the following error:
17/06/28 00:00:36 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/input/hadoop/capacity-scheduler.xml._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)


Checking the DataNode log with `vi /usr/src/hadoop/logs/hadoop-root-datanode-centos128.log` shows incompatible clusterIDs. The cause was a NameNode reformat, so the stale DataNode files had to be removed:
2017-06-28 00:48:21,085 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /tmp/hadoop-root/dfs/data/in_use.lock acquired by nodename 2533@centos128
2017-06-28 00:48:21,089 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/tmp/hadoop-root/dfs/data/
java.io.IOException: Incompatible clusterIDs in /tmp/hadoop-root/dfs/data: namenode clusterID = CID-00699a28-9ec0-4301-ae92-5a04cf302565; datanode clusterID = CID-3c15bdd3-4886-41ab-a3a8-d899d0b607b9
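An alternative to wiping the data directory is to copy the NameNode's clusterID into the DataNode's `VERSION` file so the two match again. The sketch below only demonstrates the edit on mock `VERSION` files in a temporary sandbox; on a real node the files would live at `<hadoop.tmp.dir>/dfs/name/current/VERSION` and `<hadoop.tmp.dir>/dfs/data/current/VERSION` (stop DFS first):

```shell
# Sandbox with mock VERSION files carrying the two clusterIDs from the log above
WORK=$(mktemp -d)
printf 'clusterID=CID-00699a28-9ec0-4301-ae92-5a04cf302565\n' > "$WORK/nn_VERSION"
printf 'clusterID=CID-3c15bdd3-4886-41ab-a3a8-d899d0b607b9\n' > "$WORK/dn_VERSION"

# Read the NameNode's clusterID and write it into the DataNode's VERSION file
CID=$(grep '^clusterID=' "$WORK/nn_VERSION" | cut -d= -f2)
sed -i "s/^clusterID=.*/clusterID=${CID}/" "$WORK/dn_VERSION"

grep '^clusterID=' "$WORK/dn_VERSION"   # now matches the NameNode's ID
```

This keeps existing block data, whereas deleting the directory (as done below) loses everything stored in HDFS.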


Stop DFS, then clear the related files under /tmp:

[root@centos128 hadoop]# sbin/stop-dfs.sh
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode



[root@centos128 hadoop]# ll /tmp
total 0
drwxr-xr-x. 3 root root 20 Jun 27 23:46 hadoop
drwxr-xr-x. 4 root root 31 Jun 27 23:46 hadoop-root
drwxr-xr-x. 2 root root 6 Jun 28 00:57 hsperfdata_root
drwxr-xr-x. 3 root root 21 Jun 28 00:48 Jetty_0_0_0_0_50070_hdfs____w2cu08
drwxr-xr-x. 3 root root 21 Jun 28 00:48 Jetty_0_0_0_0_50090_secondary____y6aanv
drwxr-xr-x. 3 root root 21 Jun 27 23:01 Jetty_localhost_33451_datanode____ihzion
drwxr-xr-x. 3 root root 21 Jun 28 00:19 Jetty_localhost_34384_datanode____gijfbp
drwxr-xr-x. 3 root root 21 Jun 28 00:48 Jetty_localhost_38324_datanode____wwolqv
drwxr-xr-x. 3 root root 21 Jun 27 23:49 Jetty_localhost_40770_datanode____.cx9tc5
drwxr-xr-x. 3 root root 21 Jun 27 23:59 Jetty_localhost_43407_datanode____.hgiwfx
drwxr-xr-x. 3 root root 16 Jun 9 06:34 pip-build-hv2v27iz
drwx------. 3 root root 17 Jun 28 00:27 systemd-private-fc429f3080ff45c493f8d0afc4a92699-vmtoolsd.service-j5KhNK
[root@centos128 hadoop]# cd /tmp
[root@centos128 tmp]# rm -rf hsp* Jett* systemd* hadopp*
[root@centos128 tmp]# ll
total 0
drwxr-xr-x. 3 root root 20 Jun 27 23:46 hadoop
drwxr-xr-x. 4 root root 31 Jun 27 23:46 hadoop-root
drwxr-xr-x. 3 root root 16 Jun  9 06:34 pip-build-hv2v27iz
(note the `hadopp*` typo above — the hadoop directories survived, so remove them again)
[root@centos128 tmp]# rm -rf hadoop*
[root@centos128 tmp]# ll
total 0
drwxr-xr-x. 3 root root 16 Jun 9 06:34 pip-build-hv2v27iz
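The HDFS data ended up under /tmp in the first place because `hadoop.tmp.dir` defaults to `/tmp/hadoop-${user.name}`, so any /tmp cleanup or reboot can destroy HDFS state and reintroduce this error. A minimal `core-site.xml` fragment moving it to a persistent location — the path here is only an example:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/src/hadoop/tmp</value>
</property>
```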


Reformat the NameNode and restart DFS:

bin/hdfs namenode -format
sbin/start-dfs.sh


Browse the web interface for the NameNode; by default it is available at:
NameNode - http://192.168.44.128:50070/
There is now one live node, and the Datanode Information tab shows data.

[root@centos128 hadoop]# bin/hdfs dfs -mkdir /user
[root@centos128 hadoop]# bin/hdfs dfs -mkdir /user/root
[root@centos128 hadoop]# bin/hdfs dfs -put etc/hadoop input
No errors this time. Continuing:

[root@centos128 hadoop]# bin/hdfs dfs -ls
Found 2 items
drwxr-xr-x - root supergroup 0 2017-06-28 01:05 input
drwxr-xr-x - root supergroup 0 2017-06-28 01:06 output
[root@centos128 hadoop]# bin/hdfs dfs -ls output
Found 2 items
-rw-r--r-- 1 root supergroup 0 2017-06-28 01:06 output/_SUCCESS
-rw-r--r-- 1 root supergroup 220 2017-06-28 01:06 output/part-r-00000
[root@centos128 hadoop]# bin/hdfs dfs -ls input/hadoop
ls: `input/hadoop': No such file or directory
[root@centos128 hadoop]# bin/hdfs dfs -ls input
Found 29 items
-rw-r--r-- 1 root supergroup 4942 2017-06-28 01:05 input/capacity-scheduler.xml
-rw-r--r-- 1 root supergroup 1335 2017-06-28 01:05 input/configuration.xsl
-rw-r--r-- 1 root supergroup 318 2017-06-28 01:05 input/container-executor.cfg
-rw-r--r-- 1 root supergroup 884 2017-06-28 01:05 input/core-site.xml
-rw-r--r-- 1 root supergroup 3804 2017-06-28 01:05 input/hadoop-env.cmd
-rw-r--r-- 1 root supergroup 4696 2017-06-28 01:05 input/hadoop-env.sh
-rw-r--r-- 1 root supergroup 2490 2017-06-28 01:05 input/hadoop-metrics.properties
-rw-r--r-- 1 root supergroup 2598 2017-06-28 01:05 input/hadoop-metrics2.properties
-rw-r--r-- 1 root supergroup 9683 2017-06-28 01:05 input/hadoop-policy.xml
-rw-r--r-- 1 root supergroup 867 2017-06-28 01:05 input/hdfs-site.xml
-rw-r--r-- 1 root supergroup 1449 2017-06-28 01:05 input/httpfs-env.sh
-rw-r--r-- 1 root supergroup 1657 2017-06-28 01:05 input/httpfs-log4j.properties
-rw-r--r-- 1 root supergroup 21 2017-06-28 01:05 input/httpfs-signature.secret
-rw-r--r-- 1 root supergroup 620 2017-06-28 01:05 input/httpfs-site.xml
-rw-r--r-- 1 root supergroup 3518 2017-06-28 01:05 input/kms-acls.xml
-rw-r--r-- 1 root supergroup 1611 2017-06-28 01:05 input/kms-env.sh
-rw-r--r-- 1 root supergroup 1631 2017-06-28 01:05 input/kms-log4j.properties
-rw-r--r-- 1 root supergroup 5546 2017-06-28 01:05 input/kms-site.xml
-rw-r--r-- 1 root supergroup 13661 2017-06-28 01:05 input/log4j.properties
-rw-r--r-- 1 root supergroup 951 2017-06-28 01:05 input/mapred-env.cmd
-rw-r--r-- 1 root supergroup 1383 2017-06-28 01:05 input/mapred-env.sh
-rw-r--r-- 1 root supergroup 4113 2017-06-28 01:05 input/mapred-queues.xml.template
-rw-r--r-- 1 root supergroup 758 2017-06-28 01:05 input/mapred-site.xml.template
-rw-r--r-- 1 root supergroup 10 2017-06-28 01:05 input/slaves
-rw-r--r-- 1 root supergroup 2316 2017-06-28 01:05 input/ssl-client.xml.example
-rw-r--r-- 1 root supergroup 2697 2017-06-28 01:05 input/ssl-server.xml.example
-rw-r--r-- 1 root supergroup 2250 2017-06-28 01:05 input/yarn-env.cmd
-rw-r--r-- 1 root supergroup 4567 2017-06-28 01:05 input/yarn-env.sh
-rw-r--r-- 1 root supergroup 690 2017-06-28 01:05 input/yarn-site.xml
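Note that there is no `input/hadoop` directory: because `input` did not yet exist when `-put etc/hadoop input` ran, `input` itself was created as a copy of the `hadoop` directory. This destination rule is analogous to POSIX `cp -r`, sketched locally in a throwaway sandbox:

```shell
# Sandbox imitating the etc/hadoop layout
WORK=$(mktemp -d); cd "$WORK"
mkdir -p etc/hadoop && touch etc/hadoop/core-site.xml

# Destination does not exist: "input" becomes a copy of "hadoop" itself
cp -r etc/hadoop input
ls input                      # core-site.xml

# Destination exists: the source directory is copied *into* it
cp -r etc/hadoop input
ls input                      # core-site.xml  hadoop
```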
[root@centos128 hadoop]# bin/hdfs dfs -get output output
[root@centos128 hadoop]# cat output/*
cat: output/output: Is a directory
1 dfsadmin

(the `cat` warning appears because a local `output` directory already existed, so `-get` nested the HDFS `output` inside it; the job's result, `1 dfsadmin`, still prints)













