Adding a New Node to a Hadoop Cluster
  1. Log in to the new node that will join the cluster and create the hadoop user
useradd -m hadoop -s /bin/bash
passwd hadoop
su - hadoop
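The hadoop account should exist under the same name on every node so that SSH logins and the start/stop scripts work uniformly; a quick check, run as root on the new node:

[root@localhost ~]# id hadoop    # should print the uid/gid of the newly created account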
  2. Switch to the hadoop user and copy the Hadoop program directory into hadoop's home directory
[hadoop@localhost ~]$  su - hadoop
[hadoop@localhost ~]$ scp -r 10.10.19.231:~/hadoop-2.7.3 ./
[hadoop@localhost ~]$ ls
hadoop-2.7.3
  3. Set the environment variables, mainly JAVA_HOME and the paths to the Hadoop executables
[hadoop@localhost ~]$ vim .bash_profile 
export HADOOP_HOME=/home/hadoop/hadoop-2.7.3
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=$PATH:${JAVA_HOME}/bin:${JRE_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin
[hadoop@localhost ~]$ source .bash_profile 
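As an optional sanity check, both variables and the hadoop command should now resolve; this assumes the same JDK path exists on the new node as on the rest of the cluster:

[hadoop@localhost ~]$ echo $JAVA_HOME
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64
[hadoop@localhost ~]$ hadoop version    # should report 2.7.3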
  4. Change the hostname of the new node
[root@localhost hadoop]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=slave03
# However, the change does not take effect immediately
[root@localhost hadoop]#  /etc/init.d/network restart
[root@localhost hadoop]# hostname 
localhost.localdomain
# Takes effect immediately, but without editing the network file this change is lost after a reboot
[root@localhost hadoop]# sysctl kernel.hostname=slave03
kernel.hostname = slave03
[root@localhost hadoop]# hostname 
slave03
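The steps above target RHEL/CentOS 6-style systems, as used in this walkthrough. On a systemd-based distribution (e.g. CentOS 7 and later), a single command makes the change both immediate and persistent; a minimal sketch:

# On systemd distros, hostnamectl updates the hostname immediately and persistently
[root@localhost hadoop]# hostnamectl set-hostname slave03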
  5. Configure passwordless login, mainly so that master can log in to slave03 (an alternative using ssh-copy-id is sketched after the transcript below)
# Simply copying authorized_keys from any existing slave node is sufficient here
[hadoop@slave03 ~]$ scp 10.10.19.231:~/.ssh/authorized_keys ~/.ssh/
hadoop@10.10.19.231's password: 
authorized_keys                                                                               100%  410     0.4KB/s   00:00 
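If no existing slave's authorized_keys is at hand, the master's key can be pushed directly instead; a sketch assuming the master's hadoop user already has an RSA key pair (generate one with ssh-keygen first if not):

[hadoop@master ~]$ ssh-copy-id hadoop@10.10.19.232   # appends master's public key on slave03
[hadoop@master ~]$ ssh 10.10.19.232 hostname         # should print slave03 with no password prompt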
  6. Update the hosts file across the cluster
# First edit /etc/hosts on the master host
[root@master hadoop]# vim /etc/hosts
# Then distribute the updated hosts file to each slave node
[root@master hadoop]# scp /etc/hosts slave03:/etc/
The authenticity of host 'slave03 (10.10.19.232)' can't be established.
RSA key fingerprint is 5d:21:da:08:0b:ef:ce:e3:6f:85:76:0e:17:68:4e:c0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave03,10.10.19.232' (RSA) to the list of known hosts.
root@slave03's password: 
hosts                                                                                                      100%  242     0.2KB/s   00:00    
[root@master hadoop]# 
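For reference, the relevant /etc/hosts entries would look roughly as follows. Only slave03's address (10.10.19.232) appears in the logs above; the other addresses are placeholders:

10.10.19.229  master     # placeholder address
10.10.19.230  slave01    # placeholder address
10.10.19.231  slave02    # placeholder address
10.10.19.232  slave03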
  7. On the master host, add the new node to the slaves file; this is used when Hadoop is restarted (see the note after the listing below)
[hadoop@master ~]$ vim hadoop-2.7.3/etc/hadoop/slaves 
slave01
slave02
slave03
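Note that the slaves file only tells the start/stop scripts which nodes to reach over SSH. If the cluster additionally restricts membership with a dfs.hosts include file in hdfs-site.xml, the new node must be added there too and the daemons told to re-read it; a sketch, assuming such include files are configured (the file path below is hypothetical):

# Only needed if dfs.hosts / yarn.resourcemanager.nodes.include-path are set
[hadoop@master ~]$ echo slave03 >> ~/hadoop-2.7.3/etc/hadoop/dfs.include
[hadoop@master ~]$ hdfs dfsadmin -refreshNodes
[hadoop@master ~]$ yarn rmadmin -refreshNodes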
  8. Add the node dynamically
    The Hadoop cluster is still running at this point. To bring the newly added node into the running cluster dynamically, you only need to start the datanode and nodemanager daemons on it (a rebalancing note follows the jps output below).
[hadoop@slave03 ~]$ hadoop-daemon.sh start datanode
starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave03.out
[hadoop@slave03 ~]$ yarn-daemon.sh start nodemanager
starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-slave03.out
# Both daemons are now running
[hadoop@slave03 ~]$ jps
18389 Jps
15104 DataNode
17299 NodeManager
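A freshly added datanode starts out empty; HDFS does not move existing blocks onto it automatically. To spread data onto slave03 you can optionally run the HDFS balancer (the threshold is the allowed per-node deviation from average utilization, in percent):

# Moves blocks until every datanode is within 10% of the cluster average utilization
[hadoop@master ~]$ start-balancer.sh -threshold 10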
  9. Verify
[hadoop@master ~]$ hdfs dfsadmin -report
# slave03 now shows up as part of the Hadoop cluster
Name: 10.10.19.232:50010 (slave03)
# The Hadoop file system is also visible from slave03
[hadoop@slave03 ~]$ hdfs dfs -ls test
Found 3 items
-rw-r--r--   2 hadoop supergroup 1814999046 2017-03-22 20:13 test/t
-rw-r--r--   2 hadoop supergroup   16326828 2017-03-21 17:18 test/test.txt
-rw-r--r--   2 hadoop supergroup         14 2017-03-22 20:53 test/tt
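The registration on the YARN side can be verified too; slave03's NodeManager should appear with state RUNNING:

# Lists the NodeManagers registered with the ResourceManager
[hadoop@master ~]$ yarn node -list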

  10. Restart Hadoop to verify; as shown below, everything starts up normally
[hadoop@master ~]$ stop-all.sh 
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [master]
master: stopping namenode
slave01: stopping datanode
slave03: stopping datanode
slave02: stopping datanode
Stopping secondary namenodes [master]
master: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
slave01: stopping nodemanager
slave02: stopping nodemanager
slave03: stopping nodemanager
no proxyserver to stop
[hadoop@master ~]$ start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-master.out
slave01: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave01.out
slave02: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave02.out
slave03: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave03.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-resourcemanager-master.out
slave01: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-slave01.out
slave02: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-slave02.out
slave03: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-slave03.out
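After the restart, the same checks from step 9 can confirm that all three slaves rejoined; for example:

# Each live datanode contributes one "Name:" line to the report; expect 3
[hadoop@master ~]$ hdfs dfsadmin -report | grep -c "Name:"
[hadoop@slave03 ~]$ jps    # DataNode and NodeManager should both be running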