Linux Ops: Hadoop Cluster
2018-11-13

I. Installing and configuring Hadoop

  • Note: the Hadoop daemons have minimum memory requirements!
  • Configure Hadoop and the JDK (Java is required by Hadoop)
  • Perform the following as the hadoop user
[root@server1 ~]# ls
hadoop-2.7.3.tar.gz  jdk-7u79-linux-x64.tar.gz
[root@server1 ~]# useradd -u 800 hadoop
[root@server1 ~]# mv * /home/hadoop/
[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ ls
hadoop-2.7.3.tar.gz  jdk-7u79-linux-x64.tar.gz
  • Extract the tarballs and create symlinks
[hadoop@server1 ~]$ tar zxf hadoop-2.7.3.tar.gz 
[hadoop@server1 ~]$ tar zxf jdk-7u79-linux-x64.tar.gz 
[hadoop@server1 ~]$ ln -s jdk1.7.0_79/ java
[hadoop@server1 ~]$ ln -s hadoop-2.7.3 hadoop
  • Point Hadoop at Java; when the JDK is upgraded, only the symlink has to change
[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ cd etc/hadoop/
[hadoop@server1 hadoop]$ vim hadoop-env.sh 
# The java implementation to use.
export JAVA_HOME=/home/hadoop/java
  • Test Hadoop (standalone mode)
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ mkdir input
[hadoop@server1 hadoop]$ cp etc/hadoop/* input/
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar    ## with no arguments, lists the available example programs

[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'

[hadoop@server1 hadoop]$ cat output/*
6   dfs.audit.logger
4   dfs.class
3   dfs.server.namenode.
2   dfs.period
2   dfs.audit.log.maxfilesize
2   dfs.audit.log.maxbackupindex
1   dfsmetrics.log
1   dfsadmin
1   dfs.servers
1   dfs.file

II. Data operations

1. Configure Hadoop
[hadoop@server1 hadoop]$ cd etc/hadoop/
[hadoop@server1 hadoop]$ vim core-site.xml 
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.25.120.1:9000</value>
    </property>
</configuration>

[hadoop@server1 hadoop]$ vim slaves 
172.25.120.1

[hadoop@server1 hadoop]$ vim hdfs-site.xml 
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
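
An optional sanity check, not in the original post: hdfs getconf reads the same configuration files the daemons will use, so it confirms the values above were picked up.
[hadoop@server1 hadoop]$ cd ~/hadoop
[hadoop@server1 hadoop]$ bin/hdfs getconf -confKey fs.defaultFS    ## should print hdfs://172.25.120.1:9000
[hadoop@server1 hadoop]$ bin/hdfs getconf -confKey dfs.replication ## should print 1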

2. Configure passwordless SSH

[hadoop@server1 hadoop]$ ssh-keygen 
[hadoop@server1 hadoop]$ cd 
[hadoop@server1 ~]$ cd .ssh/
[hadoop@server1 .ssh]$ cp id_rsa.pub authorized_keys
[hadoop@server1 .ssh]$ ssh localhost
[hadoop@server1 ~]$ logout
[hadoop@server1 .ssh]$ ssh 172.25.120.1
[hadoop@server1 ~]$ logout
[hadoop@server1 .ssh]$ ssh server1
[hadoop@server1 ~]$ logout
[hadoop@server1 .ssh]$ ssh 0.0.0.0
[hadoop@server1 ~]$ logout
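
The logins above must not prompt for a password, or the start scripts will hang. A hedged non-interactive check (BatchMode makes ssh fail instead of prompting):
[hadoop@server1 .ssh]$ ssh -o BatchMode=yes server1 true && echo passwordless-ok
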
3. Start DFS
  • Format the NameNode (its data is written under /tmp)
[hadoop@server1 ~]$ pwd
/home/hadoop
[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ bin/hdfs namenode -format

[hadoop@server1 hadoop]$ ls /tmp/
hadoop-hadoop  hsperfdata_hadoop
  • Start DFS
[hadoop@server1 hadoop]$ sbin/start-dfs.sh 
Starting namenodes on [server1]
server1: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-server1.out
172.25.120.1: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-server1.out
  • Configure environment variables
[hadoop@server1 ~]$ vim .bash_profile 
PATH=$PATH:$HOME/bin:~/java/bin

[hadoop@server1 ~]$ logout
[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ jps
1896 SecondaryNameNode
1713 DataNode
1620 NameNode
2031 Jps
  • Work with the filesystem
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -usage   ## show usage

[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -put input/    ## with no target, uploads to /user/hadoop/input
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount input output
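
The job now reads from and writes to HDFS instead of the local filesystem. The original post omits the follow-up, but the standard way to inspect the result would be:
[hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*         ## print the word counts
[hadoop@server1 hadoop]$ bin/hdfs dfs -get output local-out ## or copy them to the local filesystem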

III. Distributed file storage

1. NameNode (172.25.120.1)
[root@server1 ~]# yum install -y nfs-utils
[root@server1 ~]# /etc/init.d/rpcbind start
Starting rpcbind:                                          [  OK  ]
[root@server1 ~]# vim /etc/exports 
/home/hadoop    *(rw,anonuid=800,anongid=800)

[root@server1 ~]# /etc/init.d/nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]

[root@server1 ~]# exportfs -v
/home/hadoop    <world>(rw,wdelay,root_squash,no_subtree_check,anonuid=800,anongid=800)
[root@server1 ~]# exportfs -rv
exporting *:/home/hadoop
2. DataNodes (172.25.120.2 and 172.25.120.3; identical steps on both)
[root@server2 ~]# yum install -y nfs-utils
[root@server2 ~]# /etc/init.d/rpcbind start
Starting rpcbind:                                          [  OK  ]
[root@server2 ~]# useradd -u 800 hadoop
[root@server2 ~]# showmount -e 172.25.120.1
Export list for 172.25.120.1:
/home/hadoop *
[root@server2 ~]# mount 172.25.120.1:/home/hadoop/ /home/hadoop/
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332  936680  17225672   6% /
tmpfs                           510200       0    510200   0% /dev/shm
/dev/vda1                       495844   33480    436764   8% /boot
172.25.120.1:/home/hadoop/    19134336 1968384  16193920  11% /home/hadoop
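
Note that the mount above does not survive a reboot. A hedged /etc/fstab entry (an addition, not in the original post) that would make it persistent:
172.25.120.1:/home/hadoop  /home/hadoop  nfs  defaults  0 0
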
3. Configuration
[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ cd hadoop/etc/
[hadoop@server1 etc]$ vim hadoop/slaves 
172.25.120.2
172.25.120.3

[hadoop@server1 etc]$ vim hadoop/hdfs-site.xml 
<configuration>
    <property>
            <name>dfs.replication</name>
                    <value>2</value>
                        </property>
</configuration>

[hadoop@server1 etc]$ cd /tmp/
[hadoop@server1 tmp]$ rm -fr *
  • Test the SSH service
[hadoop@server1 tmp]$ ssh server2
[hadoop@server2 ~]$ logout
[hadoop@server1 tmp]$ ssh server3
[hadoop@server3 ~]$ logout
[hadoop@server1 tmp]$ ssh 172.25.120.2
[hadoop@server2 ~]$ logout
[hadoop@server1 tmp]$ ssh 172.25.120.3
[hadoop@server3 ~]$ logout
  • Reformat the NameNode
[hadoop@server1 hadoop]$ bin/hdfs namenode -format
[hadoop@server1 hadoop]$ ls /tmp/
hadoop-hadoop  hsperfdata_hadoop
  • Start DFS: the NameNode and DataNodes now run on separate hosts
[hadoop@server1 hadoop]$ sbin/start-dfs.sh
Starting namenodes on [server1]
server1: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-server1.out
172.25.120.2: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server2.out
172.25.120.3: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server3.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-server1.out
  • On the NameNode:
[hadoop@server1 hadoop]$ jps
4303 SecondaryNameNode
4115 NameNode
4412 Jps
  • On the DataNodes:
[root@server2 ~]# su - hadoop
[hadoop@server2 ~]$ jps
1261 Jps
1167 DataNode
4. Working with files (DataNodes synchronize in real time)
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/ input
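
To confirm the files were really split into blocks and replicated twice across the two DataNodes, fsck can report per-file block locations (a hedged check; output omitted):
[hadoop@server1 hadoop]$ bin/hdfs fsck /user/hadoop/input -files -blocks -locations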

IV. Adding and removing nodes

1. Add server4 online (172.25.120.4)
[root@server4 ~]# yum install -y nfs-utils
[root@server4 ~]# useradd -u 800 hadoop
[root@server4 ~]# mount 172.25.120.1:/home/hadoop/ /home/hadoop/
[root@server4 ~]# su - hadoop
[hadoop@server4 ~]$ vim hadoop/etc/hadoop/slaves 
172.25.120.2
172.25.120.3
172.25.120.4
  • Test SSH from the NameNode side:
[hadoop@server1 ~]$ ssh server4
[hadoop@server4 ~]$ logout
[hadoop@server1 ~]$ ssh 172.25.120.4
[hadoop@server4 ~]$ logout

[hadoop@server4 ~]$ cd hadoop
[hadoop@server4 hadoop]$ sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server4.out
[hadoop@server4 hadoop]$ jps
1250 Jps
1177 DataNode
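
Existing blocks are not moved onto a new DataNode automatically. Running the balancer from the NameNode spreads data onto server4; the threshold is the allowed deviation in disk usage, in percent (a hedged extra step, not in the original post):
[hadoop@server1 hadoop]$ sbin/start-balancer.sh -threshold 10
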
2. Remove server2 online (172.25.120.2)
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim hdfs-site.xml 
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/home/hadoop/hadoop/etc/hadoop/hosts-exclude</value>
    </property>

[hadoop@server1 hadoop]$ vim hosts-exclude
172.25.120.2    ## IP of the node to decommission

[hadoop@server1 hadoop]$ dd if=/dev/zero of=bigfile bs=1M count=200
[hadoop@server1 hadoop]$ ../../bin/hdfs dfs -put bigfile 
[hadoop@server1 hadoop]$ vim slaves
172.25.120.3
172.25.120.4

[hadoop@server1 hadoop]$ ../../bin/hdfs dfsadmin -refreshNodes ## refresh the node list
Refresh nodes successful
[hadoop@server1 hadoop]$ ../../bin/hdfs dfsadmin -report
Name: 172.25.120.2:50010 (server2)
Hostname: server2
Decommission Status : Decommission in progress
Configured Capacity: 19593555968 (18.25 GB)
DFS Used: 135581696 (129.30 MB)
Non DFS Used: 1954566144 (1.82 GB)
DFS Remaining: 17503408128 (16.30 GB)
DFS Used%: 0.69%
DFS Remaining%: 89.33%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Jul 21 14:23:43 CST 2018
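
Decommissioning takes time, because every block stored on server2 must first be re-replicated elsewhere. A hedged way to poll the progress:
[hadoop@server1 hadoop]$ ../../bin/hdfs dfsadmin -report | grep -B1 'Decommission Status'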

## Once the status reads Decommissioned, check http://172.25.120.1:50070/dfshealth.html#tab-datanode
Name: 172.25.120.2:50010 (server2)
Hostname: server2
Decommission Status : Decommissioned
  • Stop the DataNode on server2
[hadoop@server2 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server2 hadoop]$ sbin/hadoop-daemon.sh stop datanode
stopping datanode
[hadoop@server2 hadoop]$ jps
1497 Jps
  • Verify on the Hadoop web UI (screenshot not reproduced here)
3. YARN mode
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@server1 hadoop]$ vim mapred-site.xml
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

[hadoop@server1 hadoop]$ vim yarn-site.xml 
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

[hadoop@server1 hadoop]$ ../../sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-resourcemanager-server1.out
172.25.120.3: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-server3.out
172.25.120.4: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-server4.out
  • On the NameNode:
[hadoop@server1 hadoop]$ jps
5434 ResourceManager
4303 SecondaryNameNode
4115 NameNode
5691 Jps
  • On a DataNode:
[hadoop@server3 ~]$ jps
1596 Jps
1165 DataNode
1498 NodeManager
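
With the ResourceManager and NodeManagers up, MapReduce jobs are now scheduled through YARN. A hedged smoke test (hypothetical output path, which must not already exist):
[hadoop@server1 hadoop]$ cd ~/hadoop
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount input output-yarn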

V. Building a ZooKeeper cluster

  • Note: clear /tmp on all nodes first
1. The server5 host
[root@server5 ~]# yum install -y nfs-utils
[root@server5 ~]# /etc/init.d/rpcbind start
Starting rpcbind:                                          [  OK  ]

[root@server5 ~]# useradd -u 800 hadoop
[root@server5 ~]# mount 172.25.120.1:/home/hadoop/ /home/hadoop/
[root@server5 ~]# su - hadoop
[hadoop@server5 ~]$ ls
hadoop               java                       zookeeper-3.4.9.tar.gz
hadoop-2.7.3         jdk1.7.0_79
hadoop-2.7.3.tar.gz  jdk-7u79-linux-x64.tar.gz
2. The server1 host
  • Stop all services
[hadoop@server1 hadoop]$ ../../sbin/stop-all.sh 
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [server1]
server1: stopping namenode
172.25.120.3: stopping datanode
172.25.120.4: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
172.25.120.4: stopping nodemanager
172.25.120.3: stopping nodemanager
172.25.120.4: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
172.25.120.3: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
  • Test SSH to server5
[hadoop@server1 ~]$ ssh server5
[hadoop@server5 ~]$ logout
[hadoop@server1 ~]$ ssh 172.25.120.5
[hadoop@server5 ~]$ logout
  • Configure ZooKeeper
[hadoop@server1 ~]$ tar zxf zookeeper-3.4.9.tar.gz 
[hadoop@server1 ~]$ cd zookeeper-3.4.9
[hadoop@server1 zookeeper-3.4.9]$ cd conf/
[hadoop@server1 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@server1 conf]$ vim zoo.cfg 
server.1=172.25.120.2:2888:3888
server.2=172.25.120.3:2888:3888
server.3=172.25.120.4:2888:3888
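
Only the three server.N lines are added; the rest of zoo.cfg keeps the sample defaults. The ones that matter here (dataDir is where each node's myid file must live; 2888/3888 are the peer and leader-election ports named above):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
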
3. Configure server2, server3, and server4
  • Note the different myid on each node: 1, 2, and 3 respectively
[hadoop@server2 hadoop]$ cd /tmp/
[hadoop@server2 tmp]$ mkdir zookeeper
[hadoop@server2 tmp]$ echo 1 > zookeeper/myid
[hadoop@server2 tmp]$ cat zookeeper/myid 
1

[hadoop@server3 tmp]$ cat zookeeper/myid 
2

[hadoop@server4 tmp]$ cat zookeeper/myid
3
  • Start the ZooKeeper cluster on the three DN hosts
[hadoop@server4 tmp]$ cd ~/zookeeper-3.4.9
[hadoop@server4 zookeeper-3.4.9]$ bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
4. Check the state of all nodes
[hadoop@server2 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower

[hadoop@server3 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower

[hadoop@server4 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: leader

5. Test on the leader (server4)
[hadoop@server4 zookeeper-3.4.9]$ bin/zkCli.sh
Connecting to localhost:2181
WATCHER::

WatchedEvent state:SyncConnected type:None path:null

[zk: localhost:2181(CONNECTED) 0] ls
[zk: localhost:2181(CONNECTED) 1] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 2] ls /zookeeper
[quota]
[zk: localhost:2181(CONNECTED) 3] ls /zookeeper/quota
[]
[zk: localhost:2181(CONNECTED) 5] quit 
Quitting...
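
The shell can also write data, which confirms that writes replicate across the quorum. A hedged example with a hypothetical znode (create on any node, read it back from another):
[zk: localhost:2181(CONNECTED) 0] create /test "hello"
[zk: localhost:2181(CONNECTED) 1] get /test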

VI. High availability with ZooKeeper

1. Configure Hadoop
  • Configure slaves
[hadoop@server1 ~]$ cd hadoop/etc/
[hadoop@server1 etc]$ vim hadoop/slaves 
172.25.120.2
172.25.120.3
172.25.120.4
  • Configure core-site.xml
[hadoop@server1 etc]$ vim hadoop/core-site.xml 
<configuration>

<property>
<name>fs.defaultFS</name>
<value>hdfs://masters</value>
</property>

<property>
<name>ha.zookeeper.quorum</name>
<value>172.25.120.2:2181,172.25.120.3:2181,172.25.120.4:2181</value>
</property>

</configuration>
  • Configure hdfs-site.xml
[hadoop@server1 etc]$ vim hadoop/hdfs-site.xml 
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>

<property>
<name>dfs.nameservices</name>
<value>masters</value>
</property>

<property>
<name>dfs.ha.namenodes.masters</name>
<value>h1,h2</value>
</property>

<property>
<name>dfs.namenode.rpc-address.masters.h1</name>
<value>172.25.120.1:9000</value>
</property>

<property>
<name>dfs.namenode.http-address.masters.h1</name>
<value>172.25.120.1:50070</value>
</property>

<property>
<name>dfs.namenode.rpc-address.masters.h2</name>
<value>172.25.120.5:9000</value>
</property>

<property>
<name>dfs.namenode.http-address.masters.h2</name>
<value>172.25.120.5:50070</value>
</property>

<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://172.25.120.2:8485;172.25.120.3:8485;172.25.120.4:8485/masters</value>
</property>

<property>
<name>dfs.journalnode.edits.dir</name>
<value>/tmp/journaldata</value>
</property>

<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>

<property>
<name>dfs.client.failover.proxy.provider.masters</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>

<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>

<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>

</configuration>

[hadoop@server1 etc]$ cd ~/hadoop
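
Before formatting, a hedged sanity check that the HA configuration parses and both NameNodes are recognized:
[hadoop@server1 hadoop]$ bin/hdfs getconf -namenodes    ## should list server1 and server5
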
  • Format the HDFS cluster
[hadoop@server1 hadoop]$ bin/hdfs namenode -format
[hadoop@server1 hadoop]$ scp -r /tmp/hadoop-hadoop 172.25.120.5:/tmp/

## check the server5 host
[root@server5 ~]# ls /tmp/
hadoop-hadoop
2. Start journalnode on the three DN hosts
[hadoop@server3 zookeeper-3.4.9]$ cd ~/hadoop
[hadoop@server3 hadoop]$ sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-journalnode-server3.out

Check the daemons on the three DN hosts; each should run a DataNode, a JournalNode, and ZooKeeper (QuorumPeerMain):
[hadoop@server3 hadoop]$ jps
1881 DataNode
1698 QuorumPeerMain
1983 Jps
1790 JournalNode
3. Format ZooKeeper from the NN host
  • After formatting ZooKeeper, start DFS
[hadoop@server1 hadoop]$ bin/hdfs zkfc -formatZK

[hadoop@server1 hadoop]$ sbin/start-dfs.sh
Starting namenodes on [server1 server5]
server5: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-server5.out
server1: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-server1.out
172.25.120.4: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server4.out
172.25.120.2: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server2.out
172.25.120.3: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server3.out
Starting journal nodes [172.25.120.2 172.25.120.3 172.25.120.4]
172.25.120.3: journalnode running as process 1790. Stop it first.
172.25.120.4: journalnode running as process 1706. Stop it first.
172.25.120.2: journalnode running as process 1626. Stop it first.
Starting ZK Failover Controllers on NN hosts [server1 server5]
server1: starting zkfc, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-zkfc-server1.out
server5: starting zkfc, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-zkfc-server5.out
  • Check the daemons on both NameNode hosts
[hadoop@server1 hadoop]$ jps
6694 Jps
6646 DFSZKFailoverController
6352 NameNode

[hadoop@server5 ~]$ jps
1396 DFSZKFailoverController
1298 NameNode
1484 Jps
4. Test the high availability
[hadoop@server5 ~]$ jps
1396 DFSZKFailoverController
1298 NameNode
1484 Jps
[hadoop@server5 ~]$ kill -9 1298
[hadoop@server5 ~]$ jps
1396 DFSZKFailoverController
1515 Jps
  • server1 takes over as the active master
  • When server5's NameNode is started again, it comes back in standby state
[hadoop@server5 hadoop]$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-server5.out
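
A hedged way to confirm the active/standby roles from the command line (h1 and h2 are the NameNode IDs defined in hdfs-site.xml):
[hadoop@server5 hadoop]$ bin/hdfs haadmin -getServiceState h1
[hadoop@server5 hadoop]$ bin/hdfs haadmin -getServiceState h2
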
5. Check from a DN host
[hadoop@server2 hadoop]$ cd ~/zookeeper-3.4.9
[hadoop@server2 zookeeper-3.4.9]$ bin/zkCli.sh
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 3] ls /hadoop-ha/masters
[ActiveBreadCrumb, ActiveStandbyElectorLock]
[zk: localhost:2181(CONNECTED) 4] get /hadoop-ha/masters/Active

ActiveBreadCrumb           ActiveStandbyElectorLock
[zk: localhost:2181(CONNECTED) 4] get /hadoop-ha/masters/ActiveBreadCrumb

mastersh1server1 F(>
cZxid = 0x10000000a
ctime = Sat Jul 21 16:32:30 CST 2018
mZxid = 0x10000000e
mtime = Sat Jul 21 16:34:02 CST 2018
pZxid = 0x10000000a
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 28
numChildren = 0

VII. YARN high availability

1. The server1 host
  • Configure mapred-site.xml
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
  • Configure yarn-site.xml
[hadoop@server1 hadoop]$ vim yarn-site.xml 
<configuration>

<!-- Site specific YARN configuration properties -->

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>

<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>RM_CLUSTER</value>
</property>

<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>

<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>172.25.120.1</value>
</property>

<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>172.25.120.5</value>
</property>

<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>

<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>

<property>
<name>yarn.resourcemanager.zk-address</name>
<value>172.25.120.2:2181,172.25.120.3:2181,172.25.120.4:2181</value>
</property>

</configuration>
  • Configure regionservers (the host list HBase uses in section VIII), then start YARN
[hadoop@server1 ~]$ cd hbase-1.2.4/conf/
[hadoop@server1 conf]$ vim regionservers
172.25.120.2
172.25.120.3
172.25.120.4

[hadoop@server1 hadoop]$ ../../sbin/start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-resourcemanager-server1.out
172.25.120.3: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-server3.out
172.25.120.4: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-server4.out
172.25.120.2: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-server2.out
[hadoop@server1 hadoop]$ jps
1927 Jps
1837 ResourceManager
1325 NameNode
1602 DFSZKFailoverController
2. The server5 host
  • Start the RM daemon on server5 manually
[hadoop@server5 ~]$ cd hadoop
[hadoop@server5 hadoop]$ sbin/yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-resourcemanager-server5.out

[hadoop@server5 logs]$ jps
3250 Jps
1694 ResourceManager
1209 NameNode
3184 DFSZKFailoverController
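
A hedged check of which ResourceManager is currently active (rm1 and rm2 as defined in yarn-site.xml):
[hadoop@server5 hadoop]$ bin/yarn rmadmin -getServiceState rm1
[hadoop@server5 hadoop]$ bin/yarn rmadmin -getServiceState rm2
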
  • Check the NM daemons
[hadoop@server3 sbin]$ jps
1549 NodeManager
1186 QuorumPeerMain
1293 JournalNode
1869 Jps
1381 DataNode
3. Visit http://172.25.120.1:8088/cluster/cluster to check the ResourceManager state (screenshot not reproduced here)

VIII. Distributed HBase deployment

1. Configure HBase
[hadoop@server1 ~]$ tar zxf hbase-1.2.4-bin.tar.gz 
[hadoop@server1 ~]$ cd hbase-1.2.4
[hadoop@server1 hbase-1.2.4]$ ls
bin          conf  hbase-webapps  lib          NOTICE.txt
CHANGES.txt  docs  LEGAL          LICENSE.txt  README.txt
[hadoop@server1 hbase-1.2.4]$ cd conf/
[hadoop@server1 conf]$ vim hbase-env.sh 
[hadoop@server1 conf]$ vim regionservers 
[hadoop@server1 conf]$ vim hbase-site.xml 
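
The post does not show the contents of these three files. A plausible minimal configuration for this cluster, stated as an assumption: hdfs://masters is the HA nameservice from section VI, HBASE_MANAGES_ZK=false makes HBase use the external ZooKeeper ensemble from section V, and regionservers matches the list already shown in section VII.
## hbase-env.sh (assumed)
export JAVA_HOME=/home/hadoop/java
export HBASE_MANAGES_ZK=false

## regionservers (assumed)
172.25.120.2
172.25.120.3
172.25.120.4

## hbase-site.xml (assumed)
<property>
<name>hbase.rootdir</name>
<value>hdfs://masters/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>172.25.120.2,172.25.120.3,172.25.120.4</value>
</property>
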
2. Start HBase
[hadoop@server1 hbase-1.2.4]$ bin/start-hbase.sh 
starting master, logging to /home/hadoop/hbase-1.2.4/bin/../logs/hbase-hadoop-master-server1.out
172.25.120.3: starting regionserver, logging to /home/hadoop/hbase-1.2.4/bin/../logs/hbase-hadoop-regionserver-server3.out
172.25.120.4: starting regionserver, logging to /home/hadoop/hbase-1.2.4/bin/../logs/hbase-hadoop-regionserver-server4.out
172.25.120.2: starting regionserver, logging to /home/hadoop/hbase-1.2.4/bin/../logs/hbase-hadoop-regionserver-server2.out

[hadoop@server1 hbase-1.2.4]$ jps
1837 ResourceManager
2567 HMaster
1325 NameNode
1602 DFSZKFailoverController
2634 Jps
  • On server5, the backup HMaster must be started manually
[hadoop@server5 hbase-1.2.4]$ bin/hbase-daemon.sh start master
starting master, logging to /home/hadoop/hbase-1.2.4/bin/../logs/hbase-hadoop-master-server5.out

[hadoop@server5 hbase-1.2.4]$ jps
2253 HMaster
1694 ResourceManager
1209 NameNode
2317 Jps
3. Test with the HBase shell
[hadoop@server1 hbase]$ bin/hbase shell
hbase(main):003:0> create 'test', 'cf'
0 row(s) in 1.2200 seconds
hbase(main):003:0> list 'test'
TABLE
test
1 row(s) in 0.2150 seconds
=> ["test"]
hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0560 seconds
hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0370 seconds
hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0450 seconds
hbase(main):007:0> scan 'test'
ROW                  COLUMN+CELL
 row1                 column=cf:a, timestamp=1488879391939, value=value1
 row2                 column=cf:b, timestamp=1488879402796, value=value2
 row3                 column=cf:c, timestamp=1488879410863, value=value3
3 row(s) in 0.2770 seconds
  • Check HDFS
[hadoop@server5 hadoop]$ bin/hdfs dfs -ls /
Found 3 items
drwxr-xr-x - hadoop supergroup 0 2017-03-07 23:56 /hbase
drwx------ - hadoop supergroup 0 2017-03-04 17:50 /tmp
drwxr-xr-x - hadoop supergroup 0 2017-03-04 17:38 /user