Big Data: Spark Cluster Deployment (Part 2)
Tags: data, Spark, cluster, deployment
service hadoop-hdfs-namenode start
cat /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop1.log

chkconfig
chkconfig zookeeper-server off

12. On the second node
hdfs dfs
cd /u01/name
sudo -u hdfs hdfs namenode -bootstrapStandby
cat /var/log/hadoop-hdfs/hadoop-hdfs-journalnode-hadoop3.log
service hadoop-hdfs-namenode start
cat /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop1.log
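With both NameNodes started, it is worth confirming that the processes are running and that both web UIs respond; a quick check, assuming the default NameNode HTTP port 50070, might be:
/usr/java/default/bin/jps #a NameNode process should be listed on hadoop1 and on hadoop2
http://hadoop1:50070
http://hadoop2:50070 #both UIs should load; both NameNodes report standby until one is made active in step 16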

13. Start hdfs-datanode on the first node
service hadoop-hdfs-datanode start

14. Start hdfs-datanode on the second node
service hadoop-hdfs-datanode start

15. Start hdfs-datanode on the third node
service hadoop-hdfs-datanode start
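Once all three DataNodes are up, the active NameNode should report them as live; a quick sanity check might be:
sudo -u hdfs hdfs dfsadmin -report #should show 3 live datanodes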

16. On the first node
hdfs
hdfs haadmin -transitionToActive nn1
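After the transition, the roles can be verified with haadmin, using the NameNode IDs nn1 and nn2 referenced above:
hdfs haadmin -getServiceState nn1 #expected: active
hdfs haadmin -getServiceState nn2 #expected: standby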

17. Configure zookeeper on the first node
chkconfig
cd /etc/zookeeper/conf
vim zoo.cfg (add the following)
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
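For reference, the rest of zoo.cfg as shipped by the zookeeper-server package typically already contains the defaults below (an assumption here; only the server.N lines above come from this walkthrough):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181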

cd /var/lib/zookeeper
vim myid
1
chown zookeeper:zookeeper myid
service zookeeper-server init --myid=1
service zookeeper-server start
cat /var/log/zookeeper/zookeeper.log
/usr/java/default/bin/jps

18. On the second node
cd /etc/zookeeper/conf
vim zoo.cfg (add the following)
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
cd /var/lib/zookeeper
vim myid
2
chown zookeeper:zookeeper myid
service zookeeper-server init --myid=2
service zookeeper-server start
cat /var/log/zookeeper/zookeeper.log

#rpm -qa|grep zookeeper #errors occur above if zookeeper is not installed
#yum install zookeeper-server

19. On the third node
yum install zookeeper-server
cd /etc/zookeeper/conf
vim zoo.cfg (add the following)
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
cd /var/lib/zookeeper
vim myid
3
chown zookeeper:zookeeper myid
service zookeeper-server init --myid=3
service zookeeper-server start
cat /var/log/zookeeper/zookeeper.log
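Once all three zookeeper servers are running, it is worth confirming that the ensemble has elected a leader; assuming nc is available, a quick check on each node might be:
/usr/lib/zookeeper/bin/zkServer.sh status #one node should report "leader", the other two "follower"
echo ruok | nc hadoop1 2181 #should answer "imok" if the server is serving requests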

20. Install zkfc for automatic active/standby failover
1. On the first node
yum install hadoop-hdfs-zkfc
chkconfig hadoop-hdfs-zkfc off
hdfs zkfc -formatZK #initialize the HA state znode in ZooKeeper (run once, before the first start)
service hadoop-hdfs-zkfc start
cat /var/log/hadoop-hdfs/hadoop-hdfs-zkfc-hadoop1.log
/usr/lib/zookeeper/bin/zkCli.sh #the hadoop-ha znode should now be visible
hdfs zkfc -h #lists the available zkfc options
sudo -u hdfs hdfs dfs -ls / #if this fails, check dfs.client.failover.proxy.provider.guoyijin in hdfs-site.xml
hadoop fs -ls /
sudo -u hdfs hdfs dfs -ls hdfs://hadoop1:8020/
sudo -u hdfs hdfs dfs -mkdir hdfs://hadoop1:8020/user
sudo -u hdfs hdfs dfs -ls hdfs://hadoop1:8020/
service hadoop-hdfs-namenode restart
/usr/java/default/bin/jps
service hadoop-hdfs-zkfc start
hdfs dfs -ls /
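Automatic failover only takes effect if it is enabled in the configuration; a minimal sketch of the relevant properties, assuming the nameservice is called guoyijin as the comment above suggests, would be:
<!-- hdfs-site.xml -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<!-- core-site.xml -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
</property>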

2. On the second node
yum install hadoop-hdfs-zkfc
chkconfig hadoop-hdfs-zkfc off
/usr/lib/zookeeper/bin/zkCli.sh
hdfs haadmin
hdfs haadmin -failover nn2 nn1
service hadoop-hdfs-zkfc start
service hadoop-hdfs-namenode stop
/usr/java/default/bin/jps
hdfs dfs -ls /
service hadoop-hdfs-namenode restart
hdfs dfs -ls /
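To see which NameNode holds the active lock after this failover test, the HA znode can be inspected from the zookeeper CLI; assuming the nameservice is guoyijin, something like:
/usr/lib/zookeeper/bin/zkCli.sh
ls /hadoop-ha/guoyijin #the ActiveStandbyElectorLock znode here is held by the current active NameNode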

++++++ Start: yarn, mapreduce, spark
+++++ yarn
21. On the third node, start the YARN ResourceManager, which acts as the master
service hadoop-yarn-resourcemanager
service hadoop-yarn-resourcemanager start
http://hadoop3:8088/cluster
cd /u01
chown yarn:yarn local
service hadoop-yarn-nodemanager start
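Once a NodeManager is up, it should register with the ResourceManager; a quick check might be:
yarn node -list #the node should appear as RUNNING, also visible on the web UI above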

22. Start the YARN NodeManager on the first node
service hadoop-yarn-nodemanager start
service hadoop-yarn-nodemanager status
service hadoop-yarn-nodemanager stop
cd /etc/hadoop/conf
vim yarn-site.xml #make sure the directory paths are correct (a sketch of the relevant properties follows this step)
service hadoop-yarn-nodemanager start
service hadoop-yarn-nodemanager stop
cd /u01
chown yarn:yarn local
service hadoop-yarn-nodemanager start
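The actual yarn-site.xml properties are not shown in this log; a minimal sketch of what they typically contain for this layout, assuming hadoop3 as the ResourceManager host and /u01/local as the NodeManager local directory (both inferred from the commands above), would be:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop3</value>
</property>
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/u01/local</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>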

23. Start the YARN NodeManager on the second node
cd /etc/hadoop/conf
vim yarn-site.xml #make sure the directory paths are correct
