Spark Cluster Deployment for Big Data (Part 1)
2015-11-21 01:24:46
Tags: big data, Spark, cluster, deployment

I. Official documentation:

http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cdh_ig_hdfs_cluster_deploy.html

II. Download the packages: archive.cloudera.com/cdh5
1. Download the yum repo file:
cd /etc/yum.repos.d/
wget http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/cloudera-cdh5.repo

2. List the available CDH packages:
yum list |grep cdh

3. Install HDFS:
yum -y install hadoop-hdfs-namenode hadoop-hdfs-datanode hadoop-hdfs-journalnode  # node 1, master
yum -y install hadoop-hdfs-datanode hadoop-hdfs-journalnode hadoop-client hadoop-yarn-nodemanager hadoop-yarn  # node 2, slave
yum -y install hadoop-hdfs-datanode hadoop-hdfs-journalnode hadoop-client hadoop-yarn-nodemanager hadoop-yarn  # node 3

III. chkconfig
1. On node 3, disable autostart for these 4 services:
chkconfig hadoop-hdfs-datanode off
chkconfig hadoop-hdfs-journalnode off
chkconfig hadoop-yarn-nodemanager off
chkconfig hadoop-yarn-resourcemanager off

hadoop-hdfs-datanode 0:off 1:off 2:off 3:off 4:off 5:off 6:off
hadoop-hdfs-journalnode 0:off 1:off 2:off 3:off 4:off 5:off 6:off
hadoop-yarn-nodemanager 0:off 1:off 2:off 3:off 4:off 5:off 6:off
hadoop-yarn-resourcemanager 0:off 1:off 2:off 3:off 4:off 5:off 6:off

2. On node 2, disable autostart for these 3 services:
chkconfig hadoop-hdfs-datanode off
chkconfig hadoop-hdfs-journalnode off
chkconfig hadoop-yarn-nodemanager off
chkconfig hadoop-yarn-resourcemanager off  # this service is not used on node 2

3. On node 1, disable autostart for these 4 services:
chkconfig hadoop-hdfs-namenode off
chkconfig hadoop-hdfs-datanode off
chkconfig hadoop-hdfs-journalnode off
chkconfig hadoop-yarn-nodemanager off  # if this service is missing, install it:

yum -y install hadoop-yarn-nodemanager

IV. NTP configuration
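The source leaves this section empty. What it typically covers on a CDH cluster is syncing all nodes to one time source; a minimal sketch, assuming node 1 ("hadoop1", the hostname used elsewhere in this walkthrough) serves time to nodes 2 and 3. The fragment is written to a temp file here rather than straight into /etc/ntp.conf:

```shell
# Sketch only: "hadoop1" as the in-cluster time source is an assumption.
conf=$(mktemp)
cat > "$conf" <<'EOF'
server hadoop1 iburst
EOF
cat "$conf"
# On a real node 2/3: merge the line into /etc/ntp.conf, then:
#   service ntpd restart
#   chkconfig ntpd on
```

Keeping ntpd enabled via chkconfig matters here because HDFS HA and the journalnodes misbehave when node clocks drift.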

V. SSH configuration
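This section is also empty in the source. The usual prerequisite is passwordless SSH from node 1 to every node; a sketch, assuming the hadoop1-hadoop3 hostnames used in later sections. The loop only prints the commands instead of running them:

```shell
NODES="hadoop1 hadoop2 hadoop3"   # assumed hostnames, adjust to your cluster
# One-time, on node 1: generate a passphrase-less key
#   ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cmds=""
for node in $NODES; do
  cmds="$cmds
ssh-copy-id root@$node"
done
echo "$cmds"                      # run each printed command by hand
```

After the keys are in place, the scp steps in section VIII run without password prompts.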

VI. JDK installation (on all three nodes)
1. Download jdk-7u25-linux-x64.rpm
2. Install the JDK; remove any OpenJDK packages first:
rpm -qa |grep openjdk

rpm -e java-1.7.0-openjdk-src
rpm -e java-1.7.0-openjdk-devel
rpm -e java-1.7.0-openjdk-demo
rpm -e java-1.7.0-openjdk-javadoc
rpm -e java-1.7.0-openjdk

rpm -ivh jdk-7u25-linux-x64.rpm
rpm -qa |grep java
java -version
hadoop version
hadoop classpath

3. If Java environment variables were configured in /etc/profile, remove them all.

VII. On the first node
1. Install zookeeper-server and hadoop-mapreduce:
yum list |grep hadoop
yum list |grep zookeeper
yum list |grep cdh

yum install -y zookeeper-server hadoop-mapreduce

rpm -qa |grep cdh
rpm -qa |grep cdh | sort

2. Create the directories under /u01 and set ownership:
cd /u01
mkdir name data journal local
chown hdfs:hdfs name data journal

3. Place the Hadoop configuration files in /etc/hadoop/conf:
cd /etc/hadoop/conf
core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
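The four files are listed above but their contents are not shown. As an illustration only, an hdfs-site.xml HA fragment consistent with journalnodes on all three machines might look like the following; the nameservice name "mycluster", the hadoop1-3 hostnames, and the /u01/journal path mapping are assumptions, not taken from the article:

```xml
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/mycluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/u01/journal</value>
</property>
```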

4. Start hdfs-journalnode:
service hadoop-hdfs-journalnode start

5. Check the running Java processes with jps:
/usr/java/default/bin/jps

6. Check the HDFS log:
cd /var/log/hadoop-hdfs/
cat hadoop-hdfs-journalnode-hadoop1.log

VIII. Copy the configuration files to nodes 2 and 3
scp hdfs-site.xml hadoop2:/etc/hadoop/conf
scp hdfs-site.xml hadoop3:/etc/hadoop/conf
scp core-site.xml hadoop2:/etc/hadoop/conf
scp core-site.xml hadoop3:/etc/hadoop/conf

IX. Start hdfs-journalnode on the second node (the standby NameNode)
cd /u01
mkdir name data journal local
chown hdfs:hdfs name data journal
service hadoop-hdfs-journalnode start

cat /var/log/hadoop-hdfs/hadoop-hdfs-journalnode-hadoop2.log

X. Start hdfs-journalnode on the third node (no name directory needed)
cd /u01
mkdir data journal local
chown hdfs:hdfs data journal
service hadoop-hdfs-journalnode start

cat /var/log/hadoop-hdfs/hadoop-hdfs-journalnode-hadoop3.log

XI. Start hdfs-namenode on the first node
service hadoop-hdfs-namenode start   # if it fails, check the log:
cat /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop1.log

Format the NameNode with the hdfs command:
hdfs namenode -format

# if reformatting, clear the old metadata first:
# cd /u01/name
# rm -rf current

sudo -u hdfs hdfs namenode -format   # run the format as the hdfs user
service hadoop
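To tie section XI back to IX and X, a hedged recap of the bring-up order as this walkthrough sequences it; the commands are collected into a string and printed, not executed:

```shell
# Assumed order, matching the sections above:
bringup='
service hadoop-hdfs-journalnode start   # first, on all three nodes
sudo -u hdfs hdfs namenode -format      # once, on node 1 only
service hadoop-hdfs-namenode start      # then the NameNode on node 1
'
echo "$bringup"
```

The journalnodes must be up before the format, because the NameNode writes its shared edit log to the journalnode quorum.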
