Big Data: Spark Cluster Deployment (Part 3)
2015-11-21
Tags: data, Spark, cluster, deployment
Correction (the previous command was written wrong):
service hadoop-yarn-nodemanager start    # fails while /u01/local has the wrong owner
service hadoop-yarn-nodemanager stop
cd /u01
chown yarn:yarn local                    # the NodeManager local dir must be owned by yarn
service hadoop-yarn-nodemanager start
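
To confirm the fix took effect, a quick check (a sketch; the jps path is the one used later in this guide):
ls -ld /u01/local        # owner should now read yarn:yarn
/usr/lib/java/bin/jps    # NodeManager should appear in the process list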

+++++mapreduce
24. On the first node
cat /var/log/hadoop-yarn/yarn-yarn-nodemanager-hadoop1.log
hadoop fs -mkdir /user/history    # fails as root; switch to the hdfs superuser and retry
su - hdfs
hadoop fs -mkdir /user/history
hadoop fs -chmod -R 1777 /user/history
hadoop fs -chown mapred:hadoop /user/history
hadoop fs -mkdir -p /var/log/hadoop-yarn/app
hadoop fs -chown yarn:mapred /var/log/hadoop-yarn/app
hdfs dfs -ls -R /
hadoop fs -mkdir /tmp
hadoop fs -chmod 1777 /tmp
hdfs dfs -ls -R /
exit
pwd
cd /etc/hadoop/conf
cat yarn-site.xml
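
The HDFS directories created above line up with YARN's log-aggregation settings. A yarn-site.xml fragment along these lines is what you would expect to see (the values are assumptions matching the paths used in this guide, not the author's actual file):
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/var/log/hadoop-yarn/app</value>
</property>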

25. On the third node
yum install hadoop-mapreduce-historyserver
service hadoop-mapreduce-historyserver start
cd /var/log/hadoop-mapreduce
cat mapred-mapred-historyserver-hadoop3.out
vim /etc/hadoop/conf/hdfs-site.xml
vim /etc/hadoop/conf/mapred-site.xml
cat /etc/hadoop/conf/yarn-site.xml
service hadoop-mapreduce-historyserver start    # start again after adjusting the configs
/usr/lib/java/bin/jps
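
Clients only find the history server on the third node if mapred-site.xml points there. A minimal fragment might look like this (the hostname and the default ports are assumptions; the article does not show its actual file):
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>hadoop3:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>hadoop3:19888</value>
</property>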

26. On the first node, upload files to HDFS
su - hdfs
cd /var/log/hadoop-hdfs/
hadoop fs -mkdir /user/hdfs
hadoop fs -mkdir /user/hdfs/input
hadoop fs -put * /user/hdfs/input    # use the HDFS daemon logs in this directory as sample input
hadoop fs -ls -R /user/hdfs

27. On the first node, run a job
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount /user/hdfs/input /user/hdfs/output
http://hadoop3:8088/cluster    # watch the job in the YARN ResourceManager web UI
hadoop fs -ls /user/hdfs/output
hadoop fs -cat /user/hdfs/output/part-r-00000
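
To inspect the result outside HDFS, you can also copy it to the local filesystem (a sketch; the local directory name is arbitrary):
hadoop fs -get /user/hdfs/output ./wordcount-output
head ./wordcount-output/part-r-00000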

+++++spark
28. On the first node
cd spark/
tar -zxvf spark-1.4.1-bin-hadoop2.6.tgz -C /usr/lib
cd /usr/lib/
ln -s spark-1.4.1-bin-hadoop2.6 spark
cd /usr/lib/spark/conf
ls

Upload: scp gyj-spark.zip root@hadoop1:/u01/app/backup
unzip gyj-spark.zip
cat spark-defaults.conf
cat spark-env.sh
cp spark-* /usr/lib/spark/conf
cd /usr/lib/spark/conf
vim spark-defaults.conf
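
The article does not show what the copied spark-defaults.conf contains; a minimal version for running against YARN might look like this (every value below is an assumption, not the author's file):
spark.master             yarn-client
spark.eventLog.enabled   true
# the event-log directory must already exist in HDFS
spark.eventLog.dir       hdfs:///user/spark/applicationHistory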
/usr/lib/spark/bin/spark-shell --master yarn-client
exit
sudo -u hdfs /usr/lib/spark/bin/spark-shell --master yarn-client
cd /usr/lib/spark/conf
cp log4j.properties.template log4j.properties
vim log4j.properties
Change the root logger level from INFO to WARN to quiet the shell output.
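
For reference, Spark's shipped template sets the root logger with log4j.rootCategory=INFO, console, so the edit is the single line:
log4j.rootCategory=WARN, console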

su - hdfs
pwd
ll
vim .bash_profile
export SPARK_HOME=/usr/lib/spark
export JAVA_HOME=/usr/java/default
export PATH=$PATH:$JAVA_HOME/bin:$SPARK_HOME/bin

source .bash_profile
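
A quick sanity check that the profile took effect (sketch):
echo $SPARK_HOME     # should print /usr/lib/spark
which spark-shell    # should resolve to /usr/lib/spark/bin/spark-shell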

/usr/lib/spark/bin/spark-shell --master yarn-client

scala> val rdd = sc.textFile("/user/hdfs/input")
scala> rdd.count
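
The MapReduce word count from section 27 can be reproduced right here using the rdd defined above (a sketch against the Spark 1.x RDD API; the variable name counts is illustrative):
scala> val counts = rdd.flatMap(_.split(" ")).map(w => (w, 1)).reduceByKey(_ + _)
scala> counts.take(10)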
