Building a Cube with Spark in Kylin 2.0
2018-12-29 13:23:55
Tags: kylin2.0, spark, cube building
Copyright notice: this is the blogger's original article and may not be reproduced without permission. https://blog.csdn.net/SONGCHUNHONG/article/details/77877607

Kylin 2.0 introduces a Spark engine for cube building, so Spark can be used in place of MapReduce (MR) when building a cube.

Version pairings:

Kylin 2.0 + Spark 1.6

Kylin 2.1 + Spark 2.1.1

Kylin 2.0.0 + HBase 1.x

Underlying Hadoop-stack dependencies: HDP 2.4 (HDFS, Hive, HBase, YARN).

1. Modify the Hadoop configuration

Set the Hadoop configuration directory in kylin.properties (note: create a new directory and symlink or copy the configuration files of the underlying Hadoop, Hive, HBase, etc. into it):

kylin.env.hadoop-conf-dir=/usr/local/apache-kylin-2.0.0-bin/hadoop-conf

The directory needs core-site.xml, hdfs-site.xml, yarn-site.xml, hive-site.xml, and hbase-site.xml:

mkdir $KYLIN_HOME/hadoop-conf
ln -s /etc/hadoop/conf/core-site.xml $KYLIN_HOME/hadoop-conf/core-site.xml 
ln -s /etc/hadoop/conf/hdfs-site.xml $KYLIN_HOME/hadoop-conf/hdfs-site.xml 
ln -s /etc/hadoop/conf/yarn-site.xml $KYLIN_HOME/hadoop-conf/yarn-site.xml 
ln -s /etc/hbase/2.4.0.0-169/0/hbase-site.xml $KYLIN_HOME/hadoop-conf/hbase-site.xml 
cp /etc/hive/2.4.0.0-169/0/hive-site.xml $KYLIN_HOME/hadoop-conf/hive-site.xml 
vi $KYLIN_HOME/hadoop-conf/hive-site.xml   # change "hive.execution.engine" from "tez" to "mr"
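
To sanity-check the directory before starting Kylin, list the linked files and confirm the engine switch took effect. A minimal sketch (the grep assumes the usual one-property-per-line XML layout):

ls -l $KYLIN_HOME/hadoop-conf/
grep -A1 "hive.execution.engine" $KYLIN_HOME/hadoop-conf/hive-site.xml   # expect <value>mr</value> on the next line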

2. Check the Spark configuration

At runtime, Kylin loads its Spark settings from the properties prefixed with kylin.engine.spark-conf in $KYLIN_HOME/conf/kylin.properties, including:
kylin.engine.spark-conf.spark.master=yarn
kylin.engine.spark-conf.spark.submit.deployMode=cluster
kylin.engine.spark-conf.spark.yarn.queue=default
kylin.engine.spark-conf.spark.executor.memory=1G
kylin.engine.spark-conf.spark.executor.cores=2
kylin.engine.spark-conf.spark.executor.instances=1
kylin.engine.spark-conf.spark.eventLog.enabled=true
kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
#kylin.engine.spark-conf.spark.yarn.jar=hdfs://namenode:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
#kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec

## uncomment for HDP
#kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
#kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
#kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
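
Both spark.eventLog.dir and spark.history.fs.logDirectory above point at hdfs:///kylin/spark-history, which must exist before builds run; it is also cheap to smoke-test the bundled Spark against YARN before wiring it into a cube. A minimal sketch, assuming the Spark distribution under $KYLIN_HOME/spark still ships the stock examples jar (the jar name is an assumption based on Spark 1.6.3):

export HADOOP_CONF_DIR=$KYLIN_HOME/hadoop-conf   # lets spark-submit locate the YARN ResourceManager
hadoop fs -mkdir -p /kylin/spark-history         # event-log directory referenced above
$KYLIN_HOME/spark/bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn --deploy-mode cluster \
    $KYLIN_HOME/spark/lib/spark-examples-1.6.3-hadoop2.6.0.jar 10

If the application reaches the FINISHED/SUCCEEDED state on the YARN UI, the cluster side of the Spark engine is working.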

Upload the Spark assembly jar to HDFS:
hadoop fs -mkdir -p /kylin/spark/
hadoop fs -put $KYLIN_HOME/spark/lib/spark-assembly-1.6.3-hadoop2.6.0.jar /kylin/spark/
Once the jar is uploaded, set the Spark engine options shown commented out above:
kylin.engine.spark-conf.spark.yarn.jar=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
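
To confirm the upload matches what spark.yarn.jar points to (replace sandbox.hortonworks.com with your own NameNode host), a quick check:

hadoop fs -ls /kylin/spark/   # should list spark-assembly-1.6.3-hadoop2.6.0.jar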
When creating a cube, you can now select Spark as the Cube Engine.
This completes the configuration; cubes can now be built with the Spark engine.
For troubleshooting, look in $KYLIN_HOME/logs/kylin.log.
Note:
Use MR when the cube contains more than 12 dimensions, or has measures such as COUNT DISTINCT or TOP N.
Use Spark when the cube model is relatively simple: all measures are just SUM/MIN/MAX/COUNT and the source data is of moderate size.
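
The kylin.properties values above are instance-wide defaults. If a single Spark-built cube needs more resources, the same kylin.engine.spark-conf.* properties can also be set per cube via Configuration Overwrites in the cube designer; a minimal sketch with purely illustrative values:

kylin.engine.spark-conf.spark.executor.memory=4G
kylin.engine.spark-conf.spark.executor.instances=4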

