Using SparkSQL: Thrift JDBC Server (Part 1)
Tags: SparkSQL, Thrift JDBC Server

Thrift JDBC Server Overview


The Thrift JDBC Server is based on the HiveServer2 implementation from Hive 0.12. You can interact with it over JDBC using the beeline script that ships with Spark or with Hive 0.12. By default the Thrift JDBC Server listens on port 10000.


Before using the Thrift JDBC Server, note the following:


1. Copy the hive-site.xml configuration file into $SPARK_HOME/conf;


2. Add the JDBC driver jar to SPARK_CLASSPATH in $SPARK_HOME/conf/spark-env.sh (see the sketch below).
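
A minimal sketch of these two steps, assuming a MySQL-backed Hive metastore and a driver jar at /usr/local/lib/mysql-connector-java-5.1.27.jar (both are illustrative, not values from this article):

# Copy the Hive configuration into Spark's conf directory
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/

# Append the (assumed) JDBC driver jar to SPARK_CLASSPATH in spark-env.sh
echo 'export SPARK_CLASSPATH=$SPARK_CLASSPATH:/usr/local/lib/mysql-connector-java-5.1.27.jar' >> $SPARK_HOME/conf/spark-env.sh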


Usage help for the Thrift JDBC Server command:


cd $SPARK_HOME/sbin
./start-thriftserver.sh --help



Usage: ./sbin/start-thriftserver [options] [thrift server options]
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor.

  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 512M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.

  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).

  --help, -h                  Show this help message and exit
  --verbose, -v               Print additional debug output

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM          Cores for driver (Default: 1).
  --supervise                 If given, restarts the driver on failure.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 YARN-only:
  --executor-cores NUM        Number of cores per executor (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.

Thrift server options:
  --hiveconf                  Use value for given property

The --master option is described in the same way as for the Spark SQL CLI.
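
For example, a typical launch on a standalone cluster might look like the following (the master URL is an assumed placeholder, and hive.server2.thrift.port simply restates the default of 10000):

cd $SPARK_HOME/sbin
./start-thriftserver.sh \
  --master spark://master-host:7077 \
  --hiveconf hive.server2.thrift.port=10000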


Usage help for the beeline command:


cd $SPARK_HOME/bin
./beeline --help


Usage: java org.apache.hive.cli.beeline.BeeLine
   -u <database url>               the JDBC URL to connect to
   -n <username>                   the username to connect as
   -p <password>                   the password to connect as
   -d <driver class>               the driver class to use
   -e <query>                      query that should be executed
   -f <file>                       script file that should be executed
   --color=[true/false]            control whether color is used for display
   --showHeader=[true/false]       show column names in query results
   --headerInterval=ROWS;          the interval between which heades are displayed
   --fastConnect=[true/false]      skip building table/column list for tab-completion
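
Once the server is running, a connection from beeline might look like this (localhost, the default port 10000, and the user name spark are assumed for illustration):

cd $SPARK_HOME/bin
./beeline -u jdbc:hive2://localhost:10000 -n spark

At the beeline prompt you can then issue SQL statements such as show tables;.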
