Integrating HUE with HDFS and MapReduce
2019-04-14 12:18:57
Tags: HUE, HDFS integration
Copyright © https://blog.csdn.net/huoliangwu/article/details/84591893

HUE (Hadoop User Experience) is an open-source Hadoop UI system built on a Python web framework. Through HUE's browser-based web console you can interact with a Hadoop cluster to analyze and process data.
Official download page: http://gethue.com/category/release/

Environment and software
OS: CentOS 6.5, three nodes running a Hadoop cluster
Software: hue-3.7.0-cdh5.3.6.tar.gz

Role layout across the three nodes:

mini01: NameNode, DataNode, ResourceManager, NodeManager, HUE
mini02: DataNode, JobHistoryServer, NodeManager
mini03: SecondaryNameNode, DataNode, NodeManager

1. Prepare the dependencies

The required build dependencies are collected below and can be installed directly with yum:

[hadoop@mini01 ~]$  sudo yum install -y ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel

2. Extract HUE

[hadoop@mini01 tools]$ tar -zxvf hue-3.7.0-cdh5.3.6.tar.gz -C ../install

3. Build HUE

[hadoop@mini01 tools]$ cd ../install/hue-3.7.0-cdh5.3.6/

[hadoop@mini01 hue-3.7.0-cdh5.3.6]$ make apps

4. Configure HUE

Edit the hue.ini file
Path: /home/hadoop/install/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini
Change the following entries:

    secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o
    http_host=mini01
    http_port=8888
    time_zone=Asia/Shanghai
    # Webserver runs as this user
    server_user=hue
    server_group=hue

Configure the HDFS integration as follows:

[[hdfs_clusters]]
    # HA support by using HttpFs
    # If HDFS is configured for high availability (HA), HttpFS must be used

    [[[default]]]
      # No HA here, so the NameNode RPC port is 9000; with HA it would be 8020
      # Enter the filesystem uri
      fs_defaultfs=hdfs://mini01:9000

      # NameNode logical name.
      ## logical_name=

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      ## webhdfs_url=http://localhost:50070/webhdfs/v1
      # If HA is configured, use the HttpFS port 14000 here instead
      webhdfs_url=http://mini01:50070/webhdfs/v1
      # Change this if your HDFS cluster is Kerberos-secured
      ##security_enabled=false

      # Default umask for file and directory creation, specified in an octal value.
      ## umask=022

      # Directory of the Hadoop configuration
      ## hadoop_conf_dir=$HADOOP_CONF_DIR when set or '/etc/hadoop/conf'
      # Paths to the Hadoop configuration and installation directories
      hadoop_conf_dir=/home/hadoop/install/hadoop-2.5.0-cdh5.3.6/etc/hadoop
      hadoop_hdfs_home=/home/hadoop/install/hadoop-2.5.0-cdh5.3.6
      hadoop_bin=/home/hadoop/install/hadoop-2.5.0-cdh5.3.6/bin
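Before pointing Hue at HDFS it is worth confirming that WebHDFS actually answers on the URL configured above. A minimal probe, assuming the host and port from the config; the user.name parameter (hadoop here) is an assumption about which user owns your cluster:

```shell
# WebHDFS endpoint exactly as configured in hue.ini above
WEBHDFS_URL=http://mini01:50070/webhdfs/v1

# List the HDFS root; a JSON FileStatuses response means WebHDFS is reachable.
# user.name=hadoop is an assumption -- substitute your cluster user.
curl -s "${WEBHDFS_URL}/?op=LISTSTATUS&user.name=hadoop" || true
```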

Configure the YARN integration as follows:

 [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      ## resourcemanager_host=localhost
      resourcemanager_host=mini01
      # The port where the ResourceManager IPC listens on
      ## resourcemanager_port=8032
      resourcemanager_port=8032
      # Whether to submit jobs to this cluster
      submit_to=True

      # Resource Manager logical name (required for HA)
      ## logical_name=

      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false

      # URL of the ResourceManager API
      ## resourcemanager_api_url=http://localhost:8088
      resourcemanager_api_url=http://mini01:8088
      # URL of the ProxyServer API
      ## proxy_api_url=http://localhost:8088

      # URL of the HistoryServer API
      ## history_server_api_url=http://localhost:19888
      # JobHistoryServer address
      history_server_api_url=http://mini02:19888
      # In secure mode (HTTPS), if SSL certificates from Resource Manager's
      # Rest Server have to be verified against certificate authority
      ## ssl_cert_ca_verify=False

    # HA support by specifying multiple clusters
    # e.g.
    # HA (high availability) configuration
    # [[[ha]]]
      # Resource Manager logical name (required for HA)
      ## logical_name=my-rm-name
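The ResourceManager and JobHistoryServer web services configured above can also be probed before starting Hue. A sketch with hosts and ports taken from the configuration; both services expose standard Hadoop REST endpoints:

```shell
RM_API=http://mini01:8088        # resourcemanager_api_url from above
JHS_API=http://mini02:19888      # history_server_api_url from above

# Cluster metadata as JSON; any response means the RM web service is up.
curl -s "${RM_API}/ws/v1/cluster/info" || true
# The history server exposes an analogous endpoint:
curl -s "${JHS_API}/ws/v1/history/info" || true
```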

5. Configure Hadoop's configuration files

5.1 core-site.xml

        <property>
                <!-- Allow the hue user to act as a proxy user from any host -->
                <name>hadoop.proxyuser.hue.hosts</name>
                <value>*</value>
        </property>

        <property>
                <name>hadoop.proxyuser.hue.groups</name>
                <value>*</value>
        </property>

5.2 hdfs-site.xml

<!-- Enable WebHDFS (REST API) in NameNodes and DataNodes -->
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<!-- Disable permission checking -->
<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>

5.3 httpfs-site.xml

<property>
    <name>httpfs.proxyuser.hue.hosts</name>
    <value>*</value>
</property>
<property>
    <name>httpfs.proxyuser.hue.groups</name>
    <value>*</value>
</property>

The two properties above are required when the HUE service does not run on the same node as the Hadoop services.
Note:
* If NameNode HA is not configured, HUE can manage HDFS via WebHDFS
* If NameNode HA is configured, HUE can only manage HDFS via HttpFS

5.4 Distribute the Hadoop configuration files

xsync install/hadoop-2.5.0-cdh5.3.6/etc/hadoop

xsync is a small rsync-based distribution script; its code is as follows:

#!/bin/bash
# 1. Get the number of arguments; exit immediately if there are none
pcount=$#
if ((pcount==0)); then
    echo no args
    exit
fi

# 2. Get the file name
p1=$1
fname=`basename $p1`
echo fname=$fname

# 3. Resolve the parent directory to an absolute path
pdir=`cd -P $(dirname $p1); pwd`
echo pdir=$pdir

# 4. Get the current user name
user=`whoami`

# 5. Loop over the hosts and sync
for ((host=1; host<4; host++)); do
    echo --------------- mini0$host ----------------
    rsync -rvl $pdir/$fname $user@mini0$host:$pdir
done
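The slightly cryptic step 3 of the script (resolving the target's parent directory to an absolute path, following symlinks via `cd -P`) can be seen in isolation. A self-contained sketch using a made-up throwaway path:

```shell
# Re-create xsync's name/path handling on a throwaway file
workdir=$(mktemp -d)
mkdir -p "$workdir/some/dir"
touch "$workdir/some/dir/file.txt"

p1=$workdir/some/dir/file.txt
fname=$(basename "$p1")                  # -> file.txt
pdir=$(cd -P "$(dirname "$p1")"; pwd)    # -> absolute path, symlinks resolved

echo "fname=$fname"
echo "pdir=$pdir"
```

Because pdir is always absolute, the `rsync ... $user@mini0$host:$pdir` call writes the file to the identical location on every target host, regardless of where xsync was invoked from.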

5.5 Start the HttpFS service

[hadoop@mini01 install]$ ~/install/hadoop-2.5.0-cdh5.3.6/sbin/httpfs.sh start
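Once HttpFS is up it speaks the same WebHDFS REST dialect as the NameNode, only on its own default port 14000. A quick probe sketch; the host and the user.name value are assumptions about this cluster:

```shell
HTTPFS_URL=http://mini01:14000/webhdfs/v1   # 14000 is the default HttpFS port

# A JSON FileStatuses response here means HttpFS is serving requests.
curl -s "${HTTPFS_URL}/?op=LISTSTATUS&user.name=hadoop" || true
```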

6. Test

6.1 Start HDFS

[hadoop@mini01 install]$ start-dfs.sh

6.2 Start YARN

[hadoop@mini01 install]$ start-yarn.sh

6.3 Start the HUE service

[hadoop@mini01 install]$ ~/install/hue-3.7.0-cdh5.3.6/build/env/bin/supervisor
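With the supervisor running, a quick liveness check of the Hue web server (host and port from hue.ini above; the exact status code returned may vary by version):

```shell
HUE_URL=http://mini01:8888   # http_host / http_port from hue.ini

# Print only the HTTP status code; 200 or a redirect to the login page
# means the web server is up.
curl -s -o /dev/null -w '%{http_code}\n' "$HUE_URL/" || true
```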

7. Result

[hadoop@mini02 sbin]$ xcall.sh jps
============= mini01 jps =============
1344 DataNode
1602 ResourceManager
1250 NameNode
1701 NodeManager
2045 Jps
============= mini02 jps =============
1635 NodeManager
1848 Jps
1530 DataNode
============= mini03 jps =============
1302 NodeManager
1448 Jps
1197 SecondaryNameNode
1134 DataNode
[hadoop@mini02 sbin]$ 

When starting the HUE service, the following output indicates a successful start:

[INFO] Not running as root, skipping privilege drop
starting server with options {'ssl_certificate': None, 'workdir': None, 'server_name': 'localhost', 'host': '192.168.13.128', 'daemonize': False, 'threads': 10, 'pidfile': None, 'ssl_private_key': None, 'server_group': 'hue', 'ssl_cipher_list': 'DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2', 'port': 8888, 'server_user': 'hue'}

To test, open the web UI at mini01:8888.
On first login you must create an account; here we create user admin with password admin.

After logging in successfully, choose File Browser in the upper-right corner to manage files on HDFS: you can upload, download, view file contents, and more.
YARN jobs are managed under Job Browser.
