Spark learning: Hadoop installation and startup
Copyright notice: this is an original article by the author and may not be reproduced without permission. https://blog.csdn.net/xfks55/article/details/80783375

Article links

1. Spark learning: Hadoop installation and startup
2. Spark learning: Spark installation and startup


Pre-installation preparation

1. First, prepare three servers: one master and two slaves.

172.18.101.157 spark-master
172.18.101.162 spark-slave1
172.18.132.162 spark-slave2

2. Set up passwordless SSH login

1. Generate the private/public key pair

[root@spark-master data]# ssh-keygen -t rsa

Press Enter at every prompt; the key pair is generated at the end:

Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:xmlOlzYrf0obAn5T5rN+nI39yf+lXTibaETnu72yEN8 root@spark-master
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|                 |
|       . . .. .  |
|      . S Bo o   |
|     . * * o+ o. |
|      . * *+ *oEo|
|       . * =Oo=B+|
|         .*=..B*X|
+----[SHA256]-----+
[root@spark-master data]# 

2. Copy the public key to the slave machines

[root@spark-master data]# scp /root/.ssh/id_rsa.pub root@172.18.101.162:/data/
[root@spark-master data]# scp /root/.ssh/id_rsa.pub root@172.18.132.162:/data/

Log in to each slave machine and append the public key to the authorized_keys file (do the same on spark-slave2):

[root@spark-slave1 ~]# cd /data/
[root@spark-slave1 data]# cat id_rsa.pub >> /root/.ssh/authorized_keys
# Back on the master, test whether the configuration works
# Test slave1
[root@spark-master ~]# ssh spark-slave1
Last login: Fri Jun 22 20:07:35 2018 from 172.18.101.157
Welcome to Alibaba Cloud Elastic Compute Service !
# Test slave2
[root@spark-master ~]# ssh spark-slave2
Last login: Fri Jun 22 20:07:31 2018 from 172.18.101.157
Welcome to Alibaba Cloud Elastic Compute Service !
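
If ssh-copy-id is available on the master, the manual scp/cat steps above can be replaced with it; a minimal sketch (it asks for each slave's root password once):

# ssh-copy-id appends the public key to authorized_keys on the target and fixes its permissions
ssh-copy-id root@spark-slave1
ssh-copy-id root@spark-slave2
# Verify: these should now print the host name without prompting for a password
ssh spark-slave1 hostname
ssh spark-slave2 hostname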

3. Install the JDK

vim /etc/profile
Add the following environment variables:

export JAVA_HOME=/opt/jdk8
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH

Apply the changes:

source /etc/profile
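
To confirm that the variables took effect, a quick check (this assumes the JDK really is installed at /opt/jdk8 on the machine):

# Both should reflect the JDK configured above
echo $JAVA_HOME
java -version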

Installing Hadoop

1. Configure host name resolution

Change /etc/hosts on all nodes so that it contains the following:
vim /etc/hosts

127.0.0.1 localhost
172.18.101.157 spark-master
172.18.101.162 spark-slave1
172.18.132.162 spark-slave2
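
A quick way to confirm the names resolve correctly on each node (-c 1 sends a single probe):

ping -c 1 spark-slave1
ping -c 1 spark-slave2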

2. Set the Hadoop environment variables

export HADOOP_HOME=/usr/local/hadoop-2.7.6
export PATH=$PATH:$HADOOP_HOME/bin
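
This assumes Hadoop 2.7.6 has already been unpacked to /usr/local/hadoop-2.7.6 on the master. After re-sourcing /etc/profile, the hadoop command should be found:

source /etc/profile
hadoop version   # should report Hadoop 2.7.6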

3. Edit the configuration files

Five configuration files need to be modified, all located in the etc/hadoop directory under the installation path:
core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, and the slaves file.

core-site.xml

<configuration>
    <!-- RPC address of the HDFS master (namenode) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://spark-master:9000</value>
    </property>
    <!-- Storage path for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>file:/home/hadoop/hdfs/name</value>
        <description>Where the namenode stores the HDFS namespace metadata</description>
    </property>

    <property>
        <name>dfs.data.dir</name>
        <value>file:/home/hadoop/hdfs/data</value>
        <description>Physical storage location of data blocks on the datanodes</description>
    </property>

    <!-- HDFS replication factor -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
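
The paths referenced above (hadoop.tmp.dir, dfs.name.dir and dfs.data.dir) are not created anywhere else in these steps; pre-creating them on every node avoids permission surprises later, for example:

# Run on the master and on both slaves
mkdir -p /home/hadoop/tmp /home/hadoop/hdfs/name /home/hadoop/hdfs/data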

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- Job history server -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>spark-master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>spark-master:19888</value>
    </property>
</configuration>
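
Note that start-all.sh (used below) does not start the job history server these two properties point to. If you need it, it is started separately on the master; a sketch for Hadoop 2.x:

# Starts the JobHistoryServer, which then listens on ports 10020 and 19888 as configured above
/usr/local/hadoop-2.7.6/sbin/mr-jobhistory-daemon.sh start historyserver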

yarn-site.xml

<configuration>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Log server URL, used by the worker nodes -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://spark-master:19888/jobhistory/logs/</value>
    </property>
    <!-- Log retention time, in seconds -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>86400</value>
    </property>

    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>spark-master:8099</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>spark-master</value>
    </property>
</configuration>

The slaves file lists the worker nodes:

spark-slave1
spark-slave2

4. Copy the Hadoop installation to the slave nodes

tar -cvf hadoop.tar hadoop-2.7.6/
[root@spark-master local]# scp hadoop.tar root@spark-slave1:/usr/local/
[root@spark-master local]# scp hadoop.tar root@spark-slave2:/usr/local/

Log in to each of the two slaves and extract the archive in place:

[root@spark-slave1 local]# tar xvf hadoop.tar

5. Format the namenode

hdfs namenode -format 
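
The format output should end with a message saying the storage directory has been successfully formatted; afterwards the metadata directory configured in hdfs-site.xml should exist:

# Should contain VERSION, fsimage_* and seen_txid files
ls /home/hadoop/hdfs/name/current/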

6. Start the services

[root@spark-master hadoop-2.7.6]# sbin/start-all.sh

spark-master: starting namenode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-namenode-spark-master.out
spark-slave2: starting datanode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-datanode-spark-slave2.out
spark-slave1: starting datanode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-datanode-spark-slave1.out
spark-slave2: /usr/local/hadoop-2.7.6/bin/hdfs: line 304: /opt/jdk1.8.0_144/bin/java: No such file or directory
spark-slave1: /usr/local/hadoop-2.7.6/bin/hdfs: line 304: /opt/jdk1.8.0_144/bin/java: No such file or directory
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-secondarynamenode-spark-master.out
starting yarn daemons

The log above shows that the JDK path does not exist on the slave machines. This happens because the JDK is installed at a different path on the slaves than on the master. Go to each slave and fix it; the configuration file is etc/hadoop/hadoop-env.sh:

# The java implementation to use. Change this to the JDK path that actually exists on the machine in question
export JAVA_HOME=/opt/jdk1.8.0_144
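
To push the same kind of fix to both slaves from the master without logging in one by one, a sketch (replace /opt/jdk8 with whatever path actually holds the JDK on the slaves):

for host in spark-slave1 spark-slave2; do
  ssh root@$host "sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/opt/jdk8|' \
      /usr/local/hadoop-2.7.6/etc/hadoop/hadoop-env.sh"
done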

Start again:

[root@spark-master hadoop-2.7.6]# sbin/start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [spark-master]
spark-master: starting namenode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-namenode-spark-master.out
spark-slave1: starting datanode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-datanode-spark-slave1.out
spark-slave2: starting datanode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-datanode-spark-slave2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-secondarynamenode-spark-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.7.6/logs/yarn-root-resourcemanager-spark-master.out
spark-slave1: starting nodemanager, logging to /usr/local/hadoop-2.7.6/logs/yarn-root-nodemanager-spark-slave1.out
spark-slave2: starting nodemanager, logging to /usr/local/hadoop-2.7.6/logs/yarn-root-nodemanager-spark-slave2.out
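
Besides the startup log, the web UIs are a quick health check: the NameNode UI listens on its default Hadoop 2.x port 50070, and the ResourceManager UI on port 8099 as configured in yarn-site.xml above:

# Both should respond (a 200 page or a redirect to the dashboard)
curl -sI http://spark-master:50070 | head -n 1
curl -sI http://spark-master:8099 | head -n 1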

Checking the running processes

After startup completes, you can check the running processes with the jps command.

The processes that should be running on the master:

[root@spark-master hadoop-2.7.6]# jps
7360 NameNode
7778 ResourceManager
7592 SecondaryNameNode

The processes that should be running on each slave:

[root@spark-slave1 ~]# jps
17061 NodeManager
16951 DataNode
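
To check every node in one pass from the master, a small loop (this assumes jps is on the PATH for non-interactive SSH sessions; otherwise use the full path, e.g. $JAVA_HOME/bin/jps):

for host in spark-master spark-slave1 spark-slave2; do
  echo "== $host =="
  ssh root@$host jps
done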

With that, Hadoop is installed successfully.

HDFS commands

Now that Hadoop is installed, we can use the hdfs commands to create and list directories:
[root@spark-master ~]# hadoop fs -mkdir /test
[root@spark-master ~]# hadoop fs -ls /
Found 2 items
drwxr-xr-x - root supergroup 0 2018-06-23 15:11 /test
drwxr-xr-x - root supergroup 0 2018-06-23 15:08 /usr

Common commands (a short end-to-end example follows the list):

  1. hadoop fs -ls path                      // list the files in a directory
  2. hadoop fs -mkdir path                   // create a directory
  3. hadoop fs -rm path                      // delete a file (add -r to delete a directory recursively)
  4. hadoop fs -rmdir path                   // delete an empty directory
  5. hadoop fs -put localfile path           // upload a local file to the given HDFS directory
  6. hadoop fs -cat filename                 // print the contents of a file
  7. hadoop fs -get HDFSfile localfilename   // download an HDFS file to the local machine as localfilename
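
A short end-to-end example tying the commands above together (test.txt is just a hypothetical local file):

# Upload a local file, read it back from HDFS, download it, then clean up
echo "hello hdfs" > test.txt
hadoop fs -put test.txt /test/
hadoop fs -cat /test/test.txt
hadoop fs -get /test/test.txt downloaded.txt
hadoop fs -rm /test/test.txt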