
Installing HDFS in Standalone (Single-Node) Mode
Tags: hdfs, standalone, installation

I. Prepare the Machine

Machine No.    Address        Ports

1              10.211.55.8    9000, 50070, 8088

II. Installation

Course link: http://www.roncoo.com/course/view/5a057438cc2a4231a8c245695faea238

1. Install the Java environment

Append the following to /etc/profile:

export JAVA_HOME=/data/program/software/java8

export JRE_HOME=/data/program/software/java8/jre

export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib

export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Apply the changes: source /etc/profile
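A quick sanity check that the JDK is visible (assuming it really is unpacked at /data/program/software/java8):

java -version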

2. Map the hostname

vi /etc/hosts

Add the line: 10.211.55.8 bigdata2
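To confirm the mapping resolves (a quick check, not part of the original steps):

ping -c 1 bigdata2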

3. Turn off the firewall

Stop it: service iptables stop

Disable it permanently: chkconfig iptables off

Check its status: service iptables status
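These are CentOS 6-style commands. If the host runs CentOS 7 or another systemd distribution (an assumption; adjust to your OS), the equivalents are:

systemctl stop firewalld

systemctl disable firewalld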

4. Add the hadoop user and group

Create the group: groupadd hadoop

Create the hadoop user and put it in the hadoop group: useradd -g hadoop hadoop

Set its password: passwd hadoop
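Verify the account and its group (an extra check, not in the original write-up):

id hadoop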

5. Download and install Hadoop

cd /data/program/software

wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz

Extract it: tar -zxf hadoop-2.8.1.tar.gz

Give the hadoop user ownership of hadoop-2.8.1: chown -R hadoop:hadoop hadoop-2.8.1

6. Create the data directories

mkdir -p /data/dfs/name

mkdir -p /data/dfs/data

mkdir -p /data/tmp

Give the hadoop user ownership of /data: chown -R hadoop:hadoop /data
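Equivalently, the three mkdir calls collapse into one line with brace expansion (purely a shell convenience, same result):

mkdir -p /data/dfs/{name,data} /data/tmp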

7. Configure etc/hadoop/core-site.xml

cd /data/program/software/hadoop-2.8.1

<configuration>

<property>

<name>fs.defaultFS</name>

<value>hdfs://bigdata2:9000</value>

</property>

<property>

<name>io.file.buffer.size</name>

<value>131072</value>

</property>

<property>

<name>hadoop.tmp.dir</name>

<value>file:/data/tmp</value>

<description>Abase for other temporary directories.</description>

</property>

<property>

<name>hadoop.proxyuser.hadoop.hosts</name>

<value>*</value>

</property>

<property>

<name>hadoop.proxyuser.hadoop.groups</name>

<value>*</value>

</property>

</configuration>
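fs.defaultFS is the address clients use to reach the NameNode, which is why it points at the bigdata2:9000 mapping from step 2. Once the file is saved, you can confirm the value Hadoop actually parses (a verification step, not in the original article):

bin/hdfs getconf -confKey fs.defaultFS

Expected output: hdfs://bigdata2:9000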

8. Configure etc/hadoop/hdfs-site.xml

<configuration>

<property>

<name>dfs.namenode.name.dir</name>

<value>file:/data/dfs/name</value>

<description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.</description>

<final>true</final>

</property>

<property>

<name>dfs.datanode.data.dir</name>

<value>file:/data/dfs/data</value>

<description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>

<final>true</final>

</property>

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

<property>

<name>dfs.permissions</name>

<value>false</value>

</property>

</configuration>

9. Configure etc/hadoop/mapred-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>

</configuration>
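Note: the stock hadoop-2.8.1 tarball ships this file only as a template; if etc/hadoop/mapred-site.xml does not exist yet, copy it into place first:

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml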

10. Configure etc/hadoop/yarn-site.xml

<configuration>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

</property>

</configuration>

11. Configure etc/hadoop/slaves with the hostname of the single node:

bigdata2

12. Set the Hadoop environment variables

vi /etc/profile

HADOOP_HOME=/data/program/software/hadoop-2.8.1

PATH=$HADOOP_HOME/bin:$PATH

export HADOOP_HOME PATH

export HADOOP_MAPRED_HOME=$HADOOP_HOME

export HADOOP_COMMON_HOME=$HADOOP_HOME

export HADOOP_HDFS_HOME=$HADOOP_HOME

export YARN_HOME=$HADOOP_HOME

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
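After appending these lines, reload the profile and confirm the hadoop binary resolves (a quick check, not in the original write-up):

source /etc/profile

hadoop version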

13. Passwordless SSH setup

Switch to the hadoop user: su hadoop

Typing cd alone drops you into the home directory /home/hadoop: cd

Create the .ssh directory: mkdir .ssh

Generate a key pair (press Enter through all prompts): ssh-keygen -t rsa

Enter the .ssh directory: cd .ssh

Copy the public key into the authorized list: cp id_rsa.pub authorized_keys

Go back to the home directory: cd ..

Give .ssh 700 permissions: chmod 700 .ssh

Give the files inside .ssh 600 permissions: chmod 600 .ssh/*

Test the passwordless login (it should not prompt for a password): ssh bigdata2
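The start scripts also ssh to localhost and 0.0.0.0 (for the secondary namenode), so accept those host keys once as well; otherwise you hit the "Host key verification failed" prompts shown in the log at the end of this article:

ssh localhost

ssh 0.0.0.0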

14. Run Hadoop

Format the namenode first: bin/hadoop namenode -format

To bring everything up in one shot, start all the services: sbin/start-all.sh (deprecated in 2.x in favor of start-dfs.sh plus start-yarn.sh, as the log below notes, but it still works)

Check which services started: jps

HDFS web UI: http://10.211.55.8:50070

Running Hadoop tasks: http://10.211.55.8:8088/cluster/nodes
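On a healthy single-node start, jps should report roughly the following processes (PIDs will vary):

NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager
Jps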

15. Test

Create a directory: bin/hadoop fs -mkdir /test

Create a txt file and put it under /test: bin/hadoop fs -put /home/hadoop/first.txt /test

List the files in the directory: bin/hadoop fs -ls /test
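To read the file back and confirm the round trip (an extra check, not in the original):

bin/hadoop fs -cat /test/first.txt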

If the following errors appear during startup, change JAVA_HOME in /data/program/software/hadoop-2.8.1/etc/hadoop/hadoop-env.sh to the absolute path.

[hadoop@bigdata2 hadoop-2.8.1]$ sbin/start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh

17/07/25 13:52:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>

17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>

17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>

17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>

Starting namenodes on [bigdata2]

bigdata2: Error: JAVA_HOME is not set and could not be found.

The authenticity of host 'localhost (::1)' can't be established.

RSA key fingerprint is 24:e2:40:a1:fd:ac:68:46:fb:6b:6b:ac:94:ac:05:e3.

Are you sure you want to continue connecting (yes/no) bigdata2: Error: JAVA_HOME is not set and could not be found.

^Clocalhost: Host key verification failed.

Starting secondary namenodes [0.0.0.0]

0.0.0.0: Error: JAVA_HOME is not set and could not be found.
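The fix: open etc/hadoop/hadoop-env.sh and replace the default export JAVA_HOME=${JAVA_HOME} with the literal path from step 1:

export JAVA_HOME=/data/program/software/java8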

