1. Environment
Fully distributed HBase installation across two nodes:
192.168.1.67  MasterServer (HBase master, and optionally also a regionserver)
192.168.1.241 SlaveServer  (HBase regionserver)
2. Prerequisites
Install Hadoop; see: http://blog.csdn.net/hwwn2009/article/details/39889465
Install ZooKeeper, since a standalone ZooKeeper ensemble will be used; see: http://blog.csdn.net/hwwn2009/article/details/40000881
3. Installing HBase
1) Download. Pick a release compatible with your Hadoop version; a stable release is the safer choice.
wget http://mirrors.hust.edu.cn/apache/hbase/hbase-0.98.5/hbase-0.98.5-hadoop2-bin.tar.gz
2) Extract
tar -zxvf hbase-0.98.5-hadoop2-bin.tar.gz
3) Edit conf/hbase-site.xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://MasterServer:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>MasterServer:60000</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>MasterServer,SlaveServer</value>
</property>
(Note: hbase.master takes host:port, not an hdfs:// URL; the hdfs:// scheme belongs only on hbase.rootdir.)
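The snippet above shows only the <property> elements; the file itself also needs the standard <configuration> wrapper. A minimal sketch that writes the complete file (written to /tmp here so no real install is touched; hostnames are the ones from this setup):

```shell
# Write a complete hbase-site.xml, including the <configuration>
# wrapper the bare <property> snippets must be placed inside.
cat > /tmp/hbase-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://MasterServer:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>MasterServer:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>MasterServer,SlaveServer</value>
  </property>
</configuration>
EOF
# Sanity check: the file should define four properties.
grep -c '<property>' /tmp/hbase-site.xml   # prints 4
```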
4) Edit conf/regionservers
MasterServer
SlaveServer
Note: if you do not want MasterServer to run an HRegionServer, remove the MasterServer line.
5) Edit conf/hbase-env.sh
export JAVA_HOME=/usr/lib/jvm/jdk1.6/jdk1.6.0_27
export HBASE_MANAGES_ZK=false   # use the standalone ZooKeeper ensemble, not the one bundled with HBase
export HBASE_HOME=/home/hadooper/hadoop/hbase-0.98.5
export HADOOP_HOME=/home/hadooper/hadoop/hadoop-2.5.1
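A typo in any of these paths tends to fail silently at startup. A small pre-flight check sketch (the paths are the ones assumed above and will differ per machine):

```shell
# Pre-flight check: report any variable that is unset or points at a
# missing directory. Paths follow the setup above; adjust per machine.
export JAVA_HOME=/usr/lib/jvm/jdk1.6/jdk1.6.0_27
export HBASE_HOME=/home/hadooper/hadoop/hbase-0.98.5
export HADOOP_HOME=/home/hadooper/hadoop/hadoop-2.5.1
for v in JAVA_HOME HBASE_HOME HADOOP_HOME; do
  val=$(eval echo "\$$v")
  if [ -z "$val" ]; then
    echo "$v is not set"
  elif [ ! -d "$val" ]; then
    echo "warning: $v=$val does not exist"
  else
    echo "$v=$val ok"
  fi
done
```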
4. Copy the configured HBase to the other nodes
scp -r hbase-0.98.5 hadooper@SlaveServer:~/hadoop/
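With more region servers, the copy is easier as a loop. A dry-run sketch that only prints the command it would run for each host (the SLAVES list and paths are the assumptions from this setup; drop the `echo` once they are confirmed):

```shell
# Print (rather than run) the scp command for each slave node.
SLAVES="SlaveServer"          # space-separated list of region server hosts
SRC=~/hadoop/hbase-0.98.5     # the configured HBase directory
for host in $SLAVES; do
  echo scp -r "$SRC" "hadooper@$host:~/hadoop/"
done
```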
5. Edit hdfs-site.xml on every Hadoop node
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
Note: this parameter limits how many send and receive tasks a datanode may execute concurrently; the default of 256 is easily exhausted under HBase load.
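In Hadoop 2.x this setting was renamed to dfs.datanode.max.transfer.threads (the old, misspelled name is still honored as a deprecated alias); the same change under the current name:

```xml
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
</property>
```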
6. Testing
Start order: Hadoop -> ZooKeeper -> HBase. Stop order: HBase -> ZooKeeper -> Hadoop.
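The ordering above can be captured in a small wrapper. A dry-run sketch that echoes each step in the required order (the script paths assume a typical layout; replace the `echo` with the real invocations on the cluster):

```shell
# Echo the commands in dependency order; shutdown is the exact reverse.
start_cluster() {
  echo "$HADOOP_HOME/sbin/start-dfs.sh"         # 1. Hadoop first
  echo "$ZOOKEEPER_HOME/bin/zkServer.sh start"  # 2. then ZooKeeper (on each quorum node)
  echo "$HBASE_HOME/bin/start-hbase.sh"         # 3. HBase last
}
stop_cluster() {
  echo "$HBASE_HOME/bin/stop-hbase.sh"          # reverse order to stop
  echo "$ZOOKEEPER_HOME/bin/zkServer.sh stop"
  echo "$HADOOP_HOME/sbin/stop-dfs.sh"
}
start_cluster
stop_cluster
```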
1) Start HBase
bin/start-hbase.sh
2) Check the processes with jps
① On the master node, MasterServer:
8428 JobHistoryServer
4048 QuorumPeerMain
18234 Jps
15482 HMaster
30357 NameNode
15632 HRegionServer
30717 ResourceManager
30563 SecondaryNameNode
② On the slave node, SlaveServer:
9340 QuorumPeerMain
11991 HRegionServer
19375 DataNode
13706 Jps
19491 NodeManager
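Checking the jps output by eye is error-prone. A sketch that verifies the expected daemons appear, run here against a captured sample of the master output above (on a live node, replace the here-string with the output of `jps`):

```shell
# Sample jps output captured from the master node above.
jps_output='4048 QuorumPeerMain
15482 HMaster
30357 NameNode
15632 HRegionServer'
# Daemons that must be present on the master in this setup.
for proc in HMaster HRegionServer QuorumPeerMain NameNode; do
  if echo "$jps_output" | grep -q "$proc"; then
    echo "$proc: running"
  else
    echo "$proc: MISSING"
  fi
done
```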
3) Enter the HBase shell
bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.5-hadoop2, rUnknown, Mon Aug 4 23:58:06 PDT 2014
4) Check the cluster status
hbase(main):001:0> status
2 servers, 0 dead, 1.5000 average load
5) Create a table as a test
hbase(main):002:0> create 'test','id'
0 row(s) in 1.3530 seconds
=> Hbase::Table - test
hbase(main):003:0> list
TABLE
member
test
2 row(s) in 0.0430 seconds
=> ["member", "test"]
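The same table test can be scripted instead of typed interactively: `bin/hbase shell` also accepts a command file as an argument. A sketch that writes the file and prints the invocation (dry run; on the cluster, run the printed command without the `echo`):

```shell
# Write the DDL commands used in the interactive session above.
cat > /tmp/test-table.rb <<'EOF'
create 'test', 'id'
list
EOF
# Dry run: only print the command that would execute the script.
echo bin/hbase shell /tmp/test-table.rb
```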
6) View the cluster status in the web UI. For HBase 0.98 the master UI is served at http://MasterServer:60010 (each regionserver serves its own on port 60030).
If everything above checks out, congratulations: installation and configuration are complete.
Please credit the source when reposting: http://blog.csdn.net/hwwn2009/article/details/40015907
7. Problems encountered
1) Q: jps on the slave node shows no HRegionServer, and `status` reports only one server ("1 servers"): HBase never started on the slave.
A: Check the log:
2014-10-12 14:29:38,147 WARN [regionserver60020] zookeeper.RecoverableZooKeeper: Node /hbase/rs/SlaveServer,60020,1413095376898 already deleted, retry=false
2014-10-12 14:29:38,147 WARN [regionserver60020] regionserver.HRegionServer: Failed deleting my ephemeral node
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/rs/SlaveServer,60020,1413095376898
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:156)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1273)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1262)
at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:1298)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1012)
at java.lang.Thread.run(Thread.java:662)
2014-10-12 14:29:38,158 INFO [regionserver60020] zookeeper.ZooKeeper: Session: 0x249020a2cfd0014 closed
2014-10-12 14:29:38,158 INFO [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2014-10-12 14:29:38,158 INFO [regionserver60020] regionserver.HRegionServer: stopping server null; zookeeper connection closed.
2014-10-12 14:29:38,158 INFO [regionserver60020] regionserver.HRegionServer: regionserver60020 exiting
2014-10-12 14:29:38,158 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:66)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2422)
2014-10-12 14:29:38,160 INFO [Thread-9] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@8d5aad
2014-10-12 14:29:38,160 INFO [Thread-9] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2014-10-12 14:29:38,160 INFO [Thread-9] regionserver.ShutdownHook: Shutdown hook finished.
The slave failed to start because the clocks in the cluster were not synchronized.
Run ntpdate on every node:
ntpdate asia.pool.ntp.org
For a permanent fix, see: http://jingyan.baidu.com/article/48206aeae2e919216ad6b334.html
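A one-off ntpdate drifts again over time; one approach is a root cron entry that re-syncs periodically. A sketch (the schedule and NTP server are examples; the fragment is written to a temp file here and would be installed with `crontab` as root on each node):

```shell
# Hourly clock resync; install with `crontab /tmp/ntp-cron` as root.
cat > /tmp/ntp-cron <<'EOF'
0 * * * * /usr/sbin/ntpdate asia.pool.ntp.org >/dev/null 2>&1
EOF
cat /tmp/ntp-cron
```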
2) Q: Entering the hbase shell prints:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
A: A jar conflict between the SLF4J bindings bundled with HBase and with Hadoop; delete the copy in HBase's lib directory:
rm lib/slf4j-log4j12-1.6.4.jar
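The conflicting binding can be located before deleting anything. A sketch that lists every slf4j-log4j12 jar under the two installs, demonstrated here on stand-in directories since the real paths vary:

```shell
# Stand-in directory tree mimicking the layout in the warning above.
mkdir -p /tmp/demo/hbase/lib /tmp/demo/hadoop/share/hadoop/common/lib
touch /tmp/demo/hbase/lib/slf4j-log4j12-1.6.4.jar
touch /tmp/demo/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar
# List every SLF4J binding on the combined classpath; if more than one
# appears, remove the copy shipped with HBase, as above.
find /tmp/demo -name 'slf4j-log4j12-*.jar'
```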