
5 hbase-shell +

This post covers:

- Installing HBase on a single-node cluster

- Configuring HBase on a single-node cluster

- Solving the problem of port 60010 being unreachable after setting up HBase

-------------   Note: HBase 1.x and later no longer use port 60010; the Master web UI moved to port 16010.   -------------

 

 

HBase internals differ depending on the HDFS version, so choose an HBase build that matches your Hadoop release (hence the -hadoop2 suffix in the tarball below).


 

http://hbase.apache.org/


 

 

 

https://issues.apache.org/jira/browse/HBASE/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel


 

 

Setting up an HBase cluster

1. Upload the hbase-0.96.2-hadoop2-bin.tar.gz archive


 

 

sftp> cd /home/hadoop/app

sftp> put c:/hbase-0.96.2-hadoop2-bin.tar.gz

Uploading hbase-0.96.2-hadoop2-bin.tar.gz to /home/hadoop/app/hbase-0.96.2-hadoop2-bin.tar.gz

  100% 77507KB  19376KB/s 00:00:04    

c:/hbase-0.96.2-hadoop2-bin.tar.gz: 79367504 bytes transferred in 4 seconds (19376 KB/s)

sftp>

 

Alternatively, the archive can be uploaded with a graphical SFTP/FTP client; I won't go into detail here. A plain scp works just as well, for example:
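# a hypothetical scp equivalent, assuming an scp-capable client (e.g. pscp) on the Windows side
scp c:/hbase-0.96.2-hadoop2-bin.tar.gz hadoop@weekend110:/home/hadoop/app/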

 

 

2. Extract the hbase-0.96.2-hadoop2-bin.tar.gz archive


[hadoop@weekend110 app]$ ls

hadoop-2.4.1  hbase-0.96.2-hadoop2-bin.tar.gz  hive-0.12.0  jdk1.7.0_65  zookeeper-3.4.6

[hadoop@weekend110 app]$ ll

total 77524

drwxr-xr-x. 11 hadoop hadoop     4096 Jul 18 20:11 hadoop-2.4.1

-rw-r--r--.  1 root   root   79367504 May 20 13:51 hbase-0.96.2-hadoop2-bin.tar.gz

drwxrwxr-x. 10 hadoop hadoop     4096 Oct 10 21:30 hive-0.12.0

drwxr-xr-x.  8 hadoop hadoop     4096 Jun 17  2014 jdk1.7.0_65

drwxr-xr-x. 10 hadoop hadoop     4096 Jul 30 10:28 zookeeper-3.4.6

[hadoop@weekend110 app]$ tar -zxvf hbase-0.96.2-hadoop2-bin.tar.gz

 

 

3. Delete the archive hbase-0.96.2-hadoop2-bin.tar.gz; for example:

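# a minimal sketch of this step, run from /home/hadoop/app
rm hbase-0.96.2-hadoop2-bin.tar.gz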

 

 

4. Grant ownership of the HBase files to the hadoop user. This step is not needed here, since the hadoop user extracted the files; for reference, the command would be:
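# hypothetical, run as root -- only needed if the files were owned by another user
chown -R hadoop:hadoop /home/hadoop/app/hbase-0.96.2-hadoop2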

5. Configure HBase


[hadoop@weekend110 app]$ ls

hadoop-2.4.1  hbase-0.96.2-hadoop2  hive-0.12.0  jdk1.7.0_65  zookeeper-3.4.6

[hadoop@weekend110 app]$ cd hbase-0.96.2-hadoop2/

[hadoop@weekend110 hbase-0.96.2-hadoop2]$ ll

total 436

drwxr-xr-x.  4 hadoop hadoop   4096 Mar 25  2014 bin

-rw-r--r--.  1 hadoop hadoop 403242 Mar 25  2014 CHANGES.txt

drwxr-xr-x.  2 hadoop hadoop   4096 Mar 25  2014 conf

drwxr-xr-x. 27 hadoop hadoop   4096 Mar 25  2014 docs

drwxr-xr-x.  7 hadoop hadoop   4096 Mar 25  2014 hbase-webapps

drwxrwxr-x.  3 hadoop hadoop   4096 Oct 11 17:49 lib

-rw-r--r--.  1 hadoop hadoop  11358 Mar 25  2014 LICENSE.txt

-rw-r--r--.  1 hadoop hadoop    897 Mar 25  2014 NOTICE.txt

-rw-r--r--.  1 hadoop hadoop   1377 Mar 25  2014 README.txt

[hadoop@weekend110 hbase-0.96.2-hadoop2]$ cd conf/

[hadoop@weekend110 conf]$ ls

hadoop-metrics2-hbase.properties  hbase-env.cmd  hbase-env.sh  hbase-policy.xml  hbase-site.xml  log4j.properties  regionservers

[hadoop@weekend110 conf]$

 

 

As an aside, here is the outline for installing HBase on a multi-node cluster; I won't go into much detail.

 


1. Upload the HBase installation package

 

2. Extract it

 

3. Configure the HBase cluster by editing three files (this assumes the ZooKeeper cluster is already installed)

         Note: copy Hadoop's hdfs-site.xml and core-site.xml into hbase/conf

        

         3.1 Edit hbase-env.sh

         export JAVA_HOME=/usr/java/jdk1.7.0_55

         // tell HBase to use the external ZooKeeper

         export HBASE_MANAGES_ZK=false

        

         vim hbase-site.xml

         <configuration>

                   <!-- the path in HDFS where HBase stores its data -->

        <property>

                <name>hbase.rootdir</name>

                <value>hdfs://ns1/hbase</value>

        </property>

                   <!-- run HBase in fully-distributed mode -->

        <property>

                <name>hbase.cluster.distributed</name>

                <value>true</value>

        </property>

                   <!-- ZooKeeper quorum; separate multiple addresses with commas -->

        <property>

                <name>hbase.zookeeper.quorum</name>

                <value>weekend04:2181,weekend05:2181,weekend06:2181</value>

        </property>

         </configuration>

        

         vim regionservers

         weekend03

         weekend04

         weekend05

         weekend06

        

         3.2 Copy HBase to the other nodes

                   scp -r /weekend/hbase-0.96.2-hadoop2/ weekend02:/weekend/

                   scp -r /weekend/hbase-0.96.2-hadoop2/ weekend03:/weekend/

                   scp -r /weekend/hbase-0.96.2-hadoop2/ weekend04:/weekend/

                   scp -r /weekend/hbase-0.96.2-hadoop2/ weekend05:/weekend/

                   scp -r /weekend/hbase-0.96.2-hadoop2/ weekend06:/weekend/

4. Copy the configured HBase to every node and synchronize their clocks (see the sketch after this list).

 

5. Start everything

         Start ZooKeeper (on each ZooKeeper node):

                   ./zkServer.sh start

         Start HDFS:

                   start-dfs.sh

         Start HBase; on the master node run:

                   start-hbase.sh

6. Open the HBase web UI in a browser

         192.168.1.201:60010

7. For reliability, start one or more backup HMasters

         hbase-daemon.sh start master
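A minimal sketch of the clock synchronization from step 4, assuming ntpdate is installed and is run as root on every node (the NTP server below is just an example):

ntpdate pool.ntp.org    # one-shot clock sync against a public NTP pool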

         

 

 

In my case, since this is just for playing around, I'm installing HBase on a single-node cluster.

hbase-env.sh


[hadoop@weekend110 conf]$ ls

hadoop-metrics2-hbase.properties  hbase-env.cmd  hbase-env.sh  hbase-policy.xml  hbase-site.xml  log4j.properties  regionservers

[hadoop@weekend110 conf]$ vim hbase-env.sh

 


The local JDK lives at /home/hadoop/app/jdk1.7.0_65.

 

For a single node, hbase-env.sh needs two changes:


export JAVA_HOME=/home/hadoop/app/jdk1.7.0_65 

 


export HBASE_MANAGES_ZK=false

 


hbase-site.xml


[hadoop@weekend110 conf]$ ls

hadoop-metrics2-hbase.properties  hbase-env.cmd  hbase-env.sh  hbase-policy.xml  hbase-site.xml  log4j.properties  regionservers

[hadoop@weekend110 conf]$ vim hbase-site.xml

 


<configuration>
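        <!-- directory on the local filesystem where ZooKeeper stores its data -->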

        <property>

                <name>hbase.zookeeper.property.dataDir</name>

                <value>/home/hadoop/data/zookeeper/zkdata</value>

        </property>
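        <!-- local temporary directory for HBase -->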

        <property>

                <name>hbase.tmp.dir</name>

                <value>/home/hadoop/data/tmp/hbase</value>

        </property>
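        <!-- port that clients use to connect to ZooKeeper -->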

        <property>

                <name>hbase.zookeeper.property.clientPort</name>

                <value>2181</value>

        </property>
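        <!-- the directory in HDFS where HBase stores its data -->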

        <property>

                <name>hbase.rootdir</name>

                <value>hdfs://weekend110:9000/hbase</value>

        </property>
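        <!-- false = non-distributed (standalone) mode -->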

        <property>

                <name>hbase.cluster.distributed</name>

                <value>false</value>

        </property>
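        <!-- HDFS replication factor; 1 is enough on a single node -->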

        <property>

                <name>dfs.replication</name>

                <value>1</value>

        </property>

</configuration>

 

 

Create the directories referenced in the config:

/home/hadoop/data/zookeeper/zkdata

/home/hadoop/data/tmp/hbase


[hadoop@weekend110 conf]$ pwd

/home/hadoop/app/hbase-0.96.2-hadoop2/conf

[hadoop@weekend110 conf]$ mkdir -p /home/hadoop/data/zookeeper/zkdata

[hadoop@weekend110 conf]$ mkdir -p /home/hadoop/data/tmp/hbase

[hadoop@weekend110 conf]$

 

 

 

regionservers

(one RegionServer host per line; on a single node, just the host itself)

weekend110

 

As in the multi-node notes, copy Hadoop's core-site.xml and hdfs-site.xml into HBase's conf directory so HBase can pick up the HDFS settings:

[hadoop@weekend110 conf]$ ls

hadoop-metrics2-hbase.properties  hbase-env.cmd  hbase-env.sh  hbase-policy.xml  hbase-site.xml  log4j.properties  regionservers

[hadoop@weekend110 conf]$ cp /home/hadoop/app/hadoop-2.4.1/etc/hadoop/{core-site.xml,hdfs-site.xml} ./

[hadoop@weekend110 conf]$ ls

core-site.xml                     hbase-env.cmd  hbase-policy.xml  hdfs-site.xml     regionservers

hadoop-metrics2-hbase.properties  hbase-env.sh   hbase-site.xml    log4j.properties

[hadoop@weekend110 conf]$

 

 

vi /etc/profile


[hadoop@weekend110 conf]$ su root

Password:

[root@weekend110 conf]# vim /etc/profile

 

 


export JAVA_HOME=/home/hadoop/app/jdk1.7.0_65

export HADOOP_HOME=/home/hadoop/app/hadoop-2.4.1

export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper-3.4.6

export HIVE_HOME=/home/hadoop/app/hive-0.12.0

export HBASE_HOME=/home/hadoop/app/hbase-0.96.2-hadoop2

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin

 


[root@weekend110 conf]# source /etc/profile

[root@weekend110 conf]# su hadoop
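As a quick sanity check (a sketch; run it in a fresh hadoop login shell so the new variables are picked up):

echo $HBASE_HOME        # should print /home/hadoop/app/hbase-0.96.2-hadoop2
which hbase             # should resolve to $HBASE_HOME/bin/hbase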

 

 


 

Starting HBase on the single-node cluster

Since pseudo-distributed mode runs on top of HDFS, HDFS must be started before HBase:

 


[hadoop@weekend110 hadoop-2.4.1]$ jps

5802 Jps

[hadoop@weekend110 hadoop-2.4.1]$ sbin/start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh

Starting namenodes on [weekend110]

weekend110: starting namenode, logging to /home/hadoop/app/hadoop-2.4.1/logs/hadoop-hadoop-namenode-weekend110.out

weekend110: starting datanode, logging to /home/hadoop/app/hadoop-2.4.1/logs/hadoop-hadoop-datanode-weekend110.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.4.1/logs/hadoop-hadoop-secondarynamenode-weekend110.out

starting yarn daemons

starting resourcemanager, logging to /home/hadoop/app/hadoop-2.4.1/logs/yarn-hadoop-resourcemanager-weekend110.out

weekend110: starting nodemanager, logging to /home/hadoop/app/hadoop-2.4.1/logs/yarn-hadoop-nodemanager-weekend110.out

[hadoop@weekend110 hadoop-2.4.1]$ jps

6022 DataNode

6149 SecondaryNameNode

5928 NameNode

6287 ResourceManager

6426 Jps

6387 NodeManager

[hadoop@weekend110 hadoop-2.4.1]$

 

 


[hadoop@weekend110 hbase-0.96.2-hadoop2]$ pwd

/home/hadoop/app/hbase-0.96.2-hadoop2

[hadoop@weekend110 hbase-0.96.2-hadoop2]$ ls

bin  CHANGES.txt  conf  docs  hbase-webapps  lib  LICENSE.txt  NOTICE.txt  README.txt

[hadoop@weekend110 hbase-0.96.2-hadoop2]$ cd bin

[hadoop@weekend110 bin]$ ls

get-active-master.rb  hbase-common.sh   hbase-jruby             region_mover.rb     start-hbase.cmd  thread-pool.rb

graceful_stop.sh      hbase-config.cmd  hirb.rb                 regionservers.sh    start-hbase.sh   zookeepers.sh

hbase                 hbase-config.sh   local-master-backup.sh  region_status.rb    stop-hbase.cmd

hbase-cleanup.sh      hbase-daemon.sh   local-regionservers.sh  replication         stop-hbase.sh

hbase.cmd             hbase-daemons.sh  master-backup.sh        rolling-restart.sh  test

[hadoop@weekend110 bin]$ ./start-hbase.sh

starting master, logging to /home/hadoop/app/hbase-0.96.2-hadoop2/logs/hbase-hadoop-master-weekend110.out

[hadoop@weekend110 bin]$ jps

6022 DataNode

6149 SecondaryNameNode

5928 NameNode

6707 Jps

6287 ResourceManager

6530 HMaster

6387 NodeManager

[hadoop@weekend110 bin]$

 

 

 

http://weekend110:60010/
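If the page does not load, one quick check (assuming netstat is available) is whether anything is listening on the UI port at all; remember that from HBase 1.x on, the Master UI listens on 16010 instead:

netstat -tlnp | grep -E '60010|16010'    # shows the process bound to the UI port, if any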


Reference blog: http://blog.csdn.net/u013575812/article/details/46919011

Following that post, first make sure HDFS is not stuck in safe mode, then start HBase again:

 

 

[hadoop@weekend110 bin]$ pwd

/home/hadoop/app/hbase-0.96.2-hadoop2/bin

[hadoop@weekend110 bin]$ hadoop dfsadmin -safemode leave

DEPRECATED: Use of this script to execute hdfs command is deprecated.

Instead use the hdfs command for it.

 

Safe mode is OFF

[hadoop@weekend110 bin]$ jps

6022 DataNode

7135 Jps

6149 SecondaryNameNode

5928 NameNode

6287 ResourceManager

6387 NodeManager

[hadoop@weekend110 bin]$ ./start-hbase.sh

starting master, logging to /home/hadoop/app/hbase-0.96.2-hadoop2/logs/hbase-hadoop-master-weekend110.out

[hadoop@weekend110 bin]$ jps

6022 DataNode

7245 HMaster

6149 SecondaryNameNode

5928 NameNode

6287 ResourceManager

6387 NodeManager

7386 Jps

[hadoop@weekend110 bin]$

 


Still the same as before; continuing... until it's solved!
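Even while the web UI is unreachable, the HBase shell can confirm that the master is actually serving requests (a sketch; status and list are built-in shell commands):

hbase shell
# at the hbase(main):001:0> prompt:
#   status    -- reports live/dead region servers
#   list      -- lists user tables (empty on a fresh install)
#   exit      -- leaves the shell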

 
