[hadoop3@master ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop3/app/hbase/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop3/app/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
Version 2.0.1, r987f7b6d37c2fcacc942cc66e5c5122aba8fdfbe, Wed Jun 13 12:03:55 PDT 2018
Took 0.0017 seconds
hbase(main):001:0> list
TABLE
0 row(s)
Took 0.9464 seconds
=> []
hbase(main):002:0> create 'blog','article','author'
Created table blog
Took 2.6421 seconds
=> Hbase::Table - blog
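A note on the create above: everything after the table name declares a column family ('article' and 'author' here); column qualifiers such as article:title are not declared up front but created on first write. Each family can also take its own options by passing a dictionary instead of a bare name, e.g. to retain three cell versions in the author family (a sketch, not run in this session):
hbase> create 'blog', {NAME => 'article'}, {NAME => 'author', VERSIONS => 3}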
hbase(main):003:0> list
TABLE
blog
1 row(s)
Took 0.0444 seconds
=> ["blog"]
hbase(main):004:0> pub 'blog','1','article:title','Head First HBase'
NoMethodError: undefined method `pub' for main:Object
Did you mean?  public
               public_send
               put
hbase(main):005:0> put 'blog','1','article:title','Head First HBase'
Took 0.6292 seconds
hbase(main):006:0> put 'blog','1','article:content','HBase is the Hadoop database.Use it when you need random,realtime read/write access to your big Data.'
Took 0.0151 seconds
hbase(main):007:0> put 'blog' '1','article:tags','Hadoop,HBase,NoSQL'
ERROR: wrong number of arguments (3 for 4)
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates. To put a cell value into table 'ns1:t1' or 't1'
at row 'r1' under column 'c1' marked with the time 'ts1', do:
hbase> put 'ns1:t1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value', ts1
hbase> put 't1', 'r1', 'c1', 'value', {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {VISIBILITY=>'PRIVATE|SECRET'}
The same commands also can be run on a table reference. Suppose you had a reference
t to table 't1', the corresponding command would be:
hbase> t.put 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
Took 0.0309 seconds
hbase(main):008:0> put 'blog', '1','article:tags','Hadoop,HBase,NoSQL'
Took 0.0162 seconds
hbase(main):009:0> put 'blog','1','author:name','hujinjun'
Took 0.0102 seconds
hbase(main):010:0> put 'blog','1','author:nickname','hsg77'
Took 0.0172 seconds
hbase(main):011:0> get 'blog','1'
COLUMN CELL
article:content timestamp=1531741615500, value=HBase is the Hadoop database.Use it when you need random,realtime read/write access to your big Data.
article:tags timestamp=1531741665658, value=Hadoop,HBase,NoSQL
article:title timestamp=1531741534328, value=Head First HBase
author:name timestamp=1531741686996, value=hujinjun
author:nickname timestamp=1531741719123, value=hsg77
1 row(s)
Took 0.1446 seconds
hbase(main):012:0> get 'blog','1','author'
COLUMN CELL
author:name timestamp=1531741686996, value=hujinjun
author:nickname timestamp=1531741719123, value=hsg77
1 row(s)
Took 0.0178 seconds
hbase(main):013:0> scan 'blog'
ROW COLUMN+CELL
1 column=article:content, timestamp=1531741615500, value=HBase is the Hadoop database.Use it when you need random,realtime read/write access
to your big Data.
1 column=article:tags, timestamp=1531741665658, value=Hadoop,HBase,NoSQL
1 column=article:title, timestamp=1531741534328, value=Head First HBase
1 column=author:name, timestamp=1531741686996, value=hujinjun
1 column=author:nickname, timestamp=1531741719123, value=hsg77
1 row(s)
Took 0.0395 seconds
hbase(main):014:0> get 'blog','1','author:nickname'
COLUMN CELL
author:nickname timestamp=1531741719123, value=hsg77
1 row(s)
Took 0.0244 seconds
hbase(main):015:0> put 'blog','1','author:nickname','yedu'
Took 0.0194 seconds
hbase(main):016:0> get 'blog','1','author:nickname'
COLUMN CELL
author:nickname timestamp=1531741907981, value=yedu
1 row(s)
Took 0.0105 seconds
hbase(main):017:0> get 'blog','1',{COLUMN=>'author:nickname',VERSIONS=>2}
COLUMN CELL
author:nickname timestamp=1531741907981, value=yedu
1 row(s)
Took 0.0199 seconds
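Only one version comes back even though VERSIONS => 2 was requested: both families were created with the default of VERSIONS = 1, so the get is capped at the single retained version per column. To actually read back both 'hsg77' and 'yedu', the family itself would have to allow multiple versions, e.g. (a sketch, not run in this session):
hbase> alter 'blog', {NAME => 'author', VERSIONS => 2}
hbase> get 'blog', '1', {COLUMN => 'author:nickname', VERSIONS => 2}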
hbase(main):018:0> delete 'blog','1','author:nickname'
Took 0.0667 seconds
hbase(main):019:0> scan 'blog'
ROW COLUMN+CELL
1 column=article:content, timestamp=1531741615500, value=HBase is the Hadoop database.Use it when you need random,realtime read/write access
to your big Data.
1 column=article:tags, timestamp=1531741665658, value=Hadoop,HBase,NoSQL
1 column=article:title, timestamp=1531741534328, value=Head First HBase
1 column=author:name, timestamp=1531741686996, value=hujinjun
1 column=author:nickname, timestamp=1531741719123, value=hsg77
1 row(s)
Took 0.0140 seconds
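Note what happened here: the shell's delete, called without a timestamp, places a delete marker on only the most recent version of the cell, so deleting 'yedu' let the older 'hsg77' value (its original timestamp 1531741719123) resurface in the scan. The deleteall in the next step wipes the whole row; to remove every version of just one column instead, deleteall can also take a column coordinate, e.g. (a sketch, not run in this session):
hbase> deleteall 'blog', '1', 'author:nickname'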
hbase(main):020:0> deleteall 'blog','1'
Took 0.0506 seconds
hbase(main):021:0> scan blog
NameError: undefined local variable or method `blog' for main:Object
hbase(main):022:0> scan 'blog'
ROW COLUMN+CELL
0 row(s)
Took 0.0079 seconds
hbase(main):023:0> disable 'blog'
Took 1.3707 seconds
hbase(main):024:0> enable 'blog'
Took 1.3326 seconds
hbase(main):025:0> scan 'blog'
ROW COLUMN+CELL
0 row(s)
Took 0.0631 seconds
hbase(main):026:0> drop 'blog'
ERROR: Table blog is enabled. Disable it first.
Drop the named table. Table must first be disabled:
hbase> drop 't1'
hbase> drop 'ns1:t1'
Took 0.0239 seconds
hbase(main):027:0> disable 'blog'
Took 0.4536 seconds
hbase(main):028:0> scan 'blog'
ROW COLUMN+CELL
ERROR: Failed to create local dir /hbase/tmp/local/jars, DynamicClassLoader failed to init
Scan a table; pass table name and optionally a dictionary of scanner
specifications. Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, ROWPREFIXFILTER, TIMESTAMP,
MAXLENGTH or COLUMNS, CACHE or RAW, VERSIONS, ALL_METRICS or METRICS
If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in 'col_family'.
The filter can be specified in two ways:
1. Using a filterString - more information on this is available in the
Filter Language document attached to the HBASE-4176 JIRA
2. Using the entire package name of the filter.
If you wish to see metrics regarding the execution of the scan, the
ALL_METRICS boolean should be set to true. Alternatively, if you would
prefer to see only a subset of the metrics, the METRICS array can be
defined to include the names of only the metrics you care about.
Some examples:
hbase> scan 'hbase:meta'
hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}
hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804000, 1303668904000]}
hbase> scan 't1', {REVERSED => true}
hbase> scan 't1', {ALL_METRICS => true}
hbase> scan 't1', {METRICS => ['RPC_RETRIES', 'ROWS_FILTERED']}
hbase> scan 't1', {ROWPREFIXFILTER => 'row2', FILTER => "
(QualifierFilter (>=, 'binary:xyz')) AND (TimestampsFilter ( 123, 456))"}
hbase> scan 't1', {FILTER =>
org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
For setting the OperationAttributes
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], ATTRIBUTES => {'mykey' => 'myvalue'}}
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], AUTHORIZATIONS => ['PRIVATE','SECRET']}
For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false). By
default it is enabled. Examples:
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}
Also for experts, there is an advanced option -- RAW -- which instructs the
scanner to return all cells (including delete markers and uncollected deleted
cells). This option cannot be combined with requesting specific COLUMNS.
Disabled by default. Example:
hbase> scan 't1', {RAW => true, VERSIONS => 10}
Besides the default 'toStringBinary' format, 'scan' supports custom formatting
by column. A user can define a FORMATTER by adding it to the column name in
the scan specification. The FORMATTER can be stipulated:
1. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g., toInt, toString)
2. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.
Example formatting cf:qualifier1 and cf:qualifier2 both as Integers:
hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }
Note that you can specify a FORMATTER by column only (cf:qualifier). You can set a
formatter for all columns (including, all key parts) using the "FORMATTER" and "FORMATTER_CLASS" options. The default "FORMATTER_CLASS" is
"org.apache.hadoop.hbase.util.Bytes".
hbase> scan 't1', {FORMATTER => 'toString'}
hbase> scan 't1', {FORMATTER_CLASS => 'org.apache.hadoop.hbase.util.Bytes', FORMATTER => 'toString'}
Scan can also be used directly from a table, by first getting a reference to a
table, like such:
hbase> t = get_table 't'
hbase> t.scan
Note in the above situation, you can still provide all the filtering, columns,
options, etc as described above.
Took 0.0747 seconds
hbase(main):029:0> list
TABLE
blog
1 row(s)
Took 0.0145 seconds
=> ["blog"]
hbase(main):030:0> drop 'blog'
Took 0.4782 seconds
hbase(main):031:0> list
TABLE
0 row(s)
Took 0.0058 seconds
=> []
hbase(main):032:0>