Hi all — in my HBase-on-HDFS setup I'm getting an HDFS "lease expired" error. How should I handle it?

Asked at 17:04
Exporting HBase table data to HDFS
I need to get the table data from HBase into HDFS.
The command used: hbase org.apache.hadoop.hbase.mapreduce.Driver import user hdfs://master:9000/user
It just keeps retrying the connection, and after the retries it stalls — this problem is driving me crazy.
Can anyone help?
The error message is:
00:43:32,293 INFO [main] ipc.Client: Retrying connect to server: localhost/127.0.0.1:18032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
00:43:33,295 INFO [main] ipc.Client: Retrying connect to server: localhost/127.0.0.1:18032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
00:43:34,297 INFO [main] ipc.Client: Retrying connect to server: localhost/127.0.0.1:18032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
00:43:35,299 INFO [main] ipc.Client: Retrying connect to server: localhost/127.0.0.1:18032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
Did you ever solve this?
HBase Primer Tutorial
Reading HBase Data and Writing It to HDFS
Source: Linux Community
Author: andie_guo
This post shows how to read data from HBase and write it to the HDFS distributed file system. Reading the data is straightforward: we reuse the wordcount table produced as HBase output in the previous post as this post's input source, write a Mapper that reads the rows of the wordcount table and fills them into <key,value> pairs, and have the Reduce function simply write out the results it receives.
Hardware environment: 4 servers running release 6.5 (one Master node, three Slave nodes). Software environment: Java 1.7.0_45, Eclipse Juno Service Release 2, Hadoop-1.2.1, hbase-0.94.20.
1. Input and Output
1) Input source:
The previous post read MapReduce output into the HBase table wordcount; in this post, that wordcount table serves as the input source.
2) Output target:
A file in the HDFS distributed file system.
2. Mapper Implementation
The WordCountHbaseReaderMapper class extends the abstract class TableMapper<Text,Text>, which specializes the Map side of a MapReduce job for reading from an HBase table. In its map(ImmutableBytesWritable key, Result value, Context context) method, the first parameter key is the row key of the HBase table, and the second parameter value is the set of cells stored under that row key. The core of the map implementation is to iterate over the cells in value, concatenate them into a single record, and emit it with context.write(key, value). For the full source, see WordCountHbaseReader\src\com\zonesion\hbase\WordCountHbaseReader.java
public static class WordCountHbaseReaderMapper extends TableMapper<Text, Text> {

    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
            throws IOException, InterruptedException {
        StringBuffer sb = new StringBuffer("");
        // Iterate over every cell of the "content" family for this row key
        for (Entry<byte[], byte[]> entry : value.getFamilyMap("content".getBytes()).entrySet()) {
            // Convert the value byte array to a String
            String str = new String(entry.getValue());
            if (str != null) {
                sb.append(new String(entry.getKey()));
                sb.append(":");
                sb.append(str);
            }
        }
        context.write(new Text(key.get()), new Text(new String(sb)));
    }
}
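The record-assembly step this mapper performs can be exercised without a cluster. Below is a minimal plain-Java sketch of the same loop — the class and method names are hypothetical, and a Collections.singletonMap stands in for the family map that HBase's Result.getFamilyMap would return:

```java
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.Map;

public class RowAssemblySketch {
    // Mirrors the mapper's inner loop: for each cell in the row's family map,
    // append "qualifier:value" to the record string.
    static String assemble(Map<byte[], byte[]> familyMap) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<byte[], byte[]> e : familyMap.entrySet()) {
            sb.append(new String(e.getKey(), StandardCharsets.UTF_8));
            sb.append(":");
            sb.append(new String(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Simulated family map for one row of the wordcount table:
        // a single column "count" holding the value "2".
        Map<byte[], byte[]> fam =
                Collections.singletonMap("count".getBytes(StandardCharsets.UTF_8),
                                         "2".getBytes(StandardCharsets.UTF_8));
        System.out.println(assemble(fam));  // count:2
    }
}
```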
3. Reducer Implementation
WordCountHbaseReaderReduce simply writes out the <key,value> pairs emitted by the Map phase without any further processing. For the full source, see WordCountHbaseReader\src\com\zonesion\hbase\WordCountHbaseReader.java
public static class WordCountHbaseReaderReduce extends Reducer<Text, Text, Text, Text> {
    private Text result = new Text();

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // Pass every value through unchanged
        for (Text val : values) {
            result.set(val);
            context.write(key, result);
        }
    }
}
4. Driver Implementation
Unlike the WordCount driver, the job configuration does not call job.setMapperClass(); instead the Mapper is wired up with: TableMapReduceUtil.initTableMapperJob(tablename, scan, WordCountHbaseReaderMapper.class, Text.class, Text.class, job); This tells the job that the Map phase reads from the HBase table tablename, using the Scan object scan to perform a full table scan that feeds the Map phase; that WordCountHbaseReaderMapper.class implements the Map phase; that the Map output key/value types are Text.class and Text.class; and the last parameter is the job object. Note that scan here is the simplest possible scan object for a full-table read; Scan accepts many configuration parameters, which are omitted to keep the example simple and left for the reader to experiment with. For the full source, see WordCountHbaseReader\src\com\zonesion\hbase\WordCountHbaseReader.java
public static void main(String[] args) throws Exception {
    String tablename = "wordcount";
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "Master");
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 1) {
        System.err.println("Usage: WordCountHbaseReader <out>");
        System.exit(2);
    }
    Job job = new Job(conf, "WordCountHbaseReader");
    job.setJarByClass(WordCountHbaseReader.class);
    // Set the job's output path
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[0]));
    job.setReducerClass(WordCountHbaseReaderReduce.class);
    Scan scan = new Scan();
    TableMapReduceUtil.initTableMapperJob(tablename, scan, WordCountHbaseReaderMapper.class, Text.class, Text.class, job);
    // Run the job with job.waitForCompletion(true) and exit with its status
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
5. Deployment and Run
1) Start the Hadoop cluster and the HBase services
[hadoop@K-Master ~]$ start-dfs.sh      # start the Hadoop HDFS file system
[hadoop@K-Master ~]$ start-mapred.sh   # start the Hadoop MapReduce service
[hadoop@K-Master ~]$ start-hbase.sh    # start HBase
[hadoop@K-Master ~]$ jps
22003 HMaster
10611 SecondaryNameNode
21938 HQuorumPeer
10709 JobTracker
22154 HRegionServer
20277 Main
10432 NameNode
2) Deploy the source
# set up the workspace
[hadoop@K-Master ~]$ mkdir -p /usr/hadoop/workspace/Hbase
Copy the WordCountHbaseReader folder to /usr/hadoop/workspace/Hbase/.
3) Edit the configuration files
a) Look up the hbase.zookeeper.quorum property in HBase's core configuration file hbase-site.xml (see the deployment section of the previous post for how).
b) Edit the WordCountHbaseReader/src/config.properties file: set its hbase.zookeeper.quorum property to the value found in the previous step, so that config.properties and hbase-site.xml agree on hbase.zookeeper.quorum.
# switch to the working directory
[hadoop@K-Master ~]$ cd /usr/hadoop/workspace/Hbase/WordCountHbaseReader
# edit the property value
[hadoop@K-Master WordCountHbaseReader]$ vim src/config.properties
hbase.zookeeper.quorum=K-Master
# copy src/config.properties into bin/
[hadoop@K-Master WordCountHbaseReader]$ cp src/config.properties bin/
4) Compile
# switch to the working directory
[hadoop@K-Master ~]$ cd /usr/hadoop/workspace/Hbase/WordCountHbaseReader
[hadoop@K-Master WordCountHbaseReader]$ javac -classpath /usr/hadoop/hadoop-core-1.2.1.jar:/usr/hadoop/lib/commons-cli-1.2.jar:lib/zookeeper-3.4.5.jar:lib/hbase-0.94.20.jar -d bin/ src/com/zonesion/hbase/WordCountHbaseReader.java
# list the compiled classes
[hadoop@K-Master WordCountHbaseReader]$ ls bin/com/zonesion/hbase/ -la
drwxrwxr-x 2 hadoop hadoop 4096 Dec 29 10:36 .
drwxrwxr-x 3 hadoop hadoop 4096 Dec 29 10:36 ..
-rw-rw-r-- 1 hadoop hadoop 2166 Dec 29 14:31 WordCountHbaseReader.class
-rw-rw-r-- 1 hadoop hadoop 2460 Dec 29 14:31 WordCountHbaseReader$WordCountHbaseReaderMapper.class
-rw-rw-r-- 1 hadoop hadoop 1738 Dec 29 14:31 WordCountHbaseReader$WordCountHbaseReaderReduce.class
5) Package the jar
# copy the lib folder into bin
[hadoop@K-Master WordCountHbaseReader]$ cp -r lib/ bin/
# build the jar
[hadoop@K-Master WordCountHbaseReader]$ jar -cvf WordCountHbaseReader.jar -C bin/ .
added manifest
adding: lib/(in = 0) (out= 0)(stored 0%)
adding: lib/zookeeper-3.4.5.jar(in = 779974) (out= 721150)(deflated 7%)
adding: lib/guava-11.0.2.jar(in = 1648200) (out= 1465342)(deflated 11%)
adding: lib/protobuf-java-2.4.0a.jar(in = 449818) (out= 420864)(deflated 6%)
adding: lib/hbase-0.94.20.jar(in = 5475284) (out= 5038635)(deflated 7%)
adding: com/(in = 0) (out= 0)(stored 0%)
adding: com/zonesion/(in = 0) (out= 0)(stored 0%)
adding: com/zonesion/hbase/(in = 0) (out= 0)(stored 0%)
adding: com/zonesion/hbase/PropertiesHelper.class(in = 4480) (out= 1926)(deflated 57%)
adding: com/zonesion/hbase/WordCountHbaseReader.class(in = 2702) (out= 1226)(deflated 54%)
adding: com/zonesion/hbase/WordCountHbaseReader$WordCountHbaseReaderMapper.class(in = 3250) (out= 1275)(deflated 60%)
adding: com/zonesion/hbase/WordCountHbaseReader$WordCountHbaseReaderReduce.class(in = 2308) (out= 872)(deflated 62%)
adding: config.properties(in = 32) (out= 34)(deflated -6%)
6) Run the example
[hadoop@K-Master WordCountHbaseReader]$ hadoop jar WordCountHbaseReader.jar WordCountHbaseReader /user/hadoop/WordCountHbaseReader/output/
................... (output omitted) .............
14/12/30 17:51:58 INFO mapred.JobClient: Running job: job__0035
14/12/30 17:51:59 INFO mapred.JobClient:  map 0% reduce 0%
14/12/30 17:52:13 INFO mapred.JobClient:  map 100% reduce 0%
14/12/30 17:52:26 INFO mapred.JobClient:  map 100% reduce 100%
14/12/30 17:52:27 INFO mapred.JobClient: Job complete: job__0035
14/12/30 17:52:27 INFO mapred.JobClient: Counters: 39
14/12/30 17:52:27 INFO mapred.JobClient:   Job Counters
14/12/30 17:52:27 INFO mapred.JobClient:     Launched reduce tasks=1
14/12/30 17:52:27 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4913
14/12/30 17:52:27 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/12/30 17:52:27 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/12/30 17:52:27 INFO mapred.JobClient:     Rack-local map tasks=1
14/12/30 17:52:27 INFO mapred.JobClient:     Launched map tasks=1
14/12/30 17:52:27 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=13035
14/12/30 17:52:27 INFO mapred.JobClient:   HBase Counters
14/12/30 17:52:27 INFO mapred.JobClient:     REMOTE_RPC_CALLS=8
14/12/30 17:52:27 INFO mapred.JobClient:     RPC_CALLS=8
14/12/30 17:52:27 INFO mapred.JobClient:     RPC_RETRIES=0
14/12/30 17:52:27 INFO mapred.JobClient:     NOT_SERVING_REGION_EXCEPTION=0
14/12/30 17:52:27 INFO mapred.JobClient:     NUM_SCANNER_RESTARTS=0
14/12/30 17:52:27 INFO mapred.JobClient:     MILLIS_BETWEEN_NEXTS=9
14/12/30 17:52:27 INFO mapred.JobClient:     BYTES_IN_RESULTS=216
14/12/30 17:52:27 INFO mapred.JobClient:     BYTES_IN_REMOTE_RESULTS=216
14/12/30 17:52:27 INFO mapred.JobClient:     REGIONS_SCANNED=1
14/12/30 17:52:27 INFO mapred.JobClient:     REMOTE_RPC_RETRIES=0
14/12/30 17:52:27 INFO mapred.JobClient:   File Output Format Counters
14/12/30 17:52:27 INFO mapred.JobClient:     Bytes Written=76
14/12/30 17:52:27 INFO mapred.JobClient:   FileSystemCounters
14/12/30 17:52:27 INFO mapred.JobClient:     FILE_BYTES_READ=92
14/12/30 17:52:27 INFO mapred.JobClient:     HDFS_BYTES_READ=68
14/12/30 17:52:27 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=159978
14/12/30 17:52:27 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=76
14/12/30 17:52:27 INFO mapred.JobClient:   File Input Format Counters
14/12/30 17:52:27 INFO mapred.JobClient:     Bytes Read=0
14/12/30 17:52:27 INFO mapred.JobClient:   Map-Reduce Framework
14/12/30 17:52:27 INFO mapred.JobClient:     Map output materialized bytes=92
14/12/30 17:52:27 INFO mapred.JobClient:     Map input records=5
14/12/30 17:52:27 INFO mapred.JobClient:     Reduce shuffle bytes=92
14/12/30 17:52:27 INFO mapred.JobClient:     Spilled Records=10
14/12/30 17:52:27 INFO mapred.JobClient:     Map output bytes=76
14/12/30 17:52:27 INFO mapred.JobClient:     Total committed heap usage (bytes)=
14/12/30 17:52:27 INFO mapred.JobClient:     CPU time spent (ms)=2160
14/12/30 17:52:27 INFO mapred.JobClient:     Combine input records=0
14/12/30 17:52:27 INFO mapred.JobClient:     SPLIT_RAW_BYTES=68
14/12/30 17:52:27 INFO mapred.JobClient:     Reduce input records=5
14/12/30 17:52:27 INFO mapred.JobClient:     Reduce input groups=5
14/12/30 17:52:27 INFO mapred.JobClient:     Combine output records=0
14/12/30 17:52:27 INFO mapred.JobClient:     Physical memory (bytes) snapshot=
14/12/30 17:52:27 INFO mapred.JobClient:     Reduce output records=5
14/12/30 17:52:27 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=
14/12/30 17:52:27 INFO mapred.JobClient:     Map output records=5
7) Check the result
[hadoop@K-Master WordCountHbaseReader]$ hadoop fs -ls /user/hadoop/WordCountHbaseReader/output/
Found 3 items
-rw-r--r--   1 hadoop supergroup  18:04 /user/hadoop/WordCountHbaseReader/output/_SUCCESS
drwxr-xr-x   - hadoop supergroup  18:04 /user/hadoop/WordCountHbaseReader/output/_logs
-rw-r--r--   1 hadoop supergroup  18:04 /user/hadoop/WordCountHbaseReader/output/part-r-00000
[hadoop@K-Master WordCountHbaseReader]$ hadoop fs -cat /user/hadoop/WordCountHbaseReader/output/part-r-00000
Bye	count:1
Goodbye	count:1
Hadoop	count:2
Hello	count:2
World	count:2
bulk-load uses MapReduce to load files already on HDFS into HBase, which is very useful for ingesting massive data sets; see http://hbase.apache.org/docs/r0.89./bulk-loads.html:
HBase ships with a ready-made program for importing HDFS files, the bulk-load path. It consists of two steps (which can also be done in one pass):
1. Package the files into HFiles: hadoop jar /path/to/hbase.jar importtsv -Dimporttsv.columns=a,b,c <tablename> <inputdir>
For example:
hadoop dfs -cat test/1
hadoop jar ~/hbase/hbase-0.90.2.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,f1 t8 test
This starts a MapReduce job that creates the table t8 from the HDFS data; its row keys are 1 3 5 7, with the corresponding values 2 4 6 8.
Note: the source file is split on "\t" by default; to use a different delimiter, add e.g. -Dimporttsv.separator="," at run time to split on "," instead.
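To illustrate the delimiter behavior, here is a tiny stand-alone sketch (the sample file and its path are hypothetical, mirroring the 1/3/5/7 rows above):

```shell
# Build a sample input in importtsv's default format: fields separated by tabs.
printf '1\t2\n3\t4\n5\t6\n7\t8\n' > /tmp/importtsv_sample.tsv
cat /tmp/importtsv_sample.tsv
# If the source data were comma-separated instead, importtsv would need
# -Dimporttsv.separator="," ; the same rows would then look like this:
tr '\t' ',' < /tmp/importtsv_sample.tsv
```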
2. If an output directory is set in the previous step, e.g.
hadoop jar ~/hbase/hbase-0.90.2.jar importtsv -Dimporttsv.bulk.output=tmp -Dimporttsv.columns=HBASE_ROW_KEY,f1 t8 test
then table t8 is not created yet; the HFiles are only written under the tmp folder, which we can inspect:
hadoop dfs -du tmp
Found 3 items
hdfs://namenode:9000/user/test/tmp/_SUCCESS
hdfs://namenode:9000/user/test/tmp/_logs
hdfs://namenode:9000/user/test/tmp/f1
Then run hadoop jar hbase-VERSION.jar completebulkload /user/todd/myoutput mytable to move the HFiles from that output directory into the matching regions. Because this step is only a move (mv), it is very fast. For example:
hadoop jar ~/hbase/hbase-0.90.2.jar completebulkload tmp t8
hadoop dfs -du /hbase/t8/cdf809ade9428
Found 4 items
hdfs://namenode:9000/hbase/t8/cdf809ade9428/.oldlogs
hdfs://namenode:9000/hbase/t8/cdf809ade9428/.regioninfo
hdfs://namenode:9000/hbase/t8/cdf809ade9428/.tmp
hdfs://namenode:9000/hbase/t8/cdf809ade9428/f1
At this point table t8 has been created.
Note: if the data is very large and the table already contains regions, a split step runs to find the region each record belongs to and load it there.
Usage notes:
1. Because this runs as a Hadoop program, it does not automatically look up HBase's config path, so it cannot see HBase's settings. You need to add hbase-site.xml to the Hadoop configuration.
2. The jars under hbase/lib must also be added to the classpath.
3. When running step 2 above, the ZooKeeper settings must also be written into core-site.xml, because at that point not even hbase-site.xml is read; otherwise the job cannot connect to ZooKeeper.
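Notes 1 and 2 can be sketched as shell setup run before the importtsv/completebulkload commands — the install path here is an assumption, so adjust it to your cluster:

```shell
# Hypothetical HBase install location -- adjust to your environment.
HBASE_HOME=/usr/hbase
# Put hbase-site.xml (note 1) and every jar under hbase/lib (note 2)
# on the classpath that the `hadoop jar` command will use.
HBASE_JARS=$(echo "$HBASE_HOME"/lib/*.jar | tr ' ' ':')
export HADOOP_CLASSPATH="$HBASE_HOME/conf:$HBASE_JARS${HADOOP_CLASSPATH:+:$HADOOP_CLASSPATH}"
echo "$HADOOP_CLASSPATH"
```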
Comment: I had no problem generating the HFiles, but loading them into HBase fails — could this be point 3 of your notes? I added that and it still behaves the same. My Hadoop setup has four machines, one master and three slaves; the master doubles as HMaster and runs no ZooKeeper, while the three slaves are regionservers and each runs the ZooKeeper bundled with HBase. The run fails with:
ERROR mapreduce.LoadIncrementalHFiles: Encountered unrecoverable error from region server
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions:
Sat Apr 13 10:02:27 CST 2013, org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@34b23d12, java.net.SocketTimeoutException: Call to slave01/192.168.1.11:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.1.10:60938 remote=slave01/192.168.1.11:60020]
Sat Apr 13 10:12:12 CST 2013, org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@34b23d12, java.net.SocketTimeoutException: Call to slave01/192.168.1.11:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.1.10:60974 remote=slave01/192.168.1.11:60020]
        at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:183)
        at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.tryAtomicRegionLoad(LoadIncrementalHFiles.java:491)
        at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$1.call(LoadIncrementalHFiles.java:279)
        at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$1.call(LoadIncrementalHFiles.java:277)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Comment: Hello — your blog shows a systematic grasp of Hadoop, so maybe you have hit this one. I'm using Sqoop to import Oracle data into HBase:
./sqoop import --connect jdbc:oracle:thin:@192.168.8.131:1521:dcsh --username User_data2 --password yhdtest123qa --query "select * from so_ext t where \$CONDITIONS" -m 4 --hbase-create-table --hbase-table hso --column-family so --hbase-row-key id --split-by id
12/05/28 11:18:20 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 161.2344 seconds (0 bytes/sec)
12/05/28 11:18:20 INFO mapreduce.ImportJobBase: Retrieved 5011540 records.
Everything runs fine, but when I log in to HBase the table has been created yet holds no data:
hbase(main):028:0> scan 'hs'
ROW    COLUMN+CELL
0 row(s) in 0.0260 seconds
Any suggestions? Thanks.
Reply: Try flush 'hs'.
leeyok wrote: Hello — I just started with HBase and followed this post step by step, but got the following error:
11/10/31 09:20:16 INFO mapred.JobClient: Task Id : attempt__0002_m_, Status : FAILED
Error: java.lang.ClassNotFoundException: com.google.common.base.Splitter
I don't understand why a Google class would be missing. I downloaded a jar containing Splitter and put it in $HADOOP_HOME/lib, but it is still not found.
Reply: 1. Confirm the jar you downloaded really contains that class. 2. Confirm the jar is in the environment variables of every machine.
Follow-up: 1. The jar I downloaded does contain com.google.common.base.Splitter — is that enough for the class to be found? 2. I'm running pseudo-distributed on a single machine.
Comment: About note 3 ("write the ZooKeeper configuration into core-site.xml") — how exactly should it be configured? My job currently fails with:
Error: org/apache/zookeeper/Watcher
11/06/22 17:45:22 INFO mapred.JobClient: Task Id : attempt__0008_m_, Status : FAILED
Could step 3 be what I've misconfigured?
lc_koven replied: Just put the hbase.zookeeper.quorum setting into core-site.xml. The log you posted doesn't show the real cause in any detail.
Follow-up: Here is a fragment of the error log:
....
12:08:52,810 INFO org.apache.hadoop.mapred.JobInProgress: Choosing data-local task task__0001_m_000003
12:08:55,516 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt__0001_m_: Error: org/apache/zookeeper/Watcher
12:08:55,516 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt__0001_m_: Error: org/apache/zookeeper/Watcher
12:08:55,519 INFO org.apache.hadoop.mapred.JobTracker: Adding task (cleanup)'attempt__0001_m_' to tip task__0001_m_000000, for tracker 'tracker_TJSJHL212-:TJSJHL212-220/127.0.0.1:39629'
12:08:58,526 INFO org.apache.hadoop.mapred.JobTracker: Adding task (cleanup)'attempt__0001_m_' to tip task__0001_m_000001, for tracker 'tracker_TJSJHL212-:TJSJHL212-220/127.0.0.1:39629'
12:08:58,526 INFO org.apache.hadoop.mapred.JobTracker: Removed completed task 'attempt__0001_m_' from 'tracker_TJSJHL212-220.opi.com:TJSJHL212-220/127.0.0.1:39629'
....
Earlier there is also:
FATAL org.apache.hadoop.mapred.TaskTracker: Task: attempt__0002_m_ - Killed : org/apache/zookeeper/Watcher
lc_koven replied: Everything you pasted is INFO; those are not the actual error.