HBase: HMaster fails to start after a power outage

Background: Yesterday morning the office suddenly lost power. After power came back, I carelessly started the cluster as root, then shut it down. The resulting problems:

Phase 1: HMaster exited on its own right after starting. Searching around suggested a permissions problem, so I fixed the permissions, but that did not solve it. Sigh. Time to actually read the logs! I found this in the log:
11:16:42,457 INFO org.apache.hadoop.hbase.master.SplitLogManager: found 0 orphan tasks and 0 rescan nodes
11:16:42,479 INFO org.apache.hadoop.hdfs.DFSClient: No node available for block: blk_-2 file=/hbase/hbase.version
11:16:42,479 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-2 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
11:16:45,482 INFO org.apache.hadoop.hdfs.DFSClient: No node available for block: blk_-2 file=/hbase/hbase.version
11:16:45,482 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-2 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
11:16:48,483 INFO org.apache.hadoop.hdfs.DFSClient: No node available for block: blk_-2 file=/hbase/hbase.version
11:16:48,484 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-2 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
11:16:51,487 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_-2 file=/hbase/hbase.version
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2266)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2060)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2221)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2149)
        at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
        at java.io.DataInputStream.readUTF(DataInputStream.java:589)
        at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:289)
        at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:327)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:444)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:148)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:133)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:549)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:408)
        at java.lang.Thread.run(Thread.java:745)

Later I found that under .logs there was a directory named Slave1.Hadoop,0214680-splitting whose log file was the problem, so I deleted it, restarted the cluster, and the problem was solved! But a new problem followed.

Phase 2: the cluster started normally, but the web UI showed:
[Screenshot: 60010.png, the HBase Master web UI on port 60010]
I quickly checked the cluster processes, and that was depressing too:

[~]$ ./hadoopCtrl.sh list
3118 NameNode
4829 Jps
3533 QuorumPeerMain
3288 SecondaryNameNode
3380 JobTracker
3820 HMaster
======= Master.Hadoop ==========
2896 DataNode
3000 TaskTracker
3627 Jps
3121 QuorumPeerMain
3230 HRegionServer
======= Slave1.Hadoop ==========
2511 Jps
2186 HRegionServer
1968 TaskTracker
1874 DataNode
2105 QuorumPeerMain
======= Slave2.Hadoop ==========
2067 DataNode
2522 Jps
2386 HRegionServer
2308 QuorumPeerMain
2172 TaskTracker
======= Slave3.Hadoop ==========
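A listing like the one above can be checked automatically: given a node's jps output, verify that every expected daemon is present. A minimal sketch, assuming the role-to-daemon mapping below matches your cluster (the hadoopCtrl.sh script is from this post; these expected sets are read off the output above, not from any standard tool):

```python
# Expected daemons per node, inferred from the hadoopCtrl.sh listing above.
EXPECTED = {
    "Master.Hadoop": {"NameNode", "SecondaryNameNode", "JobTracker",
                      "HMaster", "QuorumPeerMain"},
    "Slave1.Hadoop": {"DataNode", "TaskTracker", "HRegionServer",
                      "QuorumPeerMain"},
}

def missing_daemons(jps_output, expected):
    """Return the expected daemon names absent from one node's `jps` output.

    Each jps line looks like '3118 NameNode'; the 'Jps' process itself
    is ignored because it only exists while the listing runs.
    """
    running = {
        line.split(None, 1)[1]
        for line in jps_output.splitlines()
        if line.strip() and not line.strip().endswith("Jps")
    }
    return expected - running
```

Running this against the listing above reports nothing missing on any node, which is exactly the confusing part: every daemon is alive, yet the cluster misbehaves.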
So let's see how the HBase shell is doing. Result:
hbase(main):002:0> list
TABLE
COLUMNSTABLE
PERSONALINFO
configtable
3 row(s) in 0.1230 seconds

hbase(main):002:0> scan 'configtable'
ROW                                          COLUMN+CELL

ERROR: org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for configtable,,99 after 7 tries.

Here is some help for this command:
Scan a table; pass table name and optionally a dictionary of scanner
specifications.  Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, TIMESTAMP, MAXLENGTH,
or COLUMNS, CACHE

If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in
'col_family:'.

The filter can be specified in two ways:
1. Using a filterString - more information on this is available in the
Filter Language document attached to the HBASE-4176 JIRA
2. Using the entire package name of the filter.

Some examples:

  hbase> scan '.META.'
  hbase> scan '.META.', {COLUMNS => 'info:regioninfo'}
  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
  hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [, ]}
  hbase> scan 't1', {FILTER => "(PrefixFilter ('row2') AND (QualifierFilter (>=, 'binary:xyz'))) AND (TimestampsFilter ( 123, 456))"}
  hbase> scan 't1', {FILTER => org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}

For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false).  By
default it is enabled.  Examples:

  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}

Also for experts, there is an advanced option -- RAW -- which instructs the
scanner to return all cells (including delete markers and uncollected deleted
cells). This option cannot be combined with requesting specific COLUMNS.
Disabled by default.  Example:

  hbase> scan 't1', {RAW => true, VERSIONS => 10}
So the HBase cluster is up, but it doesn't act like a cluster at all! Keep reading the logs:
17:00:38,737 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is
17:00:38,738 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server Slave1.Hadoop/192.168.1.3:2181. Will not attempt to authenticate using SASL (unknown error)
17:00:38,738 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to Slave1.Hadoop/192.168.1.3:2181, initiating session
17:00:38,755 WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
17:00:38,755 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server Slave1.Hadoop/192.168.1.3:2181, sessionid = 0x24cb1fcf74e0004, negotiated timeout = 40000
17:00:39,178 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 267374 ms, expecting minimum of 1, maximum of , timeout of 4500 ms, interval of 1500 ms.
17:00:40,680 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 268876 ms, expecting minimum of 1, maximum of , timeout of 4500 ms, interval of 1500 ms.
[... the same message repeats every 1.5 s, with "checked in" stuck at 0 ...]
17:00:57,203 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 285399 ms, expecting minimum of 1, maximum of , timeout of 4500 ms, interval of 1500 ms.
This is what HMaster turned into after its "normal" startup......
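The loop behind those messages is straightforward: the master polls how many region servers have checked in, at a fixed interval, until a minimum count is reached. A minimal sketch of that pattern (function and parameter names are mine, not HBase's; the real ServerManager logic has more conditions):

```python
import time

def wait_for_min_servers(get_count, minimum=1, interval=1.5, timeout=4.5,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll get_count() every `interval` seconds until at least `minimum`
    servers have checked in, or `timeout` seconds have elapsed.
    Returns True if the minimum was reached, False on timeout."""
    start = clock()
    while True:
        if get_count() >= minimum:
            return True
        if clock() - start >= timeout:
            return False
        sleep(interval)
```

In the log above the count stays at 0 after more than 280 seconds, so the master just keeps waiting; the interesting question is why no RegionServer ever checks in.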
The HRegionServer log looks like this:
16:10:03,292 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect to Master server at localhost,2179738
16:11:03,326 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to connect to master. Retrying. Error was:
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:656)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:390)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:436)
        at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1124)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
        at com.sun.proxy.$Proxy8.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:138)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:208)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getMaster(HRegionServer.java:1995)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2041)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:736)
        at java.lang.Thread.run(Thread.java:745)
16:11:03,527 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect to Master server at localhost,2179738
16:12:03,562 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to connect to master. Retrying. Error was:
org.apache.hadoop.hbase.ipc.HBaseClient$FailedServerException: This server is in the failed servers list: localhost/192.168.1.3:60000
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:425)
        at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1124)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
        at com.sun.proxy.$Proxy8.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:138)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:208)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getMaster(HRegionServer.java:1995)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2041)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:736)
        at java.lang.Thread.run(Thread.java:745)
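Note the address in both errors: the RegionServer is trying to reach the master at localhost (and the failed-servers entry shows localhost/192.168.1.3:60000). That usually means the master published a hostname that resolves to the loopback interface, a classic /etc/hosts problem. A quick, hedged check (the helper name is mine):

```python
import socket

def resolves_to_loopback(host):
    """Return True if `host` resolves to a loopback (127.x.x.x) address.

    If the master host's own hostname resolves to loopback, the address
    it publishes (e.g. in ZooKeeper) is useless to remote RegionServers:
    they end up trying to connect to themselves and get
    "Connection refused", as in the traces above.
    """
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        # Name does not resolve at all; a different problem, but not loopback.
        return False
    return ip.startswith("127.")
```

If the master's hostname resolves to loopback on the master box, fix /etc/hosts so it maps to the real LAN address and restart HBase.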
I tried restoring .META., but that didn't work.

hbase hbck

1. Repair the HBase .META. table:
hbase hbck -fixMeta

2. Reassign the regions in .META. to the region servers:
hbase hbck -fixAssignments

But it kept printing this:
15/04/13 18:13:31 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.1.4:2181,192.168.1.3:2181,192.168.1.5:2181 sessionTimeout=180000 watcher=hconnection
15/04/13 18:13:31 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is
15/04/13 18:13:31 INFO zookeeper.ClientCnxn: Opening socket connection to server Slave2.Hadoop/192.168.1.4:2181. Will not attempt to authenticate using SASL (unknown error)
15/04/13 18:13:31 INFO zookeeper.ClientCnxn: Socket connection established to Slave2.Hadoop/192.168.1.4:2181, initiating session
15/04/13 18:13:31 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
15/04/13 18:13:31 INFO zookeeper.ClientCnxn: Session establishment complete on server Slave2.Hadoop/192.168.1.4:2181, sessionid = 0x34cb1fcf83e0000, negotiated timeout = 40000
15/04/13 18:14:31 DEBUG client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HCon; serverName=
Any pointers from the experts would be greatly appreciated!! Many thanks in advance!
Reply from a moderator:

The hbase shell and the HBase master are not the same thing; even if the master is down, the hbase shell still works. Judging from the hbase shell output and the errors above, the HMaster has died and cannot be connected to.
Reply from the original poster (last edited by nobileamir at 09:10):

Thanks, moderator. I can now confirm that the HBase Master is indeed dead: the process is hung (stop-hbase.sh can stop the slave nodes, but not the master node)! Judging from the 60010 web UI, it doesn't look dead, though! I still haven't found the cause and am working on it..... Thanks again!
Follow-up from the original poster:

It turned out there was a problem in my HBase configuration. I edited hbase-site.xml and added:
<property>
        <name>hbase.rpc.timeout</name>
        <value>1200000</value>
</property>
<property>
        <name>hbase.snapshot.master.timeoutMillis</name>
        <value>1200000</value>
</property>
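After hand-editing hbase-site.xml, it is worth a quick sanity check that the file still parses and the values are what you expect. A small sketch using only Python's standard library (the property names come from the snippet above; the helper itself is mine, not an HBase tool):

```python
import xml.etree.ElementTree as ET

def read_hbase_properties(xml_text):
    """Parse an hbase-site.xml-style <configuration> document into a dict
    mapping property name to value.

    Raises xml.etree.ElementTree.ParseError on malformed XML, e.g. the
    unescaped '&' characters a broken copy-paste would leave behind.
    """
    root = ET.fromstring(xml_text)
    return {
        prop.findtext("name"): prop.findtext("value")
        for prop in root.iter("property")
    }
```

Read the real file with open("hbase-site.xml").read(), pass it in, and assert that hbase.rpc.timeout is the value you intended before restarting the cluster.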
Also, I had overlooked this error:
WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to connect to master. Retrying. Error was:
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:328)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:362)
        at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1046)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)