Hadoop cluster: starting/stopping datanodes fails, sbin/hadoop-daemons.sh: Temporary failure in name resolution

Problem:
When starting Hadoop (2.6.0 here), the following output appears:
[root@hd-m1 /]# ./hadoop/hadoop-2.6.0/sbin/start-all.sh&
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/01/23 20:23:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) Client VM warning: You have loaded library /hadoop/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
sed: -e expression #1, char 6: unknown option to `s'
-c: Unknown cipher type 'cd'
hd-m1: starting namenode, logging to /hadoop/hadoop-2.6.0/logs/hadoop-root-namenode-hd-m1.out
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Temporary failure in name resolution
Java: ssh: Could not resolve hostname Java: Temporary failure in name resolution
Client: ssh: Could not resolve hostname Client: Temporary failure in name resolution
You: ssh: Could not resolve hostname You: Temporary failure in name resolution
warning:: ssh: Could not resolve hostname warning:: Temporary failure in name resolution
VM: ssh: Could not resolve hostname VM: Temporary failure in name resolution
... (one more "Could not resolve hostname ...: Temporary failure in name resolution" line for every remaining word of the JVM warning) ...
noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Temporary failure in name resolution
hd-s1: starting datanode, logging to /hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-hd-s1.out
hd-s2: starting datanode, logging to /hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-hd-s2.out
Starting secondary namenodes [Java HotSpot(TM) Client VM warning: You have loaded library /hadoop/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
SecondaryNameNode]
sed: -e expression #1, char 6: unknown option to `s'
-c: Unknown cipher type 'cd'
Client: ssh: Could not resolve hostname Client: Temporary failure in name resolution
... (one more "Could not resolve hostname ...: Temporary failure in name resolution" line for every remaining word of the JVM warning) ...
guard.: ssh: Could not resolve hostname guard.: Temporary failure in name resolution
15/01/23 20:24:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /hadoop/hadoop-2.6.0/logs/yarn-root-resourcemanager-hd-m1.out
hd-s1: starting nodemanager, logging to /hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-hd-s1.out
hd-s2: starting nodemanager, logging to /hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-hd-s2.out
Solution:
The errors above come from missing environment variables. Adding the following lines to ~/.bash_profile or /etc/profile fixes the problem:
  # vi /etc/profile  (or: vi ~/.bash_profile)
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Then reload the file so the settings take effect:
  # source /etc/profile  (or: source ~/.bash_profile)
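The quoting is the part that matters here. A minimal sketch of the two exports (the install root /hadoop/hadoop-2.6.0 is taken from the paths in the log above; adjust it to your machine):

```shell
# Assumed install root, matching the paths in the log above.
export HADOOP_HOME=/hadoop/hadoop-2.6.0
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
# Double quotes keep the whole -D option a single shell word.
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
echo "$HADOOP_OPTS"
```

With the native library dir on java.library.path, the HotSpot stack-guard warning goes away, and with it the stream of words that the daemon scripts were misreading as hostnames.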
After days of fiddling I finally have Hadoop running again. I had not touched it since my undergraduate thesis, and now my graduate thesis brings it back; I used to run Hadoop 0.20.0, while the latest stable release is already 2.7.1, and the changes are substantial.
First, the error reported right after the initial setup:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) Client VM warning: You have loaded library /hadoop/hadoop-2.7.1/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
SecondaryNameNode]
sed: -e expression
-c: Unknown cipher type 'cd'
Client: ssh: Could not resolve hostname Client: Temporary failure in name resolution
... (one more "Could not resolve hostname ...: Temporary failure in name resolution" line per word of the JVM warning) ...
Solution:
In a shell, run vi /etc/profile (or vi ~/.bash_profile) and add the environment variables:
export HADOOP_HOME=/home/hadoop/labc/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
After editing, reload the file so the variables take effect: source /etc/profile or source ~/.bash_profile.
Hadoop 2.2.0 fully distributed cluster installation and setup – 过往记忆 (iteblog)
  If you want to set up a pseudo-distributed cluster instead, see this blog.
  After many days of fiddling, I finally have a Hadoop 2.2.0 distributed cluster configured across several machines. Here is a summary of the setup.
  Prerequisites:
  (1) Install JDK 6 or later on every machine, set JAVA_HOME and friends, and check that the java, javac, and jps commands work from a terminal; JDK setup itself is not covered here.
  (2) Install SSH on every machine (installation is covered elsewhere); configuring passwordless SSH login is described below.
  With the prerequisites in place, the distributed setup proceeds as follows:
  1. Give each machine a static IP address.
  Static IP configuration differs between Linux distributions; the steps below cover CentOS, Ubuntu, and Fedora 19:
  (1) On CentOS:
[wyp@wyp hadoop]$ sudo vim /etc/sysconfig/network-scripts/ifcfg-eth0
Add the following lines to that file:
IPADDR=192.168.142.139
NETMASK=255.255.255.0
NETWORK=192.168.0.0
Set IPADDR to whatever address you want; here it is 192.168.142.139.
Then restart the network service so the address takes effect:
[wyp@wyp hadoop]$ sudo service network restart
Shutting down interface eth0:
Device state: 3 (disconnected)
Shutting down loopback interface:
Bringing up loopback interface:
Bringing up interface eth0:
Active connection state: activated
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/7
[wyp@wyp hadoop]$
Then run ifconfig to check that the setting took effect:
[wyp@wyp hadoop]$ ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:9F:FB:C0
          inet addr:192.168.142.139  Bcast:192.168.142.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe9f:fbc0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST
          RX packets:389330 errors:0 dropped:0 overruns:0 frame:0
          TX packets:171679 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes: (451.6 MiB)  TX bytes: (.7 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING
          RX packets:80221 errors:0 dropped:0 overruns:0 frame:0
          TX packets:80221 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes: (1002.4 MiB)  TX bytes: (1002.4 MiB)
[wyp@wyp hadoop]$
The IP address is now set to 192.168.142.139.
  (2) On Ubuntu:
wyp@node1:~$ sudo vim /etc/network/interfaces
Add:
iface eth0 inet static
address 192.168.142.140
netmask 255.255.255.0
gateway 192.168.142.1
Again, apply the new address:
wyp@node1:~$ sudo /etc/init.d/networking restart
As before, run ifconfig to verify the setting; output omitted.
  (3) On Fedora 19 (static IP setup differs in other Fedora versions, which are not covered here):
[wyp@wyp network-scripts]$ sudo vim /etc/sysconfig/network-scripts/ifcfg-ens33
Add:
IPADDR0=192.168.142.138
NETMASK0=255.255.255.0
GATEWAY0=192.168.142.0
Then restart the network service so the address takes effect:
[wyp@wyp network-scripts]$ sudo service network restart
Restarting network (via systemctl):
As before, run ifconfig to verify the setting; output omitted.
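Before bouncing the network service, it can save a round trip to confirm that the ifcfg file really carries the address you intended. A small sketch (file paths as used above; `cfg_ip` is a hypothetical helper, not a system tool):

```shell
# Print the IPADDR (CentOS) or IPADDR0 (Fedora 19) value from an ifcfg file.
cfg_ip() {
  sed -n 's/^IPADDR[0-9]*=//p' "$1"
}
# Typical use on CentOS:
#   cfg_ip /etc/sysconfig/network-scripts/ifcfg-eth0    # expect 192.168.142.139
```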
  2. Set each host's hostname.
  Step 1 configured three machines: one CentOS, one Ubuntu, and one Fedora. They will form the cluster, with the Fedora host as master and the other two as slaves. This step changes the hostname on each of them:
  (1) On Fedora 19:
[wyp@wyp network-scripts]$ sudo hostnamectl set-hostname master
Check that it took effect:
[wyp@wyp network-scripts]$ hostname
  (2) On Ubuntu:
wyp@node1:~$ sudo vim /etc/hostname
Put the hostname you want in this file; here it is node1.
Check that it took effect:
wyp@node1:~$ hostname
  (3) On CentOS:
[wyp@node network-scripts]$ sudo vim /etc/sysconfig/network
Change the HOSTNAME line to the name you want; here it is node:
HOSTNAME=node
Check that it took effect:
[wyp@node network-scripts]$ hostname
  3. On all three machines, add the following entries to /etc/hosts:
[wyp@master ~]$ sudo vim /etc/hosts
Add these lines:
192.168.142.138 master
192.168.142.139 node
192.168.142.140 node1
These are simply the static IPs from step 1 paired with the hostnames from step 2. Use ping to check that the mapping works:
[wyp@master ~]$ ping node
PING node (192.168.142.139) 56(84) bytes of data.
64 bytes from node (192.168.142.139): icmp_seq=1 ttl=64 time=0.541 ms
64 bytes from node (192.168.142.139): icmp_seq=2 ttl=64 time=0.220 ms
--- node ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.220/0.380/0.541/0.161 ms
[wyp@master ~]$
If the ping succeeds, the mapping is in effect.
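To check all three mappings in one pass instead of pinging each name, a throwaway helper can read them straight out of a hosts-format file. A sketch under the hostnames used above (`hosts_ip` is a hypothetical helper, not part of Hadoop):

```shell
# Print the address a hosts-format file maps a given name to.
# Taking the file as an argument lets the same check run on a staged copy.
hosts_ip() {
  awk -v n="$2" '!/^#/ { for (i = 2; i <= NF; i++) if ($i == n) print $1 }' "$1"
}
# On each machine:
#   hosts_ip /etc/hosts master   # expect 192.168.142.138
#   hosts_ip /etc/hosts node     # expect 192.168.142.139
#   hosts_ip /etc/hosts node1    # expect 192.168.142.140
```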
  4. Set up passwordless SSH login.
  Installing SSH and the basics of passwordless login are covered elsewhere on this blog; here are the points to watch. After passwordless SSH is configured on the master, copy the generated id_dsa.pub to node and node1, for example with:
[wyp@localhost ~]$ cat /home/wyp/.ssh/id_dsa.pub |
ssh wyp@192.168.142.139 'cat - >> ~/.ssh/authorized_keys'
  Make sure the SSH service on 192.168.142.139 is running. The wyp in wyp@192.168.142.139 is the user name to log in as on that host. The same command works for copying id_dsa.pub to 192.168.142.140.
  Alternatively, copy the file to each host with scp:
[wyp@master Documents]$ scp /home/wyp/.ssh/id_dsa.pub
wyp@192.168.142.139:~/.ssh/authorized_keys
Check that master can now log in to node and node1 without a password:
[wyp@master Documents]$ ssh node
The authenticity of host 'node (192.168.142.139)' can't be established.
RSA key fingerprint is ae:99:43:f0:cf:c6:a9:82:6c:93:a1:65:54:70:a6:97.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node,192.168.142.139' (RSA)
to the list of known hosts.
Last login: Wed Nov  6 14:54:55 2013 from master
[wyp@node ~]$
  The first run of this command prints the messages above. The [wyp@node ~]$ prompt shows that master logged in to node without a password. If you are still prompted for a password, passwordless login is not working; that is usually a file-permission problem (fix described elsewhere).
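When it is a permission problem, the usual culprits are a group- or world-writable ~/.ssh or authorized_keys, which sshd refuses to accept keys from. A hedged sketch of the typical fix (`fix_ssh_perms` is a hypothetical helper; run it against ~/.ssh on the machine you are logging in to):

```shell
# Tighten the permissions sshd insists on before it accepts key logins:
# the .ssh directory itself, authorized_keys, and any private keys.
fix_ssh_perms() {
  dir=$1
  chmod 700 "$dir"
  [ -f "$dir/authorized_keys" ] && chmod 600 "$dir/authorized_keys"
  for k in "$dir"/id_*; do
    case "$k" in
      *.pub) ;;                          # public halves may stay readable
      *) [ -f "$k" ] && chmod 600 "$k" ;;
    esac
  done
  return 0
}
# On node:  fix_ssh_perms ~/.ssh
```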
  5. Download Hadoop; hadoop-2.2.0.tar.gz is used here, fetched with the commands below.
  Everything from here on happens on the master machine.
[wyp@wyp /home]$ mkdir /home/wyp/Downloads/hadoop
[wyp@wyp /home]$ cd /home/wyp/Downloads/hadoop
[wyp@wyp hadoop]$ wget \
http://mirror./apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz
After these commands, hadoop-2.2.0.tar.gz is saved in /home/wyp/Downloads/hadoop; unpack it:
[wyp@wyp hadoop]$ tar -zxvf hadoop-2.2.0.tar.gz
This creates a hadoop-2.2.0 directory under the hadoop folder; list its contents:
[wyp@wyp hadoop]$ cd hadoop-2.2.0
[wyp@wyp hadoop-2.2.0]$ ls -l
drwxr-xr-x. 2 wyp wyp  4096 Oct  7 14:38 bin
drwxr-xr-x. 3 wyp wyp  4096 Oct  7 14:38 etc
drwxr-xr-x. 2 wyp wyp  4096 Oct  7 14:38 include
drwxr-xr-x. 3 wyp wyp  4096 Oct  7 14:38 lib
drwxr-xr-x. 2 wyp wyp  4096 Oct  7 14:38 libexec
-rw-r--r--. 1 wyp wyp 15164 Oct  7 14:46 LICENSE.txt
drwxrwxr-x. 3 wyp wyp  4096 Oct 28 14:38 logs
-rw-r--r--. 1 wyp wyp       Oct  7 14:46 NOTICE.txt
-rw-r--r--. 1 wyp wyp       Oct  7 14:46 README.txt
drwxr-xr-x. 2 wyp wyp  4096 Oct 28 12:37 sbin
drwxr-xr-x. 4 wyp wyp  4096 Oct  7 14:38 share
The listing shows the unpacked tree.
  6. Configure Hadoop's environment variables.
[wyp@wyp hadoop]$ sudo vim /etc/profile
Append the following at the end of /etc/profile:
export HADOOP_DEV_HOME=/home/wyp/Downloads/hadoop/hadoop-2.2.0
export PATH=$PATH:$HADOOP_DEV_HOME/bin
export PATH=$PATH:$HADOOP_DEV_HOME/sbin
export HADOOP_MAPARED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
Save with :wq. To make the settings take effect, reload the file (source is a shell builtin, so run it without sudo):
[wyp@wyp hadoop]$ source /etc/profile
Type hadoop in a terminal to check that the environment variables work:
[wyp@node ~]$ hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries
                       availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create
                       a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
[wyp@node ~]$
If you see this usage message, the environment variables are in effect; if not, open a new shell (or reboot) and try again.
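A quick way to see whether the bin and sbin entries actually landed on PATH, without eyeballing echo $PATH (a sketch; `path_has` is a hypothetical helper):

```shell
# Report whether a directory is present in a colon-separated path list.
# With one argument it checks the live $PATH; an optional second argument
# lets you test an arbitrary list.
path_has() {
  case ":${2-$PATH}:" in
    *:"$1":*) echo yes ;;
    *)        echo no  ;;
  esac
}
# After sourcing /etc/profile:
#   path_has "$HADOOP_DEV_HOME/bin"    # expect yes
#   path_has "$HADOOP_DEV_HOME/sbin"   # expect yes
```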
  7. Edit Hadoop's configuration files.
First set the JDK path in hadoop-env.sh:
[wyp@wyp hadoop]$ vim etc/hadoop/hadoop-env.sh
Find JAVA_HOME in the file and set it to the absolute path of the JDK on your machine:
# The java implementation to use.
export JAVA_HOME=/home/wyp/Downloads/jdk1.7.0_45
Then edit core-site.xml, yarn-site.xml, mapred-site.xml, and hdfs-site.xml in turn:
---------------- core-site.xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:8020</value>
  <final>true</final>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/wyp/cloud/tmp/hadoop2.0</value>
</property>
---------------- yarn-site.xml
<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>master:8033</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>master:8088</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
---------------- mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapred.system.dir</name>
  <value>file:/hadoop/mapred/system/</value>
  <final>true</final>
</property>
<property>
  <name>mapred.local.dir</name>
  <value>file:/opt/cloud/hadoop_space/mapred/local</value>
  <final>true</final>
</property>
---------------- hdfs-site.xml
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/opt/cloud/hadoop_space/dfs/name</value>
  <final>true</final>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/opt/cloud/hadoop_space/dfs/data</value>
  <description>Determines where on the local
  filesystem an DFS data node should store its blocks.
  If this is a comma-delimited list of directories,
  then data will be stored in all named
  directories, typically on different devices.
  Directories that do not exist are ignored.
  </description>
  <final>true</final>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
With the configuration done, copy the entire hadoop-2.2.0 directory to node and node1 as-is; nothing in it needs to change per host.
  8. Turn off the firewall on master, node, and node1.
If starting the nodemanager on node hits a java.net.NoRouteToHostException like this:
java.net.NoRouteToHostException: No Route to Host from
localhost.localdomain/192.168.142.139 to 192.168.142.138:8031
failed on socket timeout exception: java.net.NoRouteToHostException:
N For more details see:
http://wiki.apache.org/hadoop/NoRouteToHost
.................. (much output omitted)
Caused by: java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
.................. (much output omitted)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
at org.apache.hadoop.ipc.Client.call(Client.java:1318)
... 19 more
then the firewall is still running. Disabling it differs between Linux distributions:
  (1) On Ubuntu, run: ufw disable
(to remove the firewall package entirely: apt-get remove iptables)
  (2) On Fedora, run:
[wyp@wyp hadoop]$
sudo systemctl stop firewalld.service
[wyp@wyp hadoop]$
sudo systemctl disable firewalld.service
  9. Check that Hadoop runs.
  First format HDFS on master:
[wyp@wyp hadoop]$
cd $HADOOP_DEV_HOME
[wyp@wyp hadoop-2.2.0]$
hdfs namenode -format
13/10/28 16:47:33 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
.............. (much output omitted) ..............
************************************************************/
13/10/28 16:47:33 INFO namenode.NameNode: registered UNIX signal
handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-d3--d83e120cacd6
13/10/28 16:47:34 INFO namenode.HostFileManager: read includes:
13/10/28 16:47:34 INFO namenode.HostFileManager: read excludes:
.............. (much output omitted) ..............
13/10/28 16:47:38 INFO util.ExitUtil: Exiting with status 0
13/10/28 16:47:38 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at wyp/192.168.142.138
************************************************************/
[wyp@wyp hadoop-2.2.0]$
Start the namenode and resourcemanager on master:
[wyp@wyp hadoop-2.2.0]$ sbin/hadoop-daemon.sh start namenode
[wyp@wyp hadoop-2.2.0]$ sbin/yarn-daemon.sh start resourcemanager
Start the datanode and nodemanager on node and node1:
[wyp@wyp hadoop-2.2.0]$ sbin/hadoop-daemon.sh start datanode
[wyp@wyp hadoop-2.2.0]$ sbin/yarn-daemon.sh start nodemanager
To check the cluster, run jps on master; if both the NameNode and ResourceManager processes are present, master is set up correctly.
[wyp@master hadoop]$ jps
2016 NameNode
2602 ResourceManager
Run jps on node (and node1); if both the DataNode and NodeManager processes are present, that node is set up correctly.
[wyp@node network-scripts]$ jps
7889 DataNode
7979 NodeManager
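The same check can be scripted across the whole cluster by grepping jps output for the expected daemon names. A sketch (`daemons_ok` is a hypothetical helper):

```shell
# Succeed only if every required daemon name appears in the jps output
# passed as the first argument.
daemons_ok() {
  out=$1; shift
  for d in "$@"; do
    case "$out" in
      *"$d"*) ;;
      *) echo "missing $d"; return 1 ;;
    esac
  done
  echo "all running"
}
# On master:               daemons_ok "$(jps)" NameNode ResourceManager
# From master, per slave:  daemons_ok "$(ssh node jps)" DataNode NodeManager
```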