How do I configure yarn-site.xml in Hadoop 2.7.x?

[A Hadoop HA cluster deployment walkthrough follows. Extraction destroyed its outline; what is recoverable: I. server layout and roles; II. base environment setup (software installation, passwordless SSH login, hostname configuration); III. cluster installation, configuration, and startup checks (likely the ZooKeeper ensemble referenced by the configs below); IV. Hadoop (HA) cluster deployment. Specific hostnames, versions, and commands were lost.]

Configure core-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://appcluster</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/data/hadoop/storage/tmp</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>172.18.35.29:2181,172.18.35.30:2181,172.18.34.232:2181</value>
    </property>
    <property>
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>2000</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>4320</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>root</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.native.lib</name>
        <value>true</value>
    </property>
</configuration>

Configure hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/data/hadoop/storage/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/data/hadoop/storage/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value></value>
    </property>
    <property>
        <name>dfs.datanode.du.reserved</name>
        <value></value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>appcluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.appcluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.appcluster.nn1</name>
        <value>namenode1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.appcluster.nn2</name>
        <value>namenode2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.servicerpc-address.appcluster.nn1</name>
        <value>namenode1:53310</value>
    </property>
    <property>
        <name>dfs.namenode.servicerpc-address.appcluster.nn2</name>
        <value>namenode2:53310</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.appcluster.nn1</name>
        <value>namenode1:8080</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.appcluster.nn2</name>
        <value>namenode2:8080</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:8080</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://namenode1:8485;namenode2:8485;datanode1:8485/appcluster</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.appcluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence(root:36000)</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_dsa_nn1</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hadoop/storage/hdfs/journal</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
        <value>60000</value>
    </property>
    <property>
        <name>ipc.client.connect.timeout</name>
        <value>60000</value>
    </property>
    <property>
        <name>dfs.image.transfer.bandwidthPerSec</name>
        <value></value>
    </property>
    <property>
        <name>dfs.namenode.accesstime.precision</name>
        <value>3600000</value>
    </property>
    <property>
        <name>dfs.datanode.max.transfer.threads</name>
        <value>4096</value>
    </property>
</configuration>

Configure mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>namenode1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>namenode1:19888</value>
    </property>
</configuration>

Configure yarn-site.xml:

<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>namenode1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>namenode1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>namenode1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>namenode1:8033</value>
    </property>
    <property>
        <name>yarn.nodemanager.address</name>
        <value>namenode1:8034</value>
    </property>
    <property>
        <name>yarn.nodemanager.webapp.address</name>
        <value>namenode1:80</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>namenode1:80</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>${hadoop.tmp.dir}/nodemanager/local</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>${hadoop.tmp.dir}/nodemanager/remote</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>${hadoop.tmp.dir}/nodemanager/logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.log.retain-seconds</name>
        <value>604800</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>16</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>50320</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>256</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>40960</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>8</value>
    </property>
</configuration>

[Note: the resource-related values in the files above, in particular yarn.nodemanager.resource.* and the yarn.scheduler.*-allocation-* bounds, must be adjusted to your servers' hardware; a worked sizing example appears in the parameter discussion below.]

[The rest of the walkthrough was lost in extraction. Its recoverable outline, matching the standard HDFS HA startup sequence: DataNode/slaves configuration; cluster startup (create the HA namespace in ZooKeeper, start the JournalNodes, format and start the active NameNode, bootstrap and start the standby NameNode, run the ZKFC on both NameNode hosts, start all DataNodes, start YARN); an HA failover test comparing state before and after switchover; routine stop/start maintenance, with the note that certain steps must run on the NameNode hosts; and Part V, Spark cluster deployment: build and install, configuration, tests in local mode, standalone cluster mode, and on YARN, plus the final directory layout and environment variables.]
Hadoop YARN configuration parameters dissected: RM- and NM-related memory settings
In YARN, resource management is handled jointly by the ResourceManager and the NodeManagers: the scheduler inside the ResourceManager allocates resources, while each NodeManager supplies and isolates them. Once the ResourceManager assigns resources on a given NodeManager to a task (so-called "resource scheduling"), that NodeManager must actually provide the task with those resources, and even guarantee them exclusively, as the basic footing the task runs on; this is so-called "resource isolation".

Accordingly, YARN lets users configure the physical memory available on each node. Note the word "available": a node's memory is shared by several services, some going to YARN, some to HDFS, some to HBase, and so on, and YARN's setting covers only what YARN itself may use.
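As a concrete sizing sketch (hypothetical numbers, not from the original post): on a 64 GB worker that also hosts a DataNode and an HBase RegionServer, you might leave about 16 GB to the OS and those services and give YARN the rest:

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>49152</value> <!-- 48 GB for YARN containers on a 64 GB node -->
</property>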
The relevant parameters:
(1) yarn.nodemanager.resource.memory-mb
The total physical memory YARN may use on the node. The default is 8192 (MB); if the node has less than 8 GB, you must lower this value yourself, because YARN does not probe the node's actual physical memory.
(2) yarn.nodemanager.vmem-pmem-ratio
The maximum virtual memory a task may use per 1 MB of physical memory. The default is 2.1.
(3) yarn.nodemanager.pmem-check-enabled
Whether to run a monitoring thread that checks each task's physical memory usage and kills the task outright once it exceeds its allocation. The default is true.
(4) yarn.nodemanager.vmem-check-enabled
Whether to run a monitoring thread that checks each task's virtual memory usage and kills the task outright once it exceeds its allocation. The default is true.
(5) yarn.scheduler.minimum-allocation-mb
The smallest physical memory a single task may request. The default is 1024 (MB); any request below this is raised to it.
(6) yarn.scheduler.maximum-allocation-mb
The largest physical memory a single task may request. The default is 8192 (MB). (A worked example follows this list.)
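To make (5) and (6) concrete, here is a sketch using the default bounds; the rounding behavior described is that of the default Capacity Scheduler, which normalizes each request up to a multiple of the minimum:

<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
</property>
<!-- a request for   700 MB is raised to 1024 MB (the minimum)   -->
<!-- a request for  1500 MB is rounded up to 2048 MB (2 x 1024)  -->
<!-- a request for 10000 MB is capped at 8192 MB (the maximum)   -->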
By default, YARN uses thread monitoring to judge whether a task is over-using memory, and kills it as soon as it is. Cgroups memory control is inflexible here (a task may not exceed its memory ceiling at any instant; if it does, it is killed or hits OOM), whereas a Java process's memory can double for an instant at fork time and then fall back to normal. Thread monitoring handles that case more gracefully: a momentary doubling of the process tree's memory past the limit can be treated as normal, and the task is not killed. For this reason, YARN provides no Cgroups-based memory isolation.
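A practical corollary (a common workaround, not something the original post prescribes): when the virtual-memory check keeps killing otherwise healthy JVM tasks, operators either raise the ratio or disable the check in yarn-site.xml:

<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value> <!-- allow more virtual memory per MB of physical memory -->
</property>
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value> <!-- or turn the virtual-memory check off entirely -->
</property>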
These memory settings can also be supplied per job at submission time, for example:

hadoop jar <jarName> -D mapreduce.reduce.memory.mb=5120

[hadoop@cMaster hadoop-2.5.2]$ ./bin/hadoop jar /home/hadoop/jar-output/TestLoop-1024M.jar -D mapreduce.map.memory.mb=5120 AESEnTest 1024 1 1

The trailing 1024 and the two 1s are input arguments to the jar itself.
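If you would rather not pass these flags on every submission, the same knobs can be given cluster-wide defaults in mapred-site.xml; a sketch reusing the 5120 MB figure from the command above:

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>5120</value> <!-- default container size for map tasks -->
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>5120</value> <!-- default container size for reduce tasks -->
</property>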
After setting up Hadoop 2.5.2 and running a freshly written MapReduce program, the following error appeared:

Container [pid=24156,containerID=container_1_002] is running beyond physical memory limits. Current usage: 2.1 GB of 2 GB physical memory used; 2.7 GB of 4.2 GB virtual memory used. Killing container.
Dump of the process-tree for container_1_002:
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
(bash) 0 0 6 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/hadoop-2.5.2/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1_0019/container_1_002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop/hadoop-2.5.2/logs/userlogs/application_1_0019/container_1_002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 192.168.199.93 33497 attempt_1_0019_m_ 2 1>/home/hadoop/hadoop-2.5.2/logs/userlogs/application_1_0019/containe...
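The numbers in that log line follow directly from the configuration. The 2048 MB container size is inferred from the "2 GB" limit in the log; note that the process dump also shows -Xmx2048m, a heap as large as the whole container, leaving no headroom for off-heap usage (exactly what the java.opts guidance below warns against):

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value> <!-- the "2 GB" physical cap reported in the log -->
</property>
<!-- virtual cap: 2048 MB x yarn.nodemanager.vmem-pmem-ratio (2.1) = 4300.8 MB,
     the "4.2 GB" in the log; actual physical usage of 2.1 GB exceeded the
     2 GB cap, so the physical-memory check killed the container -->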
Given the memory-configuration background above, we can summarize as follows
(RM = ResourceManager, NM = NodeManager, AM = ApplicationMaster).

RM memory configuration: two parameters (yarn-site.xml):
<property>
    <description>The minimum allocation for every container request at the RM,
    in MBs. Memory requests lower than this won't take effect,
    and the specified value will get allocated at minimum.</description>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>
<property>
    <description>The maximum allocation for every container request at the RM,
    in MBs. Memory requests higher than this won't take effect,
    and will get capped to this value.</description>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
</property>
These set the minimum and maximum memory a single container can request.

NM memory configuration (yarn-site.xml):
<property>
    <description>Amount of physical memory, in MB, that can be allocated
    for containers.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
</property>
<property>
    <description>Ratio between virtual memory to physical memory when
    setting memory limits for containers. Container allocations are
    expressed in terms of physical memory, and virtual memory usage
    is allowed to exceed this allocation by this ratio.
    </description>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
</property>
The former is the maximum memory available for containers on a single node; neither of the two RM values above should exceed it.
The latter is the virtual-memory ratio: a container's virtual memory may exceed its physical allocation by this factor. The default is 2.1.
AM memory configuration (mapred-site.xml):
mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
These set the container sizes for map and reduce tasks, and each should fall between the RM's minimum and maximum container sizes. If unset, the default is computed as max{MIN_Container_Size, (Total Available RAM / containers)}.
As a rule of thumb, the reduce size is set to twice the map size; both rules are illustrated in the sketch after the next two parameters.
Other AM parameters:
mapreduce.map.java.opts
mapreduce.reduce.java.opts
These two parameters pass options to the JVM that runs each task (Java, Scala, and other JVM programs). The memory-related options are -Xmx, -Xms, and the like, and their values must stay below the corresponding mapreduce.{map,reduce}.memory.mb container sizes above.
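Putting the two rules together, a sketch with illustrative values (assuming the common practice of sizing -Xmx to roughly 75-80% of the container):

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value> <!-- map container: 2 GB -->
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1536m</value> <!-- heap kept below the 2 GB container -->
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value> <!-- reduce container: twice the map size -->
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx3072m</value> <!-- likewise below the 4 GB container -->
</property>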
For the problem above, I chose the following fix: set mapreduce.map.memory.mb dynamically on the submitted job.

[hadoop@cMaster hadoop-2.5.2]$ ./bin/hadoop jar /home/hadoop/jar-output/TestLoop-1024M.jar -D mapreduce.map.memory.mb=5120 AESEnTest 1024 1 1