Hadoop 2.2.0 cluster installation on CentOS 6.4 (32/64-bit)
1. Prepare the environment
   Install VMware 10 and set up three CentOS 6.4 virtual machines under VMware.
   1) Install Chinese language support:
      1. Root privileges are required, so log in as root or run su root.
      2. yum install "@Chinese Support"
   2) Install ssh or vsftpd:
      Use chkconfig --list to check whether the vsftpd service is installed.
      Install it directly with yum: yum install vsftpd
      Viewing and managing the ftp service:
      Start the ftp service: service vsftpd start
      Check the ftp service status: service vsftpd status
      Restart the ftp service: service vsftpd restart
      Stop the ftp service: service vsftpd stop
   3) Install the JDK (a minimal sketch follows).
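The original post gives no commands for the JDK step; the following is a minimal sketch, assuming an Oracle JDK 7u55 RPM (the package filename is hypothetical, but the install path matches the JAVA_HOME used later in hadoop-env.sh):

      # run as root on all three nodes; the rpm filename is an assumption
      rpm -ivh jdk-7u55-linux-x64.rpm        # installs under /usr/java/jdk1.7.0_55

      # append to /etc/profile so every login shell picks it up
      export JAVA_HOME=/usr/java/jdk1.7.0_55
      export PATH=$PATH:$JAVA_HOME/bin

      # apply and verify
      source /etc/profile
      java -version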
2. Change the hostnames
   I installed one virtual machine and created the other two via VMware's Virtual Machine -> Manage -> Clone, so all three machines ended up with the same hostname. That is obviously not what we want, so the hostname of the other two machines must be changed.
   [root@slaver2 sysconfig]# vi /etc/sysconfig/network
   NETWORKING=yes
   HOSTNAME=slaver
3. Configure /etc/hosts; the configuration is the same on all three servers
   vi /etc/hosts
   192.168.21.128   master
   192.168.21.131   slaver
   192.168.21.130   slaver2
4. Create a user (I first ran everything as root and later hit a "Browse the filesystem" error; the documentation recommends using a dedicated, newly created user)
   useradd hadoop
   passwd hadoop
   Enter the password and confirm it.
5. Passwordless SSH login (a minimal sketch follows)
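The original post leaves this step empty; the following is a minimal sketch, assuming OpenSSH, the hadoop user created above, and that ssh-copy-id is available (otherwise append the public key to ~/.ssh/authorized_keys on each node by hand):

   # run as the hadoop user on master
   su - hadoop
   ssh-keygen -t rsa -P ""            # accept the default key location

   # copy the public key to every node, including master itself
   ssh-copy-id hadoop@master
   ssh-copy-id hadoop@slaver
   ssh-copy-id hadoop@slaver2

   # verify: these should log in without prompting for a password
   ssh hadoop@slaver hostname
   ssh hadoop@slaver2 hostname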
6. Download Hadoop and configure the environment
   Download hadoop-2.2.0.tar.gz (about 104 MB) from an Apache mirror under /apache/hadoop/common/hadoop-2.2.0/.
   Configure the Hadoop environment variables:
   vi /etc/profile
   Add the following at the bottom of the file:
   export HADOOP_HOME=/usr/zkt/hadoop2.2.0/hadoop-2.2.0
   export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
   export HADOOP_LOG_DIR=/usr/zkt/hadoop2.2.0/hadoop-2.2.0/logs
   export YARN_LOG_DIR=$HADOOP_LOG_DIR
   export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
   export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Note: on a 64-bit operating system the last two lines above (HADOOP_COMMON_LIB_NATIVE_DIR and HADOOP_OPTS, originally highlighted in red) must be added.
Another fix reported online:
When starting with ./sbin/start-dfs.sh or ./sbin/start-all.sh you will see warnings like the following:
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
Java: ssh: Could not resolve hostname Java: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
This error occurs on 64-bit operating systems because the native library files shipped with the official Hadoop download (e.g. lib/native/libhadoop.so.1.0.0) are compiled for 32-bit; running them on a 64-bit system produces the warnings above.
One fix is to recompile Hadoop on the 64-bit system; another is to add the following two lines to hadoop-env.sh and yarn-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Note: /usr/zkt/hadoop2.2.0/hadoop-2.2.0 is simply the path I chose to extract the downloaded Hadoop archive into.
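A short sketch of applying and verifying the profile changes (assuming the paths above):

   source /etc/profile
   echo $HADOOP_HOME        # should print /usr/zkt/hadoop2.2.0/hadoop-2.2.0
   hadoop version           # should report Hadoop 2.2.0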
7. Edit the Hadoop configuration files under hadoop2.2.0/etc/hadoop
   1) Edit hadoop-env.sh and yarn-env.sh to make sure Hadoop has the Java environment it needs:
      # The java implementation to use.
      export JAVA_HOME=/usr/java/jdk1.7.0_55
   2) Edit core-site.xml to define the file system:
      <configuration>
        <property>
          <name>fs.default.name</name>
          <value>hdfs://master:9000/</value>
        </property>
        <property>
          <name>hadoop.tmp.dir</name>
          <value>/usr/zkt/hadoop2.2.0/tmp</value>
        </property>
      </configuration>
   3) Edit hdfs-site.xml to define the name node and data nodes:
      <configuration>
        <property>
          <name>dfs.datanode.data.dir</name>
          <value>/usr/zkt/hadoop2.2.0/hdf/data</value>
          <final>true</final>
        </property>
        <property>
          <name>dfs.namenode.name.dir</name>
          <value>/usr/zkt/hadoop2.2.0/hdf/name</value>
          <final>true</final>
        </property>
        <property>
          <name>dfs.replication</name>
          <value>2</value>
        </property>
        <property>
          <name>dfs.permissions</name>
          <value>false</value>
        </property>
      </configuration>
   4) Edit mapred-site.xml (Configurations for MapReduce Applications):
      <configuration>
        <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
        </property>
        <property>
          <name>mapreduce.jobhistory.address</name>
          <value>master:10020</value>
        </property>
        <property>
          <name>mapreduce.jobhistory.webapp.address</name>
          <value>master:19888</value>
        </property>
      </configuration>
   5) Edit yarn-site.xml.
      This file covers:
      1) Configurations for ResourceManager and NodeManager
      2) Configurations for ResourceManager
      3) Configurations for NodeManager
      4) Configurations for the History Server (needs to be moved elsewhere)
      <configuration>
        <property>
          <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
          <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
          <name>yarn.resourcemanager.address</name>
          <value>master:8032</value>
        </property>
        <property>
          <name>yarn.resourcemanager.scheduler.address</name>
          <value>master:8030</value>
        </property>
        <property>
          <name>yarn.resourcemanager.resource-tracker.address</name>
          <value>master:8031</value>
        </property>
        <property>
          <name>yarn.resourcemanager.admin.address</name>
          <value>master:8033</value>
        </property>
        <property>
          <name>yarn.resourcemanager.webapp.address</name>
          <value>master:8088</value>
        </property>
      </configuration>
8. Create the folders referenced by the configuration in step 7
   The data, tmp, name and logs directories need to exist; create them with mkdir -p, e.g. mkdir -p /usr/zkt/hadoop2.2.0/hdf/data (a full list is sketched below).
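A minimal sketch creating all of the directories referenced above, assuming the paths used throughout this guide:

   mkdir -p /usr/zkt/hadoop2.2.0/tmp
   mkdir -p /usr/zkt/hadoop2.2.0/hdf/data
   mkdir -p /usr/zkt/hadoop2.2.0/hdf/name
   mkdir -p /usr/zkt/hadoop2.2.0/hadoop-2.2.0/logs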
9. Grant permissions on these folders. This is important: without it Hadoop has no permission to create or write its files.
   su - root
   chown -R hadoop:hadoop /usr/zkt/hadoop2.2.0   (see the chown man page if this command is unfamiliar)
   Alternatively, switch to the hadoop user and grant permissions with chmod -R 777 data.
10. Copy the configured Hadoop directory to the slaver and slaver2 hosts
    scp -r /usr/zkt/hadoop2.2.0/hadoop-2.2.0 hadoop@slaver:/usr/zkt/hadoop2.2.0/
    scp -r /usr/zkt/hadoop2.2.0/hadoop-2.2.0 hadoop@slaver2:/usr/zkt/hadoop2.2.0/
11. Format the Hadoop namenode
    If the Hadoop environment variables are configured correctly, simply run
    hdfs namenode -format
    If you get "hadoop: command not found", run:
    echo $PATH
    You will find that PATH contains /home/hadoop/bin rather than the directories we configured. Copy the bin and sbin folders from the hadoop-2.2.0 package into /home/hadoop/, run echo $PATH again, and the commands are found (a sketch of this workaround follows).
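A minimal sketch of that workaround, assuming the install path used in this guide (fixing the PATH line in /etc/profile and re-sourcing it would achieve the same thing):

    cp -r /usr/zkt/hadoop2.2.0/hadoop-2.2.0/bin  /home/hadoop/
    cp -r /usr/zkt/hadoop2.2.0/hadoop-2.2.0/sbin /home/hadoop/
    echo $PATH                 # /home/hadoop/bin is on the PATH, so hdfs is now found
    hdfs namenode -format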
12. Turn off the firewall; it must be disabled on all three servers
    Check the iptables status:
    service iptables status
    iptables auto-start at boot:
    enable: chkconfig iptables on    disable: chkconfig iptables off
    iptables service:
    start: service iptables start    stop: service iptables stop
13. Start Hadoop
    start-all.sh
    Stop Hadoop
    stop-all.sh
14. Check the running node processes (sample output sketched below)
    jps
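Roughly what jps should show once the cluster is up, assuming only slaver and slaver2 act as worker nodes; the process IDs are illustrative, not taken from the original setup:

    # on master
    1234 NameNode
    1456 SecondaryNameNode
    1678 ResourceManager
    1890 Jps

    # on slaver / slaver2
    2234 DataNode
    2456 NodeManager
    2678 Jps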
15. Check the services after startup
    The master should be running the ResourceManager service and each slave should be running a NodeManager service.
    Check the cluster status: ./bin/hdfs dfsadmin -report
    Check the file blocks: ./bin/hdfs fsck / -files -blocks
    Check the status of each node: http://master:50070
Apache Debugging Guide
This document is a collection of notes regarding tools and techniques for
debugging Apache httpd and its modules.
Got more tips? Send 'em to
If you use the gcc compiler, it is likely that the best debugger for your
system is gdb. This is only a brief summary of how to run gdb on Apache --
you should look at the info and man files for gdb to get more information
on gdb commands and common debugging techniques. Before running gdb, be
sure that the server is compiled with the -g option in CFLAGS to
include the symbol information in the object files.
The only tricky part of running gdb on Apache is forcing the server into a
single-process mode so that the parent process being debugged does the
request-handling work instead of forking child processes. We have provided
the -X option for that purpose, which will work fine for most cases.
However, some modules don't like starting up with -X, but are happy if
you force only one child to run (using "MaxClients 1"); you can then
use gdb's attach command to debug the child server, as sketched below.
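A minimal sketch of that attach workflow (the PID shown is hypothetical; with MaxClients 1 there will be exactly one child httpd process to attach to):

% ps -ef | grep httpd          # note the PID of the single child process, e.g. 10624
% gdb httpd
(gdb) attach 10624
(gdb) b ap_process_request     # break where request handling begins
(gdb) c                        # continue, then issue a request from another window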
The following example shows the output of gdb run on a server executable (httpd) in the current
working directory and using the server root of /usr/local/apache :
% gdb httpd
GDB is free software and you are welcome to distribute copies of it
under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.16.gnat.1.13 (sparc-sun-solaris2.5),
Copyright 1996 Free Software Foundation, Inc...
(gdb) b ap_process_request
Breakpoint 1 at 0x49fb4: file http_request.c, line 1164.
(gdb) run -X -d /usr/local/apache
Starting program: /usr/local/apache/src/httpd -X -d /usr/local/apache
[at this point I make a request from another window]
Breakpoint 1, ap_process_request (r=0x95250) at http_request.c:1164
if (ap_extended_status)
ap_time_process_request(r->connection->child_num,...
process_request_internal(r);
process_request_internal (r=0x95250) at http_request.c:1028
if (!r->proxyreq && r->parsed_uri.path) {
access_status = ap_unescape_url(r->parsed_uri.path);
if (access_status) {
ap_getparents(r->uri);
if ((access_status = location_walk(r))) {
if ((access_status = ap_translate_name(r))) {
if (!r->proxyreq) {
if (r->method_number == M_TRACE) {
if (r->proto_num > HTTP_VERSION(1,0) &&
if ((access_status = directory_walk(r))) {
directory_walk (r=0x95250) at http_request.c:288
core_server_config *sconf = ap_get_module_...
(gdb) b ap_send_error_response
Breakpoint 2 at 0x47dcc: file http_protocol.c, line 2090.
Continuing.
Breakpoint 2, ap_send_error_response (r=0x95250, recursive_error=0)
at http_protocol.c:2090
BUFF *fd = r->connection->
(gdb) where
ap_send_error_response (r=0x95250, recursive_error=0)
at http_protocol.c:2090
0x49b10 in ap_die (type=403, r=0x95250) at http_request.c:989
0x49b60 in decl_die (status=403, phase=0x62db8 "check access",
r=0x95250)
at http_request.c:1000
0x49f68 in process_request_internal (r=0x95250) at
http_request.c:1141
0x49fe0 in ap_process_request (r=0x95250) at http_request.c:1167
0x439d8 in child_main (child_num_arg=550608) at http_main.c:3826
0x43b5c in make_child (s=0x7c3e8, slot=0, now=)
at http_main.c:3898
0x43ca8 in startup_children (number_to_start=6) at http_main.c:3972
0x44260 in standalone_main (argc=392552, argv=0x75800) at
http_main.c:4250
0x449fc in main (argc=4, argv=0xefffee8c) at http_main.c:4534
int status = r->
(gdb) p status
There are a few things to note about the above example:
the " gdb httpd " command does not include any command-line options
for httpd: those are provided when the " run " command is done within
I set a breakpoint before starting the run so that execution would stop
at the top of ap_process_request();
the " s " command steps through the code and into called procedures,
whereas the " n " (next) command steps through the code but not into
called procedures.
additional breakpoints can be set with the " b " command, and the run
continued with the " c " command.
use the " where " command (a.k.a. " bt ") to see a stack backtrace
that shows the order of called procedures and their parameter values.
use the " p " command to print the value of a variable.
A file in the root directory called .gdbinit provides useful macros
for printing out various internal structures of httpd like tables
(dump_table), brigades (dump_brigade) and filter chains (dump_filters).
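For example, once stopped inside a handler or filter you might invoke the macros like this; the variable names (r, bb) are hypothetical and depend on the frame you are stopped in, and the exact arguments each macro expects are documented in the .gdbinit file itself:

(gdb) source /path/to/httpd-source/.gdbinit   # load the macros if gdb did not pick them up automatically
(gdb) dump_table r->headers_in                # print the request's input header table
(gdb) dump_brigade bb                         # print a bucket brigade passed to a filter
(gdb) dump_filters r->output_filters          # print the request's output filter chain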
If you are debugging a repeatable crash, simply run gdb as above and make
the request -- gdb should capture the crash and provide a prompt at the point where it occurred.
If you are debugging an apparent infinite loop, simply run gdb as above and
type a Control-C -- gdb will interrupt the process and provide a prompt
where it was stopped.
If you are debugging a system crash and you have a core file from the
crash, then do the following:
% gdb httpd -c core
(gdb) where
and it will (hopefully) print a stack backtrace of where the core dump
occurred during processing.
Getting a live backtrace on unix
A backtrace will let you know the hierarchy of procedures that were called
to get to a particular point in the process. On some platforms you can get
a live backtrace of any process.
For SVR4-based variants of Unix, the pstack command for /proc can be used
to display a live backtrace. For example, on Solaris it looks like this:
% /usr/proc/bin/pstack 10623
httpd -d /usr/local/apache
ef5b68d8 poll
(efffcd08, 0, 3e8)
ef5d21e0 select
(0, ef612c28, 0, 0, 3e8, efffcd08) + 288
wait_or_timeout (0, 7, 7c3e8, 60f40, 52c00) + 78
standalone_main (5fd68, 7, 7) + 240
(3, efffeee4, efffeef4, 75fe4, 1, 0) + 374
000162fc _start
(0, 0, 0, 0, 0, 0) + 5c
Another technique is to use gdb to attach to the running process and then
using "where" to print the backtrace, as in
% gdb httpd 10623
GDB is free software and you are welcome to distribute copies of it
under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.16.gnat.1.13 (sparc-sun-solaris2.5),
Copyright 1996 Free Software Foundation, Inc...
/usr/local/apache/src/10623: No such file or directory.
Attaching to program `/usr/local/apache/src/httpd', process 10623
Reading symbols from /usr/lib/libsocket.so.1...done.
Reading symbols from /usr/lib/libnsl.so.1...done.
Reading symbols from /usr/lib/libc.so.1...done.
Reading symbols from /usr/lib/libdl.so.1...done.
Reading symbols from /usr/lib/libintl.so.1...done.
Reading symbols from /usr/lib/libmp.so.1...done.
Reading symbols from /usr/lib/libw.so.1...done.
Reading symbols from
/usr/platform/SUNW,Ultra-1/lib/libc_psr.so.1...done.
0xef5b68d8 in
(gdb) where
0xef5b68d8 in
0xef5d21e8 in select ()
0x4257c in wait_or_timeout (status=0x0) at http_main.c:2357
0x44318 in standalone_main (argc=392552, argv=0x75800) at...
0x449fc in main (argc=3, argv=0xefffeee4) at http_main.c:4534
Getting a live backtrace on Windows
Unzip the -symbols.zip files (obtained from the Apache download site)
in the root Apache2 directory tree (where bin\, htdocs\, modules\ etc. are
usually found). These .pdb files should unpack alongside the .exe, .dll and .so
binary files they represent; e.g., mod_usertrack.pdb will unpack alongside
mod_usertrack.so.
Invoke drwtsn32 and ensure you are creating a crash dump file, you are
dumping all thread contexts, your log and crash dump paths make sense, and
(depending on the nature of the bug) you pick an appropriate crash dump
type. (Full is quite large, but necessary sometimes for a programmer-type
to load your crash dump into a debugger and begin unwinding exactly what
has happened. Mini is sufficient for your first pass through the process.)
Note that if you previously installed and then uninstalled other
debugging software, you may need to invoke drwtsn32 -i in order to make
Dr Watson your default crash dump tool. This will replace the 'report
problem to MS' dialogs. (Don't do this if you have a full debugger such as
Visual Studio or windbg installed on the machine, unless you back up the
registry value for Debugger under the HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\AeDebug registry tree. Developers using multiple tools
might want to keep copies of their different tools Debugger entries there,
for fast switching.)
Invoke the Task Manager, choose 'Show processes from all users', and
modify View -> Select Columns to include at least the PID and Thread
Count columns. You only need to change this once; Task Manager should keep your
preference.
Now, track down the errant Apache process that is hanging. The parent process
has only about three threads; we don't care about that one. The child worker
process we want has many more threads (a few more than you configured with
the ThreadsPerChild directive). The process name is Apache (for 1.3 and
2.0) or httpd (for 2.2 and 2.4). Make note of the child worker's PID.
Using the {pid} number you noted above, invoke the command
drwtsn32 -p {pid}
Voila, you will find a drwtsn32.log file in your 'log file path'; if
you chose to 'append to existing log file', jump through the 'App:'
sections until you find the one for the process you just killed. The
'Stack Back Trace' sections there point at roughly where each thread is,
which helps identify what the server is doing.
You will note that many threads look identical, almost all of them polling
for the next connection, and you don't care about those. You will want to
see the ones that are deep inside of a request at the time you kill them,
and only the stack back trace entries for those. This can give folks a clue
of where that request is hanging, which handler module picked up the
request, and what filter it might be stuck in.
Debugging intermittent crashes
For situations where a child process is crashing intermittently, the server
must be configured and started such that it produces core dumps which can
be analyzed further.
To ensure that a core dump is written to a directory which is writable by
the user which child processes run as (such as apache), the CoreDumpDirectory
directive must be added to httpd.conf; for example:
CoreDumpDirectory /tmp
Before starting up the server, any process limit on core dump file size
must be removed; for example:
# ulimit -c unlimited
# apachectl start
On some platforms, further steps might be needed to enable core dumps - see the platform-specific notes below (e.g. the Solaris section).
When a child process crashes, a message like the following will be logged
to the error_log:
[Mon Sep 05 13:35:39 2005] [notice] child pid 2027 exit signal Segmentation
fault (11), possible coredump in /tmp
If the text "possible coredump in /tmp" does not appear in the error line,
check that the ulimit was set correctly, that the permissions on the
configured CoreDumpDirectory are suitable, and that any platform-specific
steps (such as the Solaris coreadm settings described below) have been done if needed.
To analyse the core dump, pass the core dump filename on the gdb
command-line, and enter the command bt full at the gdb prompt:
% gdb /usr/local/apache2/bin/httpd /tmp/core.2027
Core was generated by `/usr/local/apache2/bin/httpd -k start'
(gdb) bt full
If attempting to debug a threaded server, for example when using the
worker MPM, use the following gdb command:
(gdb) thread apply all bt full
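With a reasonably recent gdb the same backtrace can be captured non-interactively, which is convenient when attaching it to a bug report; a minimal sketch (paths are the examples used above):

% gdb /usr/local/apache2/bin/httpd /tmp/core.2027 \
      -batch -ex "thread apply all bt full" > backtrace.txt 2>&1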
Using 'truss/trace/strace' to trace system calls and signals
Most Unix-based systems have at least one command for displaying a trace of
system calls and signals as they are accessed by a running process. This
command is called truss on most SVR4-based systems and either trace or
strace on many other systems.
A useful tip for using the truss command on Solaris is the -f option
(often also works with strace ); it tells truss to follow and continue
tracing any child processes forked by the main process. The easiest way to
get a full trace of a server is to do something like:
% truss -f httpd -d /usr/local/apache >& outfile
% egrep '^10698:' outfile
to view just the trace of the process id 10698.
If attempting to truss a threaded server, for example when using the
worker MPM, the truss option -l is very useful as it prints also the
LWP id after the process id. You can use something like
% egrep '^10698/1:' outfile
to view just the trace of the process id 10698 and LWP id 1.
Other useful options for truss are (a combined invocation is sketched after this list):
-a to print all command line parameters used for this executable.
-e to print all environment variables used for this executable.
-d to print timestamps.
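For instance, a sketch that starts the server under truss with all of these options, writing the trace to a file with -o instead of stderr:

% truss -f -l -a -e -d -o trace.out httpd -d /usr/local/apache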
Getting the server to dump core
Strangely enough, sometimes you actually want to force the server to crash
so that you can get a look at some nutty behavior. Normally this can be
done simply by using the gcore command. However, for security reasons,
most Unix systems do not allow a setuid process to dump core, since the
file contents might reveal something that is supposed to be protected in memory.
Here is one way to get a core file from a setuid Apache httpd process on
Solaris, without knowing which httpd child might be the one to die [note:
it is probably easier to use the MaxClients trick in the first section]:
# for pid in `ps -eaf | fgrep httpd | cut -d' ' -f4`; do
    truss -f -l -t\!all -S SIGSEGV -p $pid 2>&1 | egrep SIGSEGV &
  done
The -S option halts each traced process in place upon receipt of the given signal (SIGSEGV in
this case). At this point you can use:
# gcore PID
and then look at the backtrace as discussed above.
Solaris and coredumps
On Solaris, use coreadm to
make setuid() processes actually dump core. By default a setuid() process
does not dump core. This is the reason why httpd servers started as root
with child processes running as a different user (such as apache) do not
coredump even when the CoreDumpDirectory
directive has been set to an appropriate and writable directory and the ulimit
allows a sufficient core file size. Running coreadm with no arguments shows the current settings:
-bash-3.00# coreadm
global core file pattern: /var/core/core.%f.%p.u%u
global core file content: default
init core file pattern: core
init core file content: default
global core dumps: disabled
per-process core dumps: enabled
global setid core dumps: enabled
per-process setid core dumps: enabled
global core dump logging: disabled
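A sketch of enabling set-id core dumps with coreadm (run as root; the core file name pattern is just an example):

# coreadm -e global -e global-setid -e proc-setid -g /var/core/core.%f.%p
# coreadm          # verify the new settings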
Getting and analyzing a TCP packet trace
This is too deep a subject to fully describe in this documentation. Here
are some pointers to useful discussions and tools:
snoop is a packet sniffer that is part of Solaris.
tcpdump is a packet sniffer that is available
for Unix-based systems and Windows. It is part of many free
Unix-based distributions.
Wireshark (formerly Ethereal) is another packet sniffer that is
available for Unix-based systems and Windows. It has a nice GUI and allows
the analysis of the sniffed data.
Various tools for offline analysis of captured TCP dump files are also available.
