Graylog log management: overview, installation, and log collection scripts

## Log management tools: an overview

First, have a look at a list of log-aggregation tools (collection, parsing, visualization) compiled by programmers in China and abroad:

- Elasticsearch - a Lucene-based document store, used mainly for indexing, storing, and analyzing logs.
- Fluentd - log collection and forwarding
- Flume - distributed log collection and aggregation system
- Graylog2 - pluggable log and event analysis server with alerting options
- Heka - stream-processing system that can be used for log aggregation
- Kibana - visualizes logs and time-stamped data
- Logstash - a tool for managing events and logs
- Octopussy - log management solution (visualization / alerting / reporting)

Graylog vs. the ELK stack:

ELK: Logstash -> Elasticsearch -> Kibana
Graylog: Graylog Collector -> Graylog Server (wraps Elasticsearch) -> Graylog Web

I previously tried a Fluentd + Elasticsearch + Kibana stack and found several drawbacks:

1. It cannot handle multi-line logs, such as MySQL slow-query logs or the Java exception stack traces printed by Tomcat/Jetty applications.
2. It cannot keep the original log line; it only stores the line split into fields, so search results are a pile of JSON text that is hard to read.
3. Log lines that do not match the regular expressions are dropped entirely.

Looking for a replacement that solves these three problems, I first found the commercial tool Splunk, billed as "the Google of logs", meaning full-text search over logs. It not only fixes all three issues but also offers attractive features such as highlighted search terms and color-coding by error level. However, the free edition has a 500 MB limit and the paid edition reportedly costs around USD 30,000, so I gave up and kept looking. I finally found Graylog. At first glance it looked like just a syslog collector and did not appeal to me at all, but after digging deeper I realized Graylog is essentially an open-source Splunk. What attracts me to Graylog, in my own summary:

- An all-in-one solution that is easy to install, without the integration issues between three separate systems that ELK has.
- It collects the raw log and lets you add fields such as http_status_code and response_time afterwards.
- You can write your own collection scripts and send logs to the Graylog server with curl/nc in the self-describing GELF format; Fluentd and Logstash both have plugins that emit GELF messages. Rolling your own collector gives a lot of freedom: in practice it is enough to watch the log file's modify events with inotifywait and send the appended lines to the Graylog server with curl/netcat (a minimal sketch appears later, before the nginx collection script).
- Search results are highlighted, just like Google.
- The search syntax is simple, e.g. source:mongo AND response_time_ms:>5000, instead of typing raw Elasticsearch JSON query syntax. Search conditions can also be exported as the equivalent Elasticsearch query JSON, which makes it easy to develop search scripts that call the Elasticsearch REST API directly.

## Graylog illustrated

Graylog open-source edition official site:

A few screenshots from the official site:
1. Architecture diagram
2. Screenshots
3. Deployment diagrams: minimal setup and production setup

## Installing Graylog

A Graylog server installation consists of four parts:

- mongodb
- elasticsearch
- graylog-server
- graylog-web

The environment below is CentOS 6.6 on a server with IP 10.0.0.11 and jre-1.7.0-openjdk already installed.

### mongodb

[root@logserver yum.repos.d]# vim /etc/yum.repos.d/mongodb-org-3.0.repo
[mongodb-org-3.0]
name=MongoDB Repository
baseurl=http://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/
gpgcheck=0
[root@logserver yum.repos.d]# yum install -y mongodb-org
[root@logserver yum.repos.d]# vi /etc/yum.conf
Append at the end:
exclude=mongodb-org,mongodb-org-server,mongodb-org-shell,mongodb-org-mongos,mongodb-org-tools
[root@logserver yum.repos.d]# service mongod start
[root@logserver yum.repos.d]# chkconfig mongod on
[root@logserver yum.repos.d]# vi /etc/security/limits.conf
Append at the end:
[root@logserver ~]# vi /etc/init.d/mongod
Insert before the `ulimit -f unlimited` line:
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
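Disabling transparent huge pages this way matches MongoDB's recommendation. After the restart below you can confirm the kernel picked it up — a quick check (the currently active value is shown in brackets):

cat /sys/kernel/mm/transparent_hugepage/enabled   # expect the bracket around "never"
cat /sys/kernel/mm/transparent_hugepage/defrag    # expect the bracket around "never"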
[root@logserver ~]# /etc/init.d/mongod restart

### elasticsearch

The latest Elasticsearch release is 1.6.0.

[root@logserver ~]# rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
[root@logserver ~]# vi /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-1.5]
name=Elasticsearch repository for 1.5.x packages
baseurl=http://packages.elastic.co/elasticsearch/1.5/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
[root@logserver ~]# yum install elasticsearch
[root@logserver ~]# chkconfig --add elasticsearch
[root@logserver ~]# vi /etc/elasticsearch/elasticsearch.yml
32 cluster.name: graylog
[root@logserver ~]# /etc/init.d/elasticsearch start
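Elasticsearch takes a few seconds to come up. Besides the basic check below, the stock cluster-health API is a quick way to confirm the node formed the `graylog` cluster — a small sketch:

curl -s 'localhost:9200/_cluster/health?pretty'
# expect "cluster_name" : "graylog" and a "green" or "yellow" status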
[root@logserver ~]# curl localhost:9200

### graylog

The latest Graylog release is 1.1.4; download links below:
[root@logserver ~]# wget
[root@logserver ~]# wget
[root@logserver ~]# rpm -ivh graylog-server-1.0.2-1.noarch.rpm
[root@logserver ~]# rpm -ivh graylog-web-1.0.2-1.noarch.rpm
[root@logserver ~]# /etc/init.d/graylog-server start
Starting graylog-server:
Startup failed!
[root@logserver ~]# cat /var/log/graylog-server/server.log
T15:53:14.962+08:00 INFO  [CmdLineTool] Loaded plugins: []
T15:53:15.032+08:00 ERROR [Server] No password secret set. Please define password_secret in your graylog2.conf.
T15:53:15.033+08:00 ERROR [CmdLineTool] Validating configuration file failed - exiting.
[root@logserver ~]# yum install pwgen
[root@logserver ~]# pwgen -N 1 -s 96
zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
[root@logserver ~]# echo -n 123456 | sha256sum
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
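If you script the installation, the two generated values can be written straight into server.conf instead of editing it by hand — a minimal sketch, assuming the stock key names and file path used below:

secret=$(pwgen -N 1 -s 96)
sha=$(echo -n 123456 | sha256sum | awk '{print $1}')
sed -i "s|^password_secret =.*|password_secret = $secret|" /etc/graylog/server/server.conf
sed -i "s|^root_password_sha2 =.*|root_password_sha2 = $sha|" /etc/graylog/server/server.conf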
[root@logserver ~]# vi /etc/graylog/server/server.conf
11 password_secret = zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
22 root_password_sha2 = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
152 elasticsearch_cluster_name = graylog
[root@logserver ~]# /etc/init.d/graylog-server restart
Startup succeeded!
[root@logserver ~]# /etc/init.d/graylog-web start
Starting graylog-web:
Startup failed!
[root@logserver ~]# cat /var/log/graylog-web/application.log
T15:53:22.960+08:00 - [ERROR] - from lib.Global in main
Please configure application.secret in your conf/graylog-web-interface.conf
T16:25:55.343+08:00 - [ERROR] - from lib.Global in main
Please configure application.secret in your conf/graylog-web-interface.conf
[root@logserver ~]# pwgen -N 1 -s 96
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
[root@logserver ~]# vi /etc/graylog/web/web.conf
2 graylog2-server.uris="http://127.0.0.1:12900/"
12 application.secret="yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy"
Note: the graylog2-server.uris value in /etc/graylog/web/web.conf must be identical to rest_listen_uri in /etc/graylog/server/server.conf:
36 rest_listen_uri = http://127.0.0.1:12900/
[root@logserver ~]# /etc/init.d/graylog-web restart

Open http://10.0.0.11:9000/ in a browser to reach the Graylog login page. The administrator account/password is admin/123456.

## Adding log inputs

Log in as admin at http://10.0.0.11:9000/.

4.1 Go to System > Inputs > Inputs in Cluster > Raw/Plaintext TCP | Launch new input, name it "tcp 5555", and finish creating it. Then, on any Linux machine with nc installed, run:

echo `date` | nc 10.0.0.11 5555

On the home page after logging in at http://10.0.0.11:9000/, click the green search button in the third row; a new message shows up:

Timestamp      Source       Message
08:49:15.280   10.0.0.157   Fri May 22 16:48:28 CST 2015

This confirms the installation succeeded!

4.2 Go to System > Inputs > Inputs in Cluster > GELF HTTP | Launch new input, name it "http 12201", and finish creating it. Then, on any Linux machine with curl installed, run (the target URL follows the GELF-via-HTTP convention of POSTing to /gelf on the input's port):

curl -XPOST http://10.0.0.11:12201/gelf -p0 -d '{"short_message":"Hello there", "host":"example.org", "facility":"test", "_foo":"bar"}'

On the home page after logging in, click the green search button again; a new message shows up:

Timestamp      Source       Message
08:49:15.280   10.0.0.157   Hello there

This confirms the GELF HTTP input works!
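A useful consequence of GELF: multi-line payloads survive as a single message, which is exactly what the Fluentd pipeline above could not do with Java stack traces. A sketch against the same GELF HTTP input (the exception text is made up; the \n escapes inside the JSON string carry the line breaks):

curl -XPOST http://10.0.0.11:12201/gelf -p0 -d '{"short_message":"NullPointerException in OrderService", "full_message":"java.lang.NullPointerException\n\tat com.example.OrderService.place(OrderService.java:42)", "host":"app01", "level":3}'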
## Timezone and highlighting settings

Timezone for the admin account:

[root@logserver ~]# vi /etc/graylog/server/server.conf
30 root_timezone = Asia/Shanghai
[root@logserver ~]# /etc/init.d/graylog-server restart

Default timezone for other accounts:

[root@logserver ~]# vi /etc/graylog/web/web.conf
18 timezone="Asia/Shanghai"
[root@logserver ~]# /etc/init.d/graylog-web restart

Enable highlighting of search results:

[root@logserver ~]# vi /etc/graylog/server/server.conf
147 allow_highlighting = true
[root@logserver ~]# /etc/init.d/graylog-server restart

## Tweaking CSS colors (supplementary)

[root@logserver ~]# cp /usr/share/graylog-web/lib/graylog-web-interface.graylog-web-interface-1.1.4-assets.jar .
[root@logserver ~]# mkdir jar_tmp
[root@logserver ~]# cd jar_tmp
[root@logserver jar_tmp]$ jar xvf ../graylog-web-interface.graylog-web-interface-1.1.4-assets.jar
[root@logserver jar_tmp]$ vi public/stylesheets/graylog2.less
Before (lines 2347-2348 of graylog2.less):

2347 font-family:
2348 color: #16ace3;

After:

2347 /*font-family:*/
2348 /*color: #16ace3;*/
[root@logserver jar_tmp]$ jar cvfm graylog-web-interface.graylog-web-interface-1.1.4-assets.jar META-INF/MANIFEST.MF .
[root@logserver jar_tmp]$ sudo /etc/init.d/graylog-web stop
[root@logserver jar_tmp]$ cd /usr/share/graylog-web/lib/
[root@logserver lib]$ sudo mv graylog-web-interface.graylog-web-interface-1.1.4-assets.jar graylog-web-interface.graylog-web-interface-1.1.4-assets.jar.origin
[root@logserver lib]$ sudo cp ~/jar_tmp/graylog-web-interface.graylog-web-interface-1.1.4-assets.jar .
[root@logserver lib]$ sudo /etc/init.d/graylog-web start

7. Moving the data directories
-------------

Move the Elasticsearch data directory:

[root@logserver ~]# sudo /etc/init.d/elasticsearch stop
[root@logserver ~]# sudo cp -rp /var/lib/elasticsearch/ /data/
[root@logserver ~]# sudo vi /etc/sysconfig/elasticsearch
16 DATA_DIR=/data/elasticsearch
[root@logserver ~]# sudo /etc/init.d/elasticsearch start

Move the MongoDB data directory:

[root@logserver ~]# sudo /etc/init.d/mongod stop
[root@logserver ~]# sudo cp -rp /var/lib/mongo /data/
[root@logserver ~]# sudo vi /etc/mongod.conf
13 dbpath=/var/lib/mongo -> 13 dbpath=/data/mongo
[root@logserver ~]# sudo /etc/init.d/mongod start
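To confirm both services picked up the new paths, you can ask them directly — a sketch using the standard `_nodes/settings` Elasticsearch API and the mongo shell's `db.serverCmdLineOpts()` helper:

curl -s 'localhost:9200/_nodes/settings?pretty' | grep data     # look for the data path entry
mongo --quiet --eval 'printjson(db.serverCmdLineOpts())' | grep dbpath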
# Sending logs to the Graylog server

## Sending over HTTP
<http://docs.graylog.org/en/1.1/pages/sending_data.html#gelf-via-http>

curl -XPOST http://10.0.0.11:12201/gelf -p0 -d '{"short_message":"Hello there", "host":"example.org", "facility":"test", "_foo":"bar"}'

## Sending over TCP

<http://docs.graylog.org/en/1.1/pages/sending_data.html#raw-plaintext-inputs>

echo "hello, graylog" | nc graylog.example.org 5555
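The minimal form of the collector idea from the overview — watch for modify events with inotifywait and ship whatever bytes were appended — fits in a few lines. A sketch (the log path is hypothetical; assumes the raw TCP input on 5555 and the inotify-tools package):

#!/bin/bash
log=/var/log/myapp.log                 # hypothetical application log
offset=$(stat -c%s "$log")
while inotifywait -qe modify "$log" > /dev/null; do
    size=$(stat -c%s "$log")
    # send only the newly appended bytes to the raw TCP input
    tail -c $((size - offset)) "$log" | nc 10.0.0.11 5555
    offset=$size
done

The gather-nginx-log.sh script below is the same loop, plus field extraction and GELF formatting.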
## Collecting nginx logs with inotifywait

gather-nginx-log.sh:

#!/bin/bash
app=nginx
node=$HOSTNAME
log_file=/var/log/nginx/nginx.log
graylog_server_ip=10.0.0.11
graylog_server_port=12201
while inotifywait -e modify $log_file; do
    last_size=$(cat ${app}.size)
    curr_size=$(stat -c%s $log_file)
    echo $curr_size > ${app}.size
    count=$(echo "$curr_size-$last_size" | bc)
    python read_log.py $log_file ${last_size} $count | sed 's/"/\\\"/g' > ${app}.new_lines
    while read line; do
        if echo "$line" | grep "^20[0-9][0-9]-[0-1][0-9]-[0-3][0-9]" > /dev/null; then
            seconds=$(echo "$line" | cut -d ' ' -f 6)
            spend_ms=$(echo "${seconds}*1000/1" | bc)
            http_status=$(echo "$line" | cut -d ' ' -f 2)
            echo "http_status -- $http_status"
            prefix_number=${http_status:0:1}
            if [ "$prefix_number" == "5" ]; then
                level=3 # ERROR
            elif [ "$prefix_number" == "4" ]; then
                level=4 # WARNING
            elif [ "$prefix_number" == "3" ]; then
                level=5 # NOTICE
            elif [ "$prefix_number" == "2" ]; then
                level=6 # INFO
            elif [ "$prefix_number" == "1" ]; then
                level=7 # DEBUG
            fi
            echo "level -- $level"
            curl -XPOST http://${graylog_server_ip}:${graylog_server_port}/gelf -p0 -d "{\"short_message\":\"$line\", \"host\":\"${app}\", \"level\":${level}, \"_node\":\"${node}\", \"_spend_msecs\":${spend_ms}, \"_http_status\":${http_status}}"
            echo "gathered -- $line"
        fi
    done < ${app}.new_lines
done
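The script relies on the small helper below, which prints `count` bytes starting at byte offset `print_from`; a hypothetical invocation reading 512 bytes from offset 1024:

python read_log.py /var/log/nginx/nginx.log 1024 512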
read_log.py:

#!/usr/bin/python
# coding=utf-8
import sys
import os

if len(sys.argv) < 4:
    print "Usage: %s /path/of/log/file print_from count" % (sys.argv[0])
    print "Example: %s /var/log/syslog" % (sys.argv[0])
    sys.exit(1)
filename = sys.argv[1]
if not os.path.isfile(filename):
    print "%s not existing!!!" % (filename)
    sys.exit(1)
filesize = os.path.getsize(filename)
position = int(sys.argv[2])
if filesize < position:
    print "log file may be cut by logrotate.d (position %d > filesize %d), print log from begin!" % (position, filesize)
    position = 0
count = int(sys.argv[3])
fo = open(filename, "r")
fo.seek(position, 0)
content = fo.read(count)
print content.strip()
# Close the opened file
fo.close()

## Collecting iotop output every 5 seconds to find processes reading/writing the disk heavily

#!/bin/bash
app=iotop
node=$HOSTNAME
graylog_server_ip=10.0.0.11
graylog_server_port=12201
while true; do
    sudo /usr/sbin/iotop -b -o -t -k -q -n2 | sed 's/"/\\\"/g' > /dev/shm/graylog_client.${app}.new_lines
    while read line; do
        if echo "$line" | grep "^[0-2][0-9]:[0-5][0-9]:[0-5][0-9]" > /dev/null; then
            read -a WORDS <<< $line
            epoch_seconds=$(date --date="${WORDS[0]}" +%s.%N)
            pid=${WORDS[1]}
            read_float_kps=${WORDS[4]}
            read_int_kps=${read_float_kps%.*}
            write_float_kps=${WORDS[6]}
            write_int_kps=${write_float_kps%.*}
            command=${WORDS[12]}
            if [ "$command" == "bash" ] && (( ${#WORDS[*]} > 13 )); then
                pname=${WORDS[13]}
            elif [ "$command" == "java" ] && (( ${#WORDS[*]} > 13 )); then
                arg0=${WORDS[13]}
                pname=${arg0#*=}
            else
                pname=$command
            fi
            curl --connect-timeout 1 -s -XPOST http://${graylog_server_ip}:${graylog_server_port}/gelf -p0 -d "{\"timestamp\":$epoch_seconds, \"short_message\":\"${line::200}\", \"full_message\":\"$line\", \"host\":\"${app}\", \"_node\":\"${node}\", \"_pid\":${pid}, \"_read_kps\":${read_int_kps}, \"_write_kps\":${write_int_kps}, \"_pname\":\"${pname}\"}"
        fi
    done < /dev/shm/graylog_client.${app}.new_lines
    sleep 4
done
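With the extra GELF fields in place, finding the disk hogs is a one-line query in the web UI (Graylog stores `_`-prefixed additional fields without the prefix; the 10000 kB/s threshold is arbitrary):

source:iotop AND write_kps:>10000

The same query can be scripted — a sketch against the REST API's universal relative-search endpoint:

curl -u admin:123456 'http://10.0.0.11:12900/search/universal/relative?query=source%3Aiotop%20AND%20write_kps%3A%3E10000&range=3600&pretty=true'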
## Collecting Android app logs

device.env:

export device=4b13c85c
export app=com.tencent.mm
export filter="( I/ServerAsyncTask2(| W/| E/)"
export graylog_server_ip=10.0.0.11
export graylog_server_port=12201

adblog.sh:

#!/bin/bash
. ./device.env
adb -s $device logcat -v time "*:I" | tee -a adb.log

ga-androidapp-log.sh:

#!/bin/bash
. ./device.env
log_file=./adb.log
node=$device
if [ ! -f $log_file ]; then
    echo "$log_file not exist!!"
    echo 0 > ${app}.size
    exit 1
fi
if [ ! -f ${app}.size ]; then
    curr_size=$(stat -c%s $log_file)
    echo $curr_size > ${app}.size
fi
while inotifywait -qe modify $log_file > /dev/null; do
    last_size=$(cat ${app}.size)
    curr_size=$(stat -c%s $log_file)
    echo $curr_size > ${app}.size
    pids=$(./get_pids.py $app $device)
    if [ "$pids" == "" ]; then
        continue
    fi
    count=$(echo "$curr_size-$last_size" | bc)
    python read_log.py $log_file ${last_size} $count | grep "$pids" | sed 's/"/\\\"/g' | sed 's/\t/    /g' > ${app}.new_lines
    echo "${app}.new_lines lines: $(wc -l ${app}.new_lines)"
    while read line; do
        if echo "$line" | grep "$filter" > /dev/null; then
            priority=${line:19:1}
            if [ "$priority" == "F" ]; then
                level=1 # ALERT
            elif [ "$priority" == "E" ]; then
                level=3 # ERROR
            elif [ "$priority" == "W" ]; then
                level=4 # WARNING
            elif [ "$priority" == "I" ]; then
                level=6 # INFO
            fi
            #echo "level -- $level"
            curl -XPOST http://${graylog_server_ip}:${graylog_server_port}/gelf -p0 -d "{\"short_message\":\"$line\", \"host\":\"${app}\", \"level\":${level}, \"_node\":\"${node}\"}"
            echo "GATHERED -- $line"
        else
            : #echo "ignored -- $line"
        fi
    done < ${app}.new_lines
done

get_pids.py:

#!/usr/bin/python
import sys
import os
import commands

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print sys.argv[0] + " packageName device"
        sys.exit()
    device = sys.argv[2]
    cmd = "adb -s " + device + " shell ps | grep " + sys.argv[1] + " | cut -c11-15"
    output = commands.getoutput(cmd)
    if output == "":
        sys.exit()
    originpids = output.split("\n")
    strippids = map(lambda pid: int(pid, 10), originpids)
    pids = map(lambda pid: "%5d" % pid, strippids)
    pattern = "((" + ")|(".join(pids) + "))"
    print pattern
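get_pids.py turns the app's process list into a grep pattern of fixed-width PIDs, so only logcat lines from those processes survive the filter. A hypothetical run (the PID values are invented):

$ ./get_pids.py com.tencent.mm 4b13c85c
(( 1234)|(21042))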
Graylog startup scripts
=============

[root@logserver init.d]$ cat /etc/init.d/graylog-server

#! /bin/sh
#
# graylog-server  Starts/stop the "graylog-server" daemon
#
# chkconfig:   - 95 5
# description: Runs the graylog-server daemon
#
### BEGIN INIT INFO
# Provides:          graylog-server
# Required-Start:    $network $named $remote_fs $syslog
# Required-Stop:     $network $named $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Graylog Server
# Description:       Graylog Server - Search your logs, create charts, send reports and be alerted when something happens.
### END INIT INFO

# Author: Lee Briggs <lee@leebriggs.co.uk>
# Contributor: Sandro Roth <sandro.>
# Contributor: Bernd Ahlers <bernd@torch.sh>

# Source function library.
. /etc/rc.d/init.d/functions

RETVAL=0
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DESC="Graylog Server"
NAME=graylog-server
JAR_FILE=/usr/share/graylog-server/graylog.jar
JAVA=/usr/bin/java
PID_DIR=/var/run/graylog-server
PID_FILE=$PID_DIR/$NAME.pid
JAVA_ARGS="-jar -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dlog4j.configuration=file:///etc/graylog/server/log4j.xml $JAR_FILE server -p $PID_FILE -f /etc/graylog/server/server.conf"
SCRIPTNAME=/etc/init.d/$NAME
LOCKFILE=/var/lock/subsys/$NAME
GRAYLOG_SERVER_USER=graylog
GRAYLOG_SERVER_JAVA_OPTS=""

# Pull in sysconfig settings
[ -f /etc/sysconfig/${NAME} ] && . /etc/sysconfig/${NAME}

# Exit if the package is not installed
[ -e "$JAR_FILE" ] || exit 0
[ -x "$JAVA" ] || exit 0

start() {
    echo -n $"Starting ${NAME}: "
    install -d -m 755 -o $GRAYLOG_SERVER_USER -g $GRAYLOG_SERVER_USER -d $PID_DIR
    daemon --check $JAVA --pidfile=${PID_FILE} --user=${GRAYLOG_SERVER_USER} \
        "$GRAYLOG_COMMAND_WRAPPER $JAVA $GRAYLOG_SERVER_JAVA_OPTS $JAVA_ARGS $GRAYLOG_SERVER_ARGS &"
    RETVAL=$?
    sleep 2
    [ $RETVAL = 0 ] && touch ${LOCKFILE}
    echo
    return $RETVAL
}

stop() {
    echo -n $"Stopping ${NAME}: "
    killproc -p ${PID_FILE} -d 10 $JAVA
    RETVAL=$?
    [ $RETVAL = 0 ] && rm -f ${PID_FILE} && rm -f ${LOCKFILE}
    echo
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p ${PID_FILE} $NAME
        RETVAL=$?
        ;;
    restart|force-reload)
        stop
        start
        ;;
    *)
        N=/etc/init.d/${NAME}
        echo "Usage: $N {start|stop|status|restart|force-reload}" >&2
        RETVAL=2
        ;;
esac
exit $RETVAL

[root@logserver init.d]$ cat /etc/init.d/graylog-web

#! /bin/sh
#
# graylog-web  Starts/stop the "graylog-web" application
#
# chkconfig:   - 99 1
# description: Runs the graylog-web application
#
### BEGIN INIT INFO
# Provides:          graylog-web
# Required-Start:    $network $named $remote_fs $syslog
# Required-Stop:     $network $named $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Graylog Web
# Description:       Graylog Web - Search your logs, create charts, send reports and be alerted when something happens.
### END INIT INFO

# Author: Lee Briggs <lee@leebriggs.co.uk>
# Contributor: Bernd Ahlers <bernd@torch.sh>

# Some default settings.
GRAYLOG_WEB_HTTP_ADDRESS="0.0.0.0"
GRAYLOG_WEB_HTTP_PORT="9000"
GRAYLOG_WEB_USER="graylog-web"

# Source function library.
. /etc/rc.d/init.d/functions

RETVAL=0
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DESC="Graylog Web"
NAME=graylog-web
CMD=/usr/share/graylog-web/bin/graylog-web-interface
PID_FILE=/var/lib/graylog-web/application.pid
CONF_FILE=/etc/graylog/web/web.conf
SCRIPTNAME=/etc/init.d/$NAME
LOCKFILE=/var/lock/subsys/$NAME
RUN=yes

# Pull in sysconfig settings
[ -f /etc/sysconfig/graylog-web ] && . /etc/sysconfig/graylog-web

# Exit if the package is not installed
[ -e "$CMD" ] || exit 0

start() {
    echo -n $"Starting ${NAME}: "
    daemon --user=$GRAYLOG_WEB_USER --pidfile=${PID_FILE} \
        "nohup $GRAYLOG_COMMAND_WRAPPER $CMD -Dconfig.file=${CONF_FILE} \
        -Dlogger.file=/etc/graylog/web/logback.xml \
        -Dpidfile.path=$PID_FILE \
        -Dhttp.address=$GRAYLOG_WEB_HTTP_ADDRESS \
        -Dhttp.port=$GRAYLOG_WEB_HTTP_PORT \
        $GRAYLOG_WEB_JAVA_OPTS $GRAYLOG_WEB_ARGS > /var/log/graylog-web/console.log 2>&1 &"
    RETVAL=$?
    sleep 2
    [ $RETVAL = 0 ] && touch ${LOCKFILE}
    echo
    return $RETVAL
}

stop() {
    echo -n $"Stopping ${NAME}: "
    killproc -p ${PID_FILE} -d 10 $CMD
    RETVAL=$?
    [ $RETVAL = 0 ] && rm -f ${PID_FILE} && rm -f ${LOCKFILE}
    echo
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p ${PID_FILE} $NAME
        RETVAL=$?
        ;;
    restart|force-reload)
        stop
        start
        ;;
    *)
        N=/etc/init.d/${NAME}
        echo "Usage: $N {start|stop|status|restart|force-reload}" >&2
        RETVAL=2
        ;;
esac
exit $RETVAL

[root@logserver init.d]$ cat /etc/graylog/server/log4j.xml
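To have both daemons start at boot, the init scripts can be registered with chkconfig, the same way mongod was enabled above — a sketch:

[root@logserver ~]# chkconfig --add graylog-server && chkconfig graylog-server on
[root@logserver ~]# chkconfig --add graylog-web && chkconfig graylog-web on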
server.conf:

[root@logserver init.d]$ cat /etc/graylog/server/server.conf

# If you are running more than one instances of graylog2-server you have to select one of these
# instances as master. The master will perform some periodical tasks that non-masters won't perform.
is_master = true

# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting graylog2-server from init scripts or similar.
node_id_file = /etc/graylog/server/node-id

# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
# Generate one by using for example: pwgen -N 1 -s 96
password_secret = Us5hAey50eHzfJSqrnhUnLv8k8I2QV1JbPcNLVRtZ2lZdLF9b5G2jSYflZMc41IaoD4BEH59Zi9Gkplq0nhWvtxUrLFjsyqe

# The default root user is named 'admin'
root_username = admin

# You MUST specify a hash password for the root user (which you only need to initially set up the
# system and in case you lose connectivity to your authentication backend). This password cannot be
# changed using the API or via the web interface. If you need to change it, modify it in this file.
# Create one by using for example: echo -n yourpassword | shasum -a 256
# and put the resulting hash value into the following line
root_password_sha2 = db81c8c3eda25f1938b6c

# The email address of the root user. Default is empty
root_email = ""

# The time zone setting of the root user. Default is UTC
root_timezone = Asia/Shanghai

# Set plugin directory here (relative or absolute)
plugin_dir = /usr/share/graylog-server/plugin

# REST API listen URI. Must be reachable by other graylog2-server nodes if you run a cluster.
rest_listen_uri = http://127.0.0.1:12900/

# REST API transport address. Defaults to the value of rest_listen_uri. Exception: If rest_listen_uri
# is set to a wildcard IP address (0.0.0.0) the first non-loopback IPv4 system address is used.
# If set, this will be promoted in the cluster discovery APIs, so other nodes may try to connect on
# this address and it is used to generate URLs addressing entities in the REST API. (see rest_listen_uri)
# You will need to define this, if your Graylog server is running behind a HTTP proxy that is
# rewriting the scheme, host name or URI.
# rest_transport_uri =

# Enable CORS headers for REST API. This is necessary for JS-clients accessing the server directly.
# If these are disabled, modern browsers will not be able to retrieve resources from the server.
# This is disabled by default. Uncomment the next line to enable it.
rest_enable_cors = true

# Enable GZIP support for REST API. This compresses API responses and therefore helps to reduce
# overall round trip times. This is disabled by default. Uncomment the next line to enable it.
rest_enable_gzip = true

# Enable HTTPS support for the REST API. This secures the communication with the REST API with
# TLS to prevent request forgery and eavesdropping. This is disabled by default.
# Uncomment the next line to enable it.
# rest_enable_tls = true

# The X.509 certificate file to use for securing the REST API.
# rest_tls_cert_file = /path/to/graylog2.crt

# The private key to use for securing the REST API.
# rest_tls_key_file = /path/to/graylog2.key

# The password to unlock the private key used for securing the REST API.
# rest_tls_key_password = secret

# The maximum size of a single HTTP chunk in bytes.
rest_max_chunk_size = 8192

# The maximum size of the HTTP request headers in bytes.
rest_max_header_size = 8192

# The maximal length of the initial HTTP/1.1 line in bytes.
rest_max_initial_line_length = 4096

# The size of the execution handler thread pool used exclusively for serving the REST API.
rest_thread_pool_size = 16

# The size of the worker thread pool used exclusively for serving the REST API.
rest_worker_threads_max_pool_size = 16

# Embedded Elasticsearch configuration file; pay attention to the working directory of the server,
# maybe use an absolute path here
# elasticsearch_config_file = /etc/graylog/server/elasticsearch.yml

# Graylog will use multiple indices to store documents in. You can configure the strategy it uses
# to determine when to rotate the currently active write index. It supports multiple rotation strategies:
# - "count" of messages per index, use elasticsearch_max_docs_per_index below to configure
# - "size" per index, use elasticsearch_max_size_per_index below to configure
# valid values are "count", "size" and "time", default is "count".
rotation_strategy = count

# (Approximate) maximum number of documents in an Elasticsearch index before a new index is being
# created, also see no_retention and elasticsearch_max_number_of_indices.
# Configure this if you used 'rotation_strategy = count' above.
elasticsearch_max_docs_per_index =

# (Approximate) maximum size in bytes per Elasticsearch index on disk before a new index is being
# created, also see no_retention and elasticsearch_max_number_of_indices. Default is 1GB.
# Configure this if you used 'rotation_strategy = size' above.
# elasticsearch_max_size_per_index =

# (Approximate) maximum time before a new Elasticsearch index is being created, also see
# no_retention and elasticsearch_max_number_of_indices. Default is 1 day.
# Configure this if you used 'rotation_strategy = time' above.
# Please note that this rotation period does not look at the time specified in the received
# messages, but is using the real clock value to decide when to rotate the index!
# Specify the time using a duration and a suffix indicating which unit you want:
#   1w  = 1 week
#   1d  = 1 day
#   12h = 12 hours
# Permitted suffixes are: d for day, h for hour, m for minute, s for second.
# elasticsearch_max_time_per_index = 1d

# Disable checking the version of Elasticsearch for being compatible with this Graylog release.
# WARNING: Using Graylog with unsupported and untested versions of Elasticsearch may lead to data loss!
elasticsearch_disable_version_check = true

# Disable message retention on this node, i.e. disable Elasticsearch index rotation.
no_retention = false

# How many indices do you want to keep?
elasticsearch_max_number_of_indices = 20

# Decide what happens with the oldest indices when the maximum number of indices is reached.
# The following strategies are available:
# - delete  # Deletes the index completely (Default)
# - close   # Closes the index and hides it from the system. Can be re-opened later.
retention_strategy = delete

# How many Elasticsearch shards and replicas should be used per index?
# Note that this only applies to newly created indices.
elasticsearch_shards = 4
elasticsearch_replicas = 0

# Prefix for all Elasticsearch indices and index aliases managed by Graylog.
elasticsearch_index_prefix = graylog2

# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and
# should only be enabled with care. See also:
allow_leading_wildcard_searches = false

# Do you want to allow searches to be highlighted? Depending on the size of your messages this can
# be memory hungry and should only be enabled after making sure your Elasticsearch cluster has
# enough memory.
allow_highlighting = true

# Settings to be passed to Elasticsearch's client (overriding those in the provided
# elasticsearch_config_file). This must be the same as for your Elasticsearch cluster:
elasticsearch_cluster_name = graylog

# You could also leave this out, but it makes it easier to identify the graylog2 client instance:
elasticsearch_node_name = graylog2-server

# We don't want the graylog2 server to store any data, or be master node:
elasticsearch_node_master = false
elasticsearch_node_data = false

# Use a different port if you run multiple Elasticsearch nodes on one machine:
elasticsearch_transport_tcp_port = 9350

# We don't need to run the embedded HTTP server here:
elasticsearch_http_enabled = false

# elasticsearch_discovery_zen_ping_multicast_enabled = false
# elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.1.203:9300

# Change the following setting if you are running into problems with timeouts during Elasticsearch
# cluster discovery. The setting is specified in milliseconds, the default is 5000ms (5 seconds).
elasticsearch_cluster_discovery_timeout = 5000

# The following settings allow to change the bind addresses for the Elasticsearch client in
# graylog2. These settings are empty by default, letting Elasticsearch choose automatically;
# override them here or in the 'elasticsearch_config_file' if you need to bind to a special address.
elasticsearch_network_host =
elasticsearch_network_bind_host =
elasticsearch_network_publish_host =

# The total amount of time discovery will look for other Elasticsearch nodes in the cluster
# before giving up and declaring the current node master.
elasticsearch_discovery_initial_state_timeout = 3s

# Analyzer (tokenizer) to use for message and full_message field. The "standard" filter usually is
# a good idea. All supported analyzers are: standard, simple, whitespace, stop, keyword, pattern,
# language, snowball, custom. Note that this setting only takes effect on newly created indices.
elasticsearch_analyzer = standard

# Batch size for the Elasticsearch output. This is the maximum (!) number of messages the
# Elasticsearch output module will get at once and write to Elasticsearch in a batch call. If the
# configured batch size has not been reached within output_flush_interval seconds, everything that
# is available will be flushed at once. Remember that every outputbuffer processor manages its own
# batch and performs its own batch write calls ("outputbuffer_processors" variable).
output_batch_size = 500

# Flush interval (in seconds) for the Elasticsearch output. This is the maximum amount of time
# between two batches of messages written to Elasticsearch. It is only effective at all if your
# minimum number of messages for this time period is less than output_batch_size * outputbuffer_processors.
output_flush_interval = 1

# As stream outputs are loaded only on demand, an output which is failing to initialize will be
# tried over and over again. To prevent this, the following configuration options define after how
# many faults an output will not be tried again for an also configurable amount of seconds.
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30

# The number of parallel running processors. Raise this number if your buffers are filling up.
processbuffer_processors = 5
outputbuffer_processors = 3

outputbuffer_processor_keep_alive_time = 5000
outputbuffer_processor_threads_core_pool_size = 3
outputbuffer_processor_threads_max_pool_size = 30

# UDP receive buffer size for all message inputs (e.g. SyslogUDPInput).
udp_recvbuffer_sizes = 1048576

# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
# Possible types:
# - yielding: compromise between performance and CPU usage.
# - sleeping: compromise between performance and CPU usage; latency spikes can occur after quiet periods.
# - blocking: high throughput, low latency, higher CPU usage.
# - busy_spinning: avoids syscalls which could introduce latency jitter; best when threads can be
#   bound to specific CPU cores.
processor_wait_strategy = blocking

# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
# Start server with --statistics flag to see buffer utilization.
# Must be a power of 2. (512, , ...)
ring_size = 65536

inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking

# Enable the disk based message journal.
message_journal_enabled = true

# The directory which will be used to store the message journal. The directory must be exclusively
# used by Graylog and must not contain any other files than the ones created by Graylog itself.
message_journal_dir = /var/lib/graylog-server/journal

# The journal holds messages before they could be written to Elasticsearch: for a maximum of
# 12 hours or 5 GB, whichever happens first. During normal operation the journal will be smaller.
message_journal_max_age = 12h
message_journal_max_size = 5gb
message_journal_flush_age = 1m
message_journal_flush_interval = 1000000
message_journal_segment_age = 1h
message_journal_segment_size = 100mb

# Number of threads used exclusively for dispatching internal events. Default is 2.
async_eventbus_processors = 2

# EXPERIMENTAL: Dead Letters
# Every failed indexing attempt is logged by default and made visible in the web-interface. You can
# enable the experimental dead letters feature to write every message that was not successfully
# indexed into the MongoDB "dead_letters" collection to make sure that you never lose a message.
# The actual writing of dead letters should work fine already but it is not heavily tested yet and
# will get more features in future releases.
dead_letters_enabled = false

# How many seconds to wait between marking node as DEAD for possible load balancers and starting
# the actual shutdown process. Set to 0 if you have no status checking load balancers in front.
lb_recognition_period_seconds = 3

# Every message is matched against the configured streams and it can happen that a stream contains
# rules which take an unusual amount of time to run, for example if it's using regular expressions
# that perform excessive backtracking. This will impact the processing of the entire server. To
# keep such misbehaving stream rules from impacting other streams, Graylog limits the execution
# time for each stream. The default values are noted below, the timeout is in milliseconds. If the
# stream matching for one stream took longer than the timeout value, and this happened more than
# "max_faults" times, that stream is disabled and a notification is shown in the web interface.
stream_processing_timeout = 2000
stream_processing_max_faults = 3

# Length of the interval in seconds in which the alert conditions for all streams should be
# checked and alarms are being sent.
alert_check_interval = 60

# Since 0.21 the graylog2 server supports pluggable output modules. This means a single message
# can be written to multiple outputs. The next setting defines the timeout for a single output
# module, including the default output module where all messages end up.
# Time in milliseconds to wait for all message outputs to finish writing a single message.
output_module_timeout = 10000

# Time in milliseconds after which a detected stale master node is being rechecked on startup.
stale_master_timeout = 2000

# Time in milliseconds which Graylog is waiting for all threads to stop on shutdown.
shutdown_timeout = 30000

# MongoDB Configuration
mongodb_useauth = false
# mongodb_user = grayloguser
# mongodb_password = 123
mongodb_host = 127.0.0.1
# mongodb_replica_set = localhost:27017,localhost:27018,localhost:27019
mongodb_database = graylog2
mongodb_port = 27017

# Raise this according to the maximum connections your MongoDB server can handle if you encounter
# MongoDB connection problems.
mongodb_max_connections = 100

# Number of threads allowed to be blocked by MongoDB connections multiplier. Default: 5
# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5, then
# 500 threads can block. More than that and an exception will be thrown.
mongodb_threads_allowed_to_block_multiplier = 5

# Drools Rule File (use to rewrite incoming log messages). See:
# rules_file = /etc/graylog/server/rules.drl

# Email transport
transport_email_enabled = false
transport_email_hostname =
transport_email_port = 587
transport_email_use_auth = true
transport_email_use_tls = true
transport_email_use_ssl = true
transport_email_auth_username =
transport_email_auth_password = secret
transport_email_subject_prefix = [graylog2]
transport_email_from_email =

# Specify and uncomment this if you want to include links to the stream in your stream alert mails.
# This should define the fully qualified base url to your web interface exactly the same way as it
# is accessed by your users.
# transport_email_web_interface_url =

# HTTP proxy for outgoing HTTP calls
http_proxy_uri =

# Disable the optimization of Elasticsearch indices after index cycling. This may take some load
# from Elasticsearch on heavily used systems with large indices, but it will decrease search
# performance. The default is to optimize cycled indices.
disable_index_optimization = true

# Optimize the index down to <= index_optimization_max_num_segments. A higher number may take some
# load from Elasticsearch on heavily used systems with large indices, but it will decrease search
# performance. The default is 1.
index_optimization_max_num_segments = 1

# Disable the index range calculation on all open/available indices and only calculate the range
# for the latest index. This may speed up index cycling on systems with large indices but it might
# lead to wrong search results in regard to the time range of the messages (i.e. messages within a
# certain range may not be found). The default is to calculate the time range on all
# open/available indices.
disable_index_range_calculation = true

# The threshold of the garbage collection runs. If GC runs take longer than this threshold, a
# system notification will be generated to warn the administrator about possible problems with the
# system. Default is 1 second.
gc_warning_threshold = 1s

# Connection timeout for a configured LDAP server (e.g. ActiveDirectory) in milliseconds.
ldap_connection_timeout = 2000

groovy_shell_enable = false
groovy_shell_port = 6789

# Enable collection of Graylog-related metrics into MongoDB
enable_metrics_collection = false

# Disable the use of SIGAR for collecting system stats
disable_sigar = false

[root@logserver init.d]$ cat /etc/graylog/web/web.conf

# graylog2-server REST URIs (one or more, comma separated). For example: ""
graylog2-server.uris="http://127.0.0.1:12900/"

# Learn how to configure custom logging in the documentation:

# Secret key
# ~~~~~~~~~~
# The secret key is used to secure cryptographics functions. Set this to a long and randomly
# generated string. If you deploy your application to several instances be sure to use the same key!
# Generate for example with: pwgen -N 1 -s 96
application.secret="Vio48oiufs4TD6XBN0PXZT2FvPmfs1L3BvbByvo7Pwwz7mUyR0HUlMspNxdQ8dKdHpSwmh67cbkISlPs9cmzqTkJXVHFrI9P"

# Web interface timezone
# Graylog stores all timestamps in UTC. To properly display times, set the default timezone of the
# interface. If you leave this out, Graylog will pick your system default as the timezone. Usually
# you will want to configure it explicitly.
# timezone="Europe/Berlin"
timezone="Asia/Shanghai"

# Message field limit
# Your web interface can cause high load in your browser when you have a lot of different message
# fields. The default limit of message fields is 100. Set it to 0 if you always want to get all
# fields. They are for example used in the search result sidebar or for autocompletion of field names.
field_list_limit=100

# Use this to run Graylog with a path prefix
# application.context=/graylog2

# You usually do not want to change this.
application.global=lib.Global
