ELK installation: handling errors

Download: logstash, elasticsearch, kibana

Unpack the packages

cd elasticsearch-5.6.3
bin/elasticsearch &

The following error is reported:

[2017-10-14T11:27:39,953][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.6.3.jar:5.6.3]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:106) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:195) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) ~[elasticsearch-5.6.3.jar:5.6.3]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) ~[elasticsearch-5.6.3.jar:5.6.3]

Solution:

groupadd es                               # create the es group
useradd es -g es -p pwd                   # create the es user in the es group (-p expects an encrypted password)
chown -R es:es elasticsearch-5.6.3        # give the es user ownership of the directory
su es                                     # switch to the es user
cd elasticsearch-5.6.3
bin/elasticsearch &

The following errors are reported:

1:

max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

Solution:

vi /etc/security/limits.conf
*    soft    nofile    65536
*    hard    nofile    131072
*    soft    nproc     2048
*    hard    nproc     4096
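The new limits only apply to new sessions. After logging in again as the es user, they can be verified with:

ulimit -Sn    # soft open-files limit, should now be 65536
ulimit -Hn    # hard open-files limit, should now be 131072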

2:

max number of threads [1024] for user [es] is too low, increase to at least [2048]

Solution:

vi /etc/security/limits.d/90-nproc.conf
Change this line:
* soft nproc 1024
# to
* soft nproc 2048
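After logging in again as es, confirm the new process limit:

ulimit -u    # max user processes, should now be 2048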

3:

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Solution:

Edit the /etc/sysctl.conf config file. First check whether the setting is already there:
cat /etc/sysctl.conf | grep vm.max_map_count
vm.max_map_count=262144

If it is not present, add it:

echo "vm.max_map_count=262144" >>/etc/sysctl.conf
sysctl -p #立即生效
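To confirm the kernel picked up the change:

sysctl vm.max_map_count    # should print vm.max_map_count = 262144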

4:

system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Solution:

Set bootstrap.system_call_filter to false in elasticsearch.yml; note that it goes under the Memory section:

bootstrap.memory_lock: false
bootstrap.system_call_filter: false
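With these fixes elasticsearch should start as the es user. A quick sanity check against the default HTTP port:

curl http://localhost:9200    # should return JSON with the cluster name and version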

Use supervisor to manage logstash. The package below is for CentOS 7; for other versions, browse http://dl.fedoraproject.org/pub/epel/ to find the right one.

rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm
yum install -y supervisor

Each program gets its own file under /etc/supervisord.d/, ending in .ini, e.g. elkpro_1.ini, elkpro_2.ini.
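A minimal sketch of such a file (the program name, logstash path, and config-file path are assumptions; adjust to your layout):

[program:logstash]
command=/opt/logstash-5.6.3/bin/logstash -f /etc/logstash/conf.d/nginx.conf   ; assumed paths
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/logstash.log

Then reload supervisor so it picks the file up: supervisorctl update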


logstash-plugin: because of network conditions inside China, the default ruby source https://rubygems.org is almost unreachable. In the Gemfile and Gemfile.jruby-1.9.lock files under the logstash directory, change https://rubygems.org to the domestic mirror https://ruby.taobao.org/.
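A one-line way to make that substitution, assuming it is run from the logstash directory (-i.bak keeps .bak backups of both files):

sed -i.bak 's|https://rubygems.org|https://ruby.taobao.org/|g' Gemfile Gemfile.jruby-1.9.lock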


logstash fails to start with:

[FATAL][logstash.runner      	] An unexpected error occurred! {:error=>#<NameError: undefined local variable or method `dotfile' for #<AwesomePrint::Inspector:0x4c3c406a>>

Check the stdout { codec => json } in the output section. The codec had earlier been set to rubydebug, which kept producing the error above; after changing it to json the error disappeared.


Install x-pack; it has to be installed for both elasticsearch and kibana. Installation:

cd /opt/elasticsearch
./bin/elasticsearch-plugin install x-pack

Change the password; the default username and password are elastic / changeme:

curl -XPUT -u elastic 'localhost:9200/_xpack/security/user/elastic/_password' -d '{ "password" : "123456" }'
cd /opt/kibana
./bin/kibana-plugin install x-pack

Restart both services, then edit the logstash config file and add the credentials to the elasticsearch output block:

user => "elastic"
password => "123456"
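For context, a minimal sketch of where those two lines sit (host is a placeholder):

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        user => "elastic"
        password => "123456"
    }
}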

To have Kibana use AMap (Gaode) tiles so the map shows Chinese labels, edit the kibana config file kibana.yml and append at the end:

tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'

Then restart the kibana service.

To see where requests come from, a geoip block also has to be added to the conf file logstash uses to collect the logs, for example:

filter {
    geoip {
        source => "clientip"    # read the IP from the clientip field
        target => "geoip"       # put the extracted fields under geoip
        database => "/opt/logstash-5.6.3/GeoLite2-City.mmdb"    # path to the geoip database file; optional
    }
}

logstash configuration example:

input {
    file {
        path => [ "/data/logs/nginx/access.log" ]
        start_position => "beginning"
        ignore_older => 0
    }
}

filter {
    grok {
        patterns_dir => ["/etc/logstash/patterns"]
        match => { "message" => "%{NGINXACCESS}" }
    }

    geoip {
        source => "clientip"
        target => "geoip"
        database => "/etc/logstash/GeoLiteCity.dat"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }

    mutate {
        convert => [ "[geoip][coordinates]", "float" ]
        convert => [ "response", "integer" ]
        convert => [ "bytes", "integer" ]
        replace => { "type" => "nginx_access" }
        remove_field => "message"
    }

    date {
        match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }

    mutate {
        remove_field => "timestamp"
    }
}
output {
    elasticsearch {
        hosts => ["10.31.136.230:9200"]
        index => "logstash-nginx-access-%{+YYYY.MM.dd}"
    }
    stdout {codec => rubydebug}
}
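Before starting logstash with a config like this, the syntax can be checked first; -t (short for --config.test_and_exit) parses the config and exits (the paths here are assumptions):

/opt/logstash-5.6.3/bin/logstash -f /etc/logstash/conf.d/nginx.conf -t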

input multiline plugin example:

input {
    stdin {
        codec => multiline {
            # a line beginning with "[" starts a new event; lines beginning with
            # anything else are appended to the previous line. A pattern such as
            # .*javax.* also works (.* matches any characters), e.g.:
            # pattern => "\{\"log\"\:\"javax"
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
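With pattern => "^\[", negate => true and what => "previous", input such as these three lines (a made-up Java-style trace):

[2017-10-14 11:27:39] ERROR request failed
    at com.example.Foo.bar(Foo.java:10)
    at com.example.Baz.qux(Baz.java:22)

is collapsed into a single event: the indented lines do not start with "[" and are therefore appended to the previous line.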

grok

    grok {
        match => { "message" => "%{IPV4:clientip} %{USERNAME:username} %{USERNAME:username} \[%{HTTPDATE:time}] \"%{WORD:verb} %{UNIXPATH:url} HTTP/%{NUMBER:http_version}\" %{NUMBER:status} %{NUMBER:size} \"%{URI:referrer}\" %{QUOTEDSTRING:agent}" }
    }

    # match for parsing /var/log/messages
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:hostname} %{PROG:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:info}" }

    # custom regex captures for an nginx error-log line
    (?<datetime>\d\d\d\d/\d\d/\d\d \d\d:\d\d:\d\d) \[(?<errtype>\w+)\] \S+: \*\d+ (?<errmsg>[^,]+), (?<errinfo>.*)$

kv field-splitting plugin

kv {
    source => "request"
    field_split => "&"
    value_split => "="
    remove_field => ["ip"]
}
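For example, with a (hypothetical) field request = "user=es&action=login", this filter splits on & and = and produces the fields user => "es" and action => "login".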

date normalizes the time format, for example:

date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
}
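This pattern matches an nginx-style timestamp such as 14/Oct/2017:11:27:39 +0800 and, since no target is given, stores the parsed time in @timestamp.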

geoip

geoip {
    source => "clientip"
#   target => "geoip"       # put the fields extracted from clientip under a target field
#   database => "/etc/logstash/GeoLiteCity.dat"    # path to the IP database
}

docker: write container logs into the system's messages log

docker run --log-driver syslog -d -p 8097:8080 -v /opt/upload-web:/opt/tomcat/webapps -v /data:/data -v /tmp/tomcat-logs:/logs --name up80977 tomcat7/jdk7	# the key part is --log-driver syslog
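With --log-driver syslog the container's stdout/stderr goes to the host's syslog, so on CentOS (where syslog writes to /var/log/messages) the container logs can be followed with:

tail -f /var/log/messages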

Configuration file example:

input {
    file {
        path => ["/opt/nginx/logs/access.log"]
        codec => "json"
        type => "nginx_access"
        start_position => "beginning"
        sincedb_path => "/dev/null"    # always read from the beginning; for testing only
    }
    file {
        path => ["/opt/nginx/logs/error.log"]
        type => "nginx_error"
        start_position => "beginning"
    }
    file {
        path => ["/var/log/messages"]
        type => "messages"
    }
}

filter {
    # parse the messages log and record its fields
    if [type] == "messages" {
        grok {
            match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:hostname} %{PROG:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:info}" }
        }
    }
    # parse the nginx error log into named fields
    if [type] == "nginx_error" {
        grok {
            match => { "message" => "(?<datetime>\d\d\d\d/\d\d/\d\d \d\d:\d\d:\d\d) \[%{WORD:error_level}\] %{NUMBER:nginx_pid}\#%{NUMBER:processid}\: \*%{NUMBER} %{WORD}\(\) \"%{UNIXPATH:url}\" %{WORD:error_type} %{DATA:error_info}\, %{WORD}: %{IPORHOST:clientip}, %{WORD}: %{IPORHOST:servername}, %{WORD}\: \"%{WORD:verb} %{UNIXPATH} HTTP\/%{NUMBER:http_version}\"\, %{WORD}: \"%{IPORHOST:server_ip_add}" }
        }
    }
}

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "nginx_%{+YYYY.MM.dd}"    # index names must be lowercase and must not contain the special characters " * \ < | , > / ?
    }
    stdout { codec => rubydebug }
}

# extract the time as logdate, then parse logdate in the format yyyy-MM-dd HH:mm:ss,SSS and store it in @timestamp
filter {
	grok {
		match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
	}
	date {
		match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS"]
		target => "@timestamp"
	}
}

logstash configuration example:

input {
    file {
        type => "huachen_access"
        path => ["/opt/www/logs/nginx/huachen.access.log"]
    }
}

filter {
    grok {
        match => {
            "message" => "%{IPORHOST:clientip} \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:http_status_code} %{NUMBER:bytes} \"(?<http_referer>\S+)\" \"(?<http_user_agent>\S+)\" \"(?<http_x_forwarded_for>\S+)\""
        }
    }
}

output {
    if [type] == "huachen_access" {
        elasticsearch {
            hosts => "ES_IP:9200"    # replace ES_IP with the elasticsearch host address
            index => "huachen_access-%{+YYYY.MM.dd}"
        }
        stdout {
            codec => rubydebug
        }
    }
}
