ELK 7.x Installation and Basic Configuration

ELK is an acronym formed from the initials of three products: Elasticsearch, Logstash, and Kibana.

  • Elasticsearch: a search engine that also serves as the log store.

  • Logstash: a log collection tool (Filebeat is a similar option) that collects and filters different types of logs and ships them to Elasticsearch.

  • Kibana: a web UI for querying logs graphically; it also provides dashboards, alert notifications, and other features.

Software Installation

# Install the GPG signing key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -


# APT (Debian/Ubuntu)
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install logstash elasticsearch kibana

# YUM (RHEL/CentOS)
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

cat > /etc/yum.repos.d/logstash.repo << EOF
[logstash-7.x] 
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1 
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1 
autorefresh=1 
type=rpm-md
EOF

sudo yum install logstash elasticsearch kibana
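
After installation, enable and start the services. A minimal sketch, assuming the systemd units shipped with the official packages:

sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch kibana logstash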

Elasticsearch Configuration

Single-node Elasticsearch Configuration

/etc/elasticsearch/elasticsearch.yml


# ======================== Elasticsearch Configuration =========================
# ---------------------------------- Cluster -----------------------------------
cluster.name: my-es

# ------------------------------------ Node ------------------------------------
node.name: test-2

# ----------------------------------- Paths ------------------------------------
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

# ----------------------------------- Memory -----------------------------------
#bootstrap.memory_lock: true

# ---------------------------------- Network -----------------------------------
network.host: 172.23.210.23
http.port: 9200

# --------------------------------- Discovery ----------------------------------
discovery.type: single-node # add this setting when running a single node
#discovery.seed_hosts: ["host1", "host2"]
# Note: discovery.seed_hosts must not be set when discovery.type is single-node
#discovery.seed_hosts: ["172.23.210.23"]

# cluster.initial_master_nodes: ["test-2"]
#cluster.initial_master_nodes: ["node-1", "node-2"]

# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: true
xpack.security.enabled: true
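
Restart Elasticsearch and confirm it responds. Pa55w0rd below is a placeholder for the elastic password set in the "Setting Passwords" section further down:

sudo systemctl restart elasticsearch
curl -u elastic:Pa55w0rd http://172.23.210.23:9200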

Elasticsearch Cluster Configuration

Generating SSL Certificates

cd /usr/share/elasticsearch/    # the ES install directory; adjust to your environment
./bin/elasticsearch-certutil ca
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

# Copy the certificates into the ES config directory and adjust ownership
# and permissions so the elasticsearch user can read them
cp elastic-* /etc/elasticsearch/
cd /etc/elasticsearch/ && chown root:elasticsearch elastic-* && chmod 660 elastic-*
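
Every node in the cluster needs the same certificate at the same path. A sketch of copying it to the second node from this example, assuming root SSH access:

scp /etc/elasticsearch/elastic-certificates.p12 root@172.23.210.24:/etc/elasticsearch/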

/etc/elasticsearch/elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
# ---------------------------------- Cluster -----------------------------------
cluster.name: my-es # cluster name; must be identical on every node

# ------------------------------------ Node ------------------------------------
node.name: test-2   # must be unique on each node

# ----------------------------------- Paths ------------------------------------
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

# ----------------------------------- Memory -----------------------------------
#bootstrap.memory_lock: true

# ---------------------------------- Network -----------------------------------
network.host: 172.23.210.23     # this node's IP
http.port: 9200

# --------------------------------- Discovery ----------------------------------
#discovery.seed_hosts: ["host1", "host2"]
discovery.seed_hosts: ["172.23.210.23","172.23.210.24"]

# Nodes eligible to take part in the initial master election.
# Note: these entries must match each node's node.name exactly.
#cluster.initial_master_nodes: ["node-1", "node-2"]
cluster.initial_master_nodes: ["172.23.210.23","172.23.210.24"]
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

# Mechanical disks handle concurrent I/O poorly, so lower the number of threads
# that each index may use for concurrent disk access. The scheduler allows
# max_thread_count + 2 concurrent disk operations, so a value of 1 allows three threads.
# Note: in 7.x, index-level settings are not allowed in elasticsearch.yml; apply this
# per index through the settings API instead (see the sketch below).
#index.merge.scheduler.max_thread_count: 1
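
A sketch of applying the merge setting through the index settings API instead; the index pattern and credentials are placeholders:

curl -u elastic:Pa55w0rd -H 'Content-Type: application/json' \
  -XPUT "http://172.23.210.23:9200/messages-*/_settings" \
  -d '{"index.merge.scheduler.max_thread_count": 1}'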

Configuring Elasticsearch Memory

Configuration file: /etc/elasticsearch/jvm.options

...
# Size these to your server's RAM. Keep -Xms and -Xmx equal, and no more than
# half of the machine's memory.
-Xms2g
-Xmx2g
...
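
Restart and verify the effective heap size (placeholder credentials):

sudo systemctl restart elasticsearch
curl -u elastic:Pa55w0rd "http://172.23.210.23:9200/_cat/nodes?v&h=name,heap.max"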

Setting Passwords for Elasticsearch

Prerequisites:

  • If ES runs as a single node, set discovery.type: single-node in the elasticsearch.yml configuration file.

  • Enable (or add) the xpack.security.enabled: true setting.

Run the following command; it prompts you to set passwords for the elastic, apm_system, kibana, kibana_system, logstash_system, beats_system, and remote_monitoring_user users in turn:

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive|auto  # auto generates random passwords; interactive prompts for each one
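
A password can be rotated later through the security API. A minimal sketch; the user and both passwords are placeholders:

curl -u elastic:Pa55w0rd -H 'Content-Type: application/json' \
  -XPOST "http://172.23.210.23:9200/_security/user/kibana_system/_password" \
  -d '{"password": "NewPa55w0rd"}'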

Checking Node Status

Check in a browser, or from the command line as shown after the list:

  • http://ip:9200/_cat/nodes

  • http://ip:9200/_cat/health
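
The same checks via curl (placeholder credentials; ?v adds column headers):

curl -u elastic:Pa55w0rd "http://172.23.210.23:9200/_cat/nodes?v"
curl -u elastic:Pa55w0rd "http://172.23.210.23:9200/_cat/health?v"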

Deleting Old Logs

According to the official documentation, starting with Elasticsearch v7.0.0 each node in the cluster is limited to 1,000 shards by default, so an ES cluster with 3 data nodes can hold at most 3,000 shards.

Run the commands below manually, or schedule them as a recurring job:

LAST_DATE=$(date -d "-15 days" "+%Y.%m.%d")

# Add -u elastic:<password> if security is enabled
curl -XDELETE "http://IP:port/*-${LAST_DATE}*"
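
A minimal cron sketch; the script path is hypothetical and would contain the two lines above:

# /etc/crontab entry: purge indices older than 15 days, daily at 02:00
0 2 * * * root /usr/local/bin/es-purge-old-indices.sh >> /var/log/es-purge.log 2>&1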

Logstash Configuration

/etc/logstash/logstash.yml

path.data: /var/lib/logstash
path.logs: /var/log/logstash

/etc/logstash/conf.d/logstash.conf


# Sample Logstash configuration for a simple
# file -> Logstash -> Elasticsearch pipeline.

input {
  file {
    path => ["/var/log/messages"]
    type => "messages"
    start_position => "beginning"
  }
  file {
    path => ["/var/log/secure"]
    type => "secure_log"
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => ["http://172.23.210.23:9200"]
    index => "messages-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "Pa55w0rd"
  }
}
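
Before starting the service, the pipeline syntax can be validated; --config.test_and_exit parses the configuration and exits without running it:

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf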

Example Configuration (Kafka input)

input {
  kafka {
    bootstrap_servers => ["192.168.160.247:9092"]
    topics => ["pro-jsc-gw-core-1"]
    group_id => "pro"
    type => "pro-jsc-gw-core-1"
    consumer_threads => 1
  }
  kafka {
    bootstrap_servers => ["192.168.160.247:9092"]
    topics => ["pro-jsc-core-1"]
    group_id => "pro"
    type => "pro-jsc-core-1"
    consumer_threads => 1
  }
  kafka {
    bootstrap_servers => ["192.168.160.247:9092"]
    topics => ["pro-qsgh5-web-1"]
    group_id => "pro"
    type => "pro-qsgh5-web-1"
    consumer_threads => 1
  }

}

filter {
  if [type] =~ /pro-(jsjxz|jsjjm|szt|jsbwz|mml|mmlbk|pjj|pqs)-web-1/ {
    grok {
      match => { "message" => "%{IPV4:ream_addr} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"%{WORD:http_method} %{URIPATHPARAM:request} HTTP/%{NUMBER:http_version}\" %{DATA:http_host} %{NUMBER:status} %{NUMBER:bytes_sent} \"%{DATA:http_referer}\" \"%{DATA:http_user_agent}\" \"%{IP:client_ip}\"" }
      patterns_dir => "/etc/logstash/patterns/"
      overwrite => [ "message" ]
    }
  }
  if [type] == "pro-szt-web-2" {
    grok {
      match => { "message" => "%{IPV4:ream_addr} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"%{WORD:http_method} %{URIPATHPARAM:request} HTTP/%{NUMBER:http_version}\" %{DATA:http_host} %{NUMBER:status} %{NUMBER:bytes_sent} \"%{DATA:http_referer}\" \"%{DATA:http_user_agent}\" \"%{IP:client_ip}\"" }
      patterns_dir => "/etc/logstash/patterns/"
      overwrite => [ "message" ]
    }
  }
  if [type] =~ /pro-(fht-wap|hxw-wap|hxw-web|report|slwweb|syjext|syjshare|wx|xcz|xgj|kfb|qsgh5)-web-1/ {
    grok {
      match => { "message" => "%{QS}\:\"%{LOGLEVEL:log_level}\"\,%{QS}\:\"\| %{NUMBER:status_code} \| %{SPACE} %{DATA:response_time}.*\|.*%{IP:client_ip}.*\|.*%{WORD:http_method}\s*%{URIPATHPARAM:request}\s*\|.*%{TIMESTAMP_ISO8601:access_time}" }
      overwrite => [ "message" ]
    }
  }
  if [type] == "pro-mml-api-1" {
    grok {
      match => { "message" => "%{IPV4:client_ip}.*\[%{HTTPDATE:time}\].*%{WORD:http_method} %{URIPATH:request} HTTP/%{NUMBER:http_version}.* %{NUMBER:status_code} %{NUMBER:bytes_sent} %{NUMBER:request_time}" }
      overwrite => [ "message" ]
    }
  }
  geoip {
    source => "client_ip"
    target => "geoip"
    database => "/etc/logstash/GeoLite2-City.mmdb"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }

  mutate {
    convert => [ "[geoip][coordinates]", "float" ]
    convert => [ "status","integer" ]
  }

}

output {
  if [type] == "pro-jsc-gw-core-1" {
    elasticsearch {
      hosts => ["http://192.168.160.242:9200","http://192.168.160.247:9200"]
      index => "pro-jsc-gw-core-1-%{+YYYY.MM}"
      user => "elastic"
      password => "passwd"
    }
  }
  if [type] == "pro-thd-core-1" {
    elasticsearch {
      hosts => ["http://192.168.160.242:9200","http://192.168.160.247:9200"]
      index => "pro-thd-core-1-%{+YYYY.MM}"
      user => "elastic"
      password => "passwd"
    }
  }
  if [type] == "pro-qsgh5-web-1" {
    elasticsearch {
      hosts => ["http://192.168.160.242:9200","http://192.168.160.247:9200"]
      index => "pro-qsgh5-web-1-%{+YYYY.MM}"
      user => "elastic"
      password => "passwd"
    }
  }
  stdout { codec => rubydebug }  # console output for debugging; remove in production
}
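
Restart Logstash and tail its log to confirm the pipelines start cleanly:

sudo systemctl restart logstash
tail -f /var/log/logstash/logstash-plain.log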

Kibana Configuration

Basic settings in /etc/kibana/kibana.yml:

server.name: "test-03"
elasticsearch.hosts: ["http://172.23.210.23:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "passwd"
server.ssl.enabled: false
# UI locale
i18n.locale: "zh-CN"    
#i18n.locale: "en"

# Raise the report export limit to 30 MB; the default is 10 MB (10485760 bytes).
# Raising this is not recommended on low-spec machines.
# MB-to-bytes converter: https://www.gbmb.org/megabytes
xpack.reporting.csv.maxSizeBytes: 31457280

Memory settings live in /etc/kibana/node.options. Kibana has a default memory limit that scales with the total available memory; in some cases, such as exporting large log files, raise the limit to meet the export's needs.

## Node command line options
## See `node --help` and `node --v8-options` for available options
## Please note you should specify one option per line

## max size of old space in megabytes
--max-old-space-size=2048
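
Restart Kibana after changing either file and confirm it is listening (5601 is the default port):

sudo systemctl restart kibana
curl -I http://localhost:5601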

Managing ES indices from Kibana's "Dev Tools" -> "Console"

# Delete an index
DELETE index_name
# View a specific index; "number_of_shards" : "1" means one primary shard, "number_of_replicas" : "1" means one replica
GET /pro-jsc-jm-core-1-2022.01

# Change an index's replica count
PUT /pro-jsc-jm-core-1-2022.01/_settings
{
  "index" : {
    "number_of_replicas" : 3
  }
}

# Delete indices matching the following patterns
DELETE *-2022.02
DELETE .reporting-2022-03*
DELETE demo-mml*2022.08
DELETE pro-mml*2022.06
DELETE *2022.07

DELETE logserver-0002-syslog-2022.02

# Check ES cluster health
GET /_cat/health

# List ES nodes
GET /_cat/nodes

# Change an index's replica count
PUT /logserver-0002-syslog-2022.07/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}

# Change replica counts in bulk with a wildcard
PUT /*2022.05/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}

# List all indices
GET /_cat/indices?v

GET /_cat/indices/pro-api*?v

GET /_cat/indices/pro-fng-front-1-2022.04

# List indices beginning with pro
GET /_cat/indices/pro*2022.06?v

GET /_cat/indices/*2022.08

GET /_cat/indices/pro-core-1*?v
GET /_cat/indices/pro-core-1-2022.04?v

GET /_cat/indices/demo*?v
GET /_cat/indices/demo-*2022.04?v

GET /_cat/indices/*2022.08*?v

# View shard allocation, health, and state for an index
GET /_cat/shards/pro-core-1-2022.01

GET /_cat/shards/pro-web-1-2022.01

# View detailed information for a specific index
GET /pro-web-1-2022.01

# View index information
GET /pro-core-1-2022.03

Troubleshooting Common Errors

Elasticsearch fails to start with "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]":

# Persist the setting across reboots
vi /etc/sysctl.conf
	vm.max_map_count = 262144

# Apply immediately without rebooting
sysctl -w vm.max_map_count=262144

# If Elasticsearch runs in Docker, restart Docker for the change to take effect
systemctl restart docker
