loki

Introduction

Loki is an [[open source]] project from the Grafana Labs team. Inspired by Prometheus, it is a horizontally scalable, highly available, multi-tenant log aggregation system. It is designed to be very cost-effective and easy to operate, because it does not index the contents of the logs; instead it indexes a set of labels for each log stream.

Compared with other log aggregation systems, Loki has the following characteristics:

  • It does not build a full-text index of the logs. By storing compressed, unstructured logs and indexing only metadata, Loki is simpler and cheaper to operate.

  • It indexes and groups log streams using the same labels as Prometheus, which makes scaling and operating the logs more efficient.

  • It is a particularly good fit for storing Kubernetes Pod logs; metadata such as Pod labels is automatically scraped and indexed.

  • It is natively supported in Grafana.

Example docker-compose file for loki + promtail + grafana

Docker-loki.yaml

version: "3"

networks:
  loki:

services:
  loki:
    image: grafana/loki:latest
    volumes:
      - /usr/src/loki/conf:/mnt/config
    ports:
      - "3100:3100"
    command: -config.file=/mnt/config/loki-config.yaml
    networks:
      - loki

  promtail:
    image: grafana/promtail:latest
    volumes:
      - /var/log:/var/log
      - /usr/src/loki/conf:/mnt/config
    command: -config.file=/mnt/config/promtail-config.yaml
    networks:
      - loki

  grafana:
    image: grafana/grafana:master
    ports:
      - "3000:3000"
    networks:
      - loki

Loki and Promtail configuration files to mount

wget https://raw.githubusercontent.com/grafana/loki/v2.3.0/cmd/loki/loki-local-config.yaml -O loki-config.yaml
wget https://raw.githubusercontent.com/grafana/loki/v2.3.0/clients/cmd/promtail/promtail-docker-config.yaml -O promtail-config.yaml
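
A minimal sketch of bringing the stack up with these files, assuming the compose file above is saved as Docker-loki.yaml and that /usr/src/loki/conf (the path mounted by the compose file) can be created on the host:

# Put the downloaded configs where the compose file expects them
mkdir -p /usr/src/loki/conf
mv loki-config.yaml promtail-config.yaml /usr/src/loki/conf/

# Start loki, promtail and grafana in the background
docker-compose -f Docker-loki.yaml up -d

# List the three containers and their published ports
docker-compose -f Docker-loki.yaml ps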

The content of loki-config.yaml is as follows:

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

ingester:
  wal:
    enabled: true
    dir: /tmp/wal
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /tmp/loki/boltdb-shipper-active
    cache_location: /tmp/loki/boltdb-shipper-cache
    cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: filesystem
  filesystem:
    directory: /tmp/loki/chunks

compactor:
  working_directory: /tmp/loki/boltdb-shipper-compactor
  shared_store: filesystem

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

ruler:
  storage:
    type: local
    local:
      directory: /tmp/loki/rules
  rule_path: /tmp/loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true

The content of promtail-config.yaml is as follows:

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
- job_name: system
  static_configs:
  - targets:
      - localhost
    labels:
      job: varlogs
      __path__: /var/log/*
- job_name: system2
  static_configs:
  - targets:
      - localhost
    labels:
      job: varlogs2
      __path__: /var/log/*/*
- job_name: system3
  static_configs:
  - targets:
      - localhost
    labels:
      job: varlogs3
      __path__: /var/log/*/*/*/*
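
With the stack running, it can be sanity-checked from the host. A sketch against Loki's HTTP API on the port published by the compose file (3100); the query targets the varlogs job defined above:

# Loki answers "ready" once the ingester has started up
curl -s http://localhost:3100/ready

# Fetch a few recent lines for the varlogs job pushed by promtail
curl -G -s http://localhost:3100/loki/api/v1/query_range \
  --data-urlencode 'query={job="varlogs"}' \
  --data-urlencode 'limit=5'

In Grafana (http://localhost:3000) the logs become browsable after adding a Loki data source that points at http://loki:3100, the service name on the shared docker network.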

Example deployment on Kubernetes

Approach: create the Loki and Promtail configuration files as a ConfigMap and reference it from the other containers; deploy Promtail as a DaemonSet so that logs are collected on every node; and expose Grafana to the outside world through an Ingress.

apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-promtail-config
  namespace: ma
data:
  loki-config.yaml: |
    auth_enabled: false
    server:
      http_listen_port: 3100
      grpc_listen_port: 9096  
    ingester:
      wal:
        enabled: true
        dir: /tmp/wal
      lifecycler:
        address: 127.0.0.1
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
        final_sleep: 0s
      chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
      max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
      chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
      chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
      max_transfer_retries: 0     # Chunk transfers disabled  
    schema_config:
      configs:
        - from: 2020-10-24
          store: boltdb-shipper
          object_store: filesystem
          schema: v11
          index:
            prefix: index_
            period: 24h  
    storage_config:
      boltdb_shipper:
        active_index_directory: /tmp/loki/boltdb-shipper-active
        cache_location: /tmp/loki/boltdb-shipper-cache
        cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space
        shared_store: filesystem
      filesystem:
        directory: /tmp/loki/chunks  
    compactor:
      working_directory: /tmp/loki/boltdb-shipper-compactor
      shared_store: filesystem
    limits_config:
      reject_old_samples: true
      reject_old_samples_max_age: 168h  
    chunk_store_config:
      max_look_back_period: 0s  
    table_manager:
      retention_deletes_enabled: false
      retention_period: 0s  
    ruler:
      storage:
        type: local
        local:
          directory: /tmp/loki/rules
      rule_path: /tmp/loki/rules-temp
      alertmanager_url: http://localhost:9093
      ring:
        kvstore:
          store: inmemory
      enable_api: true
    
  promtail-config.yaml: |
    server:
      http_listen_port: 9080
      grpc_listen_port: 0    
    positions:
      filename: /tmp/positions.yaml
    clients:
      - url: http://loki.ma.svc.cluster.local:3100/loki/api/v1/push    # Loki's DNS name inside the k8s cluster
    scrape_configs:
    - job_name: demo_zoo-logs
      static_configs:
      - targets:
          - localhost
        labels:
          job: demo_zoo-logs
          __path__: /mnt/zoo-logs/demo/*/zookeeper/*/*.log  # host path mounted into the promtail container
    - job_name: pro_zoo-logs
      static_configs:
      - targets:
          - localhost
        labels:
          job: pro_zoo-logs
          __path__: /mnt/zoo-logs/pro/*/zookeeper/*/*.log
    - job_name: demo_projectlogs
      static_configs:
      - targets:
          - localhost
        labels:
          job: demo_projectlogs
          __path__: /mnt/projectlogs/demo/*/*/log/*
    - job_name: pro_projectlogs
      static_configs:
      - targets:
          - localhost
        labels:
          job: pro_projectlogs
          __path__: /mnt/projectlogs/pro/*/*/log/*
    
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loki
  namespace: ma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: loki
  template:
    metadata:
      labels:
        app: loki
    spec:
      containers:
      - image: grafana/loki:latest
        imagePullPolicy: IfNotPresent
        name: loki
        args: ["-config.file=/mnt/config/loki-config.yaml"]
        resources:
          requests:
            memory: "512Mi"
            cpu: "200m"
          limits:
            memory: "4096Mi"
            cpu: "1000m"
        ports:
        - containerPort: 3100
        volumeMounts:
        - name: loki-config
          mountPath: /mnt/config      
      volumes:
      - name: loki-config
        configMap:
          name: loki-promtail-config
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: promtail
  namespace: ma
spec:
  selector:
    matchLabels:
      app: promtail
  template:
    metadata:
      labels:
        app: promtail
      name: promtail
    spec:
      containers:
      - name: promtail
        image: grafana/promtail:latest
        imagePullPolicy: IfNotPresent
        args: ["-config.file=/mnt/config/promtail-config.yaml"]
        resources:
          requests:
            memory: "512Mi"
            cpu: "200m"
          limits:
            memory: "4096Mi"
            cpu: "1000m"
        volumeMounts:
        - mountPath: /mnt/config
          name: promtail-config
        - mountPath: /mnt/zoo-logs
          name: zoo-logs
        - mountPath: /mnt/projectlogs
          name: projectlogs
        ports:
        - containerPort: 9080   # matches http_listen_port in promtail-config.yaml
      volumes:
      - name: promtail-config
        configMap:
          name: loki-promtail-config
      - name: zoo-logs
        hostPath:
          path: /jpdata/data
      - name: projectlogs
        hostPath:
          path: /jpdata/log
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: ma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
      name: grafana
    spec:
      containers:
      - image: grafana/grafana:master
        imagePullPolicy: IfNotPresent
        name: grafana
        resources:
          requests:
            memory: "512Mi"
            cpu: "200m"
          limits:
            memory: "4096Mi"
            cpu: "2000m"
        ports:
        - containerPort: 3000
        volumeMounts:
        - mountPath: /var/log/grafana
          name: grafanalogs
      volumes:
      - name: grafanalogs
        hostPath:
          path: /jpdata/log/pro/ma/grafana/log
---
apiVersion: v1
kind: Service
metadata:
  name: loki
  namespace: ma
spec:
  selector:
    app: loki
  type: ClusterIP
  sessionAffinity: None
  ports:
  - name: "3100"
    port: 3100
    targetPort: 3100

---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: ma
spec:
  selector:
    app: grafana
  type: NodePort
  sessionAffinity: None
  ports:
  - name: "3000"
    port: 3000
    targetPort: 3000
    nodePort: 32700

---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: grafana
  namespace: ma
  annotations:
    kubernetes.io/elb.id: 81ab96dd-d0b9-4897-9daa-de2b7d14bd72
    kubernetes.io/elb.ip: 122.112.x.x
    kubernetes.io/elb.port: '80'
spec:
  rules:
    - host: log.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: grafana
              servicePort: 3000
            property:
              ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
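
A sketch of applying and checking the manifests, assuming they are saved together in a single file (loki-stack.yaml is an illustrative name) and that the ma namespace already exists:

kubectl apply -f loki-stack.yaml

# One promtail pod should be scheduled on every node
kubectl -n ma get pods -o wide

# Confirm promtail is tailing files and pushing to Loki without errors
kubectl -n ma logs daemonset/promtail --tail=20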



Promtail log-matching regular expressions

regex101 is a handy site for validating regular expressions online.

An example promtail.yaml configuration that uses regular expressions:

server:
  http_listen_port: 9080
  grpc_listen_port: 0    
positions:
  filename: /mnt/projectlogs/positions.yaml    
clients:
  - url: http://loki.ma.svc.cluster.local:3100/loki/api/v1/push    
scrape_configs:
- job_name: demo_web_projectlogs
  static_configs:
  - targets:
      - localhost
    labels:
      job: demo_web_projectlogs
      __path__: /mnt/projectlogs/*/*web*/log/*
  pipeline_stages:
   - match:
      selector: '{job="demo_web_projectlogs"}'  # 当job名匹配时,则对日志进行一下面的正则匹配
      stages:
      - regex:
          expression: '^(?P<remote_pod_addr>[\w\.]+) - (?P<remote_user>[^ ]*) \[(?P<time_local>.*)\] \"(?P<method>[^ ]*) (?P<request>[^ ]*) (?P<protocol>[^ ]*)\" (?P<access_host>[a-z.]+) (?P<status>[\d]+) (?P<body_bytes_sent>[\d]+) \"(?P<http_referer>[^"]*)\" \"(?P<http_user_agent>[^"]*)\" \"(?P<remote_addr>[\w\.]+)\" (?P<upstream_addr>[^ ]+) (?P<upstream_status>[^ ]+) - (?P<a>.+) \> (?P<response_time>[\w.]+)'
      - labels:
          remote_pod_addr:  # label names to expose, one per line
          remote_user:
          time_local:
          status:
          response_time:
- job_name: demo_java_projectlogs
  static_configs:
  - targets:
      - localhost
    labels:
      job: demo_java_projectlogs
      __path__: /mnt/projectlogs/*/*core*/log/*
  pipeline_stages:
  - match:
      selector: '{job="demo_java_projectlogs"}'
      stages:
      - regex:
          expression: '^(?P<date>[0-9- :.]+) (?P<level>INFO|ERROR|WARN) (?P<number>[0-9 ---]+) \[(?P<action>.*)\] (?P<commonapi>[a-zA-Z. :]+) (?P<info_body>[\S ]*)'
      - labels:
          date:
          level:
          action:
          info_body:
- job_name: demo_api_projectlogs
  static_configs:
  - targets:
      - localhost
    labels:
      job: demo_api_projectlogs
      __path__: /mnt/projectlogs/*/*api*/log/*
  pipeline_stages:
  - match:
      selector: '{job="demo_api_projectlogs"}'
      stages:
      - regex:
          expression: '^(?P<date>[0-9- :.]+) (?P<level>INFO|ERROR|WARN) (?P<number>[0-9 ---]+) \[(?P<action>.*)\] (?P<commonapi>[a-zA-Z. :]+) (?P<info_body>[\S ]*)'
      - labels:
          date:
          level:
          action:
          info_body:
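
The labels extracted by these pipeline stages can then be used as selectors. A hedged example query against Loki's API for ERROR-level lines from the java job above (the host is the in-cluster service name used earlier; adjust it to wherever Loki is reachable):

curl -G -s http://loki.ma.svc.cluster.local:3100/loki/api/v1/query_range \
  --data-urlencode 'query={job="demo_java_projectlogs", level="ERROR"}' \
  --data-urlencode 'limit=10'

The same selector can be pasted into Grafana's Explore view once Loki is configured as a data source.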
