Logstash keeps shutting down inside its Docker container

I am using docker-compose to run the ELK stack. My main goal is to run the elasticsearch and logstash containers: the logstash container should connect to elasticsearch and ship logs to it for further search and processing.

For a reason I cannot identify, however, the logstash container keeps stopping. I need both the logstash and elasticsearch containers to stay up, but that is not happening.

I do not know what causes the logstash container to shut down periodically.

I am using elasticsearch:7.6.2 and logstash:7.6.2.

Please review the code below and point out where I went wrong.

docker-compose.yml

# Docker version 19.03.5
# docker-compose version 1.25.3
version: "3.7"
services:
  elasticsearch:
    container_name: elasticsearch
    build:
      context: ./elasticsearch
      dockerfile: Dockerfile
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data:rw
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs:rw
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elkb
  logstash:
    container_name: logstash
    build:
      context: ./logstash
      dockerfile: Dockerfile
    ports:
      - 9600:9600
      - 5000:5000/udp
      - 5000:5000/tcp
    volumes:
      - ./logstash/input-logs:/usr/share/logstash/logs
      - ./logstash/data:/var/lib/logstash:rw
      - ./logstash/logs:/var/logs/logstash:rw
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elkb
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

networks:
  elkb:
    driver: bridge

volumes:
  elasticsearch:

Elasticsearch Dockerfile

FROM docker.elastic.co/elasticsearch/elasticsearch:7.6.2
COPY ./elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
RUN mkdir -p /var/log/elasticsearch
RUN chown -R elasticsearch:elasticsearch /var/log/elasticsearch
RUN mkdir -p /var/lib/elasticsearch
RUN chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
EXPOSE 9200
EXPOSE 9300

elasticsearch.yml

cluster.name: es_cluster
node.name: es_node_1
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["0.0.0.0"]
cluster.initial_master_nodes: ["es_node_1"]

Logstash Dockerfile

FROM docker.elastic.co/logstash/logstash:7.6.2
COPY logstash.yml /usr/share/logstash/config/logstash.yml
COPY ./pipeline/logstash.conf /usr/share/logstash/pipeline/logstash.conf
EXPOSE 9600

logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: "http://elasticsearch:9200"
xpack.monitoring.enabled: true

logstash.conf

input{
  stdin{}
}
output{
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
  }
}
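A note on the pipeline above: with only a stdin input, Logstash shuts down as soon as stdin reaches EOF, which happens immediately in a detached container with no TTY attached; this matches the "Pipeline terminated" and "Logstash shut down" lines in the logs below. One possible way to keep stdin open while experimenting (an assumption for testing, not part of the original setup) is to add the following to the logstash service in docker-compose.yml; the answer below instead replaces the stdin input with a beats input, which avoids the problem entirely.

```
  logstash:
    # Hypothetical tweak: keep stdin open so the stdin input never sees EOF
    stdin_open: true   # equivalent of docker run -i
    tty: true          # equivalent of docker run -t
```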

Logstash container logs

container_logstash    | WARNING: An illegal reflective access operation has occurred
container_logstash    | WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.9.0.jar) to method sun.nio.ch.NativeThread.signal(long)
container_logstash    | WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
container_logstash    | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
container_logstash    | WARNING: All illegal access operations will be denied in a future release
container_logstash    | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
container_logstash    | [2020-04-25T14:50:33,271][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.6.2"}
container_logstash    | [2020-04-25T14:50:34,013][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
container_logstash    | [2020-04-25T14:50:34,127][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
container_logstash    | [2020-04-25T14:50:34,157][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
container_logstash    | [2020-04-25T14:50:34,160][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
container_logstash    | [2020-04-25T14:50:34,243][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
container_logstash    | [2020-04-25T14:50:34,244][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
container_logstash    | [2020-04-25T14:50:34,982][INFO ][org.reflections.Reflections] Reflections took 22 ms to scan 1 urls, producing 20 keys and 40 values 
container_logstash    | [2020-04-25T14:50:35,126][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
container_logstash    | [2020-04-25T14:50:35,134][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
container_logstash    | [2020-04-25T14:50:35,138][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
container_logstash    | [2020-04-25T14:50:35,138][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
container_logstash    | [2020-04-25T14:50:35,159][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
container_logstash    | [2020-04-25T14:50:35,182][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
container_logstash    | [2020-04-25T14:50:35,206][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
container_logstash    | [2020-04-25T14:50:35,213][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>750, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x27747a5a run>"}
container_logstash    | [2020-04-25T14:50:35,213][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
container_logstash    | [2020-04-25T14:50:35,711][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
container_logstash    | [2020-04-25T14:50:35,738][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
container_logstash    | [2020-04-25T14:50:36,233][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"ebdd88635541942b096027ed79be84efc3dd562a5f0e1b78fca83c7b5c9a1a7c", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_031a6e38-cafd-42f9-b689-b577ba9acc88", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
container_logstash    | [2020-04-25T14:50:36,246][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
container_logstash    | [2020-04-25T14:50:36,250][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
container_logstash    | [2020-04-25T14:50:36,253][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] ES Output version determined {:es_version=>7}
container_logstash    | [2020-04-25T14:50:36,253][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
container_logstash    | [2020-04-25T14:50:36,268][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
container_logstash    | [2020-04-25T14:50:36,271][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x6e9553e7 run>"}
container_logstash    | [2020-04-25T14:50:36,288][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
container_logstash    | [2020-04-25T14:50:36,294][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:".monitoring-logstash"], :non_running_pipelines=>[:main]}
container_logstash    | [2020-04-25T14:50:36,398][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
container_logstash    | [2020-04-25T14:50:37,402][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
container_logstash    | [2020-04-25T14:50:38,337][INFO ][logstash.runner          ] Logstash shut down.

Please let me know if you need more clarification or additional information.

Thanks in advance for any solution.


person Dipak    schedule 25.04.2020    source
comment
Did you solve the problem?   -  person Konstantin Komissarov    schedule 22.06.2020
comment
Yes, I solved it.   -  person Dipak    schedule 02.07.2020
comment
... then please post it as an answer to your own question.   -  person bellackn    schedule 15.09.2020
comment
@bellackn OK, give me today, I will post the answer.   -  person Dipak    schedule 15.09.2020
comment
@Dipakchavda, please share your solution, since I have the same problem and cannot figure out what is wrong. Thanks!   -  person wobmene    schedule 19.12.2020


Answers (1)


@wobmene @bellackn Sorry for the delayed reply to the question I asked a long time ago.

To resolve the issues above, I reconfigured the ELK stack with the following setup. I may not be able to give a fully qualified justification for this answer, but I did my best.

quinn is the name I used for this build and its services.

ELKB repository structure

elkb
    - elasticsearch
        Dockerfile
        elasticsearch.yml
    - filebeat
        Dockerfile
        filebeat.yml
    - kibana
        Dockerfile
        kibana.yml
    - logstash
        - pipeline
            logstash.conf
        Dockerfile
        logstash.yml
    docker-compose.yml

ELKB ports

 - elasticsearch: 9200/9300  
 - logstash: 9600  
 - kibana: 5601  
 - filebeat: 5044

elkb/docker-compose.yml

# Docker version 19.03.5
# docker-compose version 1.25.3

version: "3.7"
services:
  elasticsearch:
    container_name: elasticsearch
    build:
      context: ./elasticsearch
      dockerfile: Dockerfile
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data:rw
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs:rw
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - quinn_elkb

  quinn_logstash:
    container_name: quinn_logstash
    build:
      context: ./logstash
      dockerfile: Dockerfile
    ports:
      - 9600:9600
      - 5000:5000/udp
      - 5000:5000/tcp
    volumes:
      - ./logstash/input-logs:/usr/share/logstash/logs
      - ./logstash/data:/var/lib/logstash:rw
      - ./logstash/logs:/var/logs/logstash:rw
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - quinn_elkb
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  quinn_kibana:
    container_name: quinn_kibana
    build:
      context: ./kibana
      dockerfile: Dockerfile
    ports:
      - 5601:5601
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - quinn_elkb
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  quinn_filebeat:
    container_name: quinn_filebeat
    build:
      context: ./filebeat
      dockerfile: Dockerfile
    ports:
      - 5044:5044
    volumes:
      - ./../logs:/input-logs
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - quinn_elkb
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

networks:
  quinn_elkb:
    driver: bridge

volumes:
  elasticsearch:
    driver: local

elkb/elasticsearch/Dockerfile

FROM docker.elastic.co/elasticsearch/elasticsearch:7.6.2
COPY ./elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
RUN mkdir -p /var/log/elasticsearch
RUN chown -R elasticsearch:elasticsearch /var/log/elasticsearch
RUN mkdir -p /var/lib/elasticsearch
RUN chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
EXPOSE 9200
EXPOSE 9300

elkb/elasticsearch/elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: quinn_es_cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: quinn_es_node_1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# ${path.data}
#
# Path to log files:
#
# ${path.logs}
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.seed_hosts: ["127.0.0.1", "[::1]", "0.0.0.0"]
discovery.seed_hosts: ["0.0.0.0"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["quinn_es_node_1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

elkb/filebeat/Dockerfile

FROM docker.elastic.co/beats/filebeat:7.6.2
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN mkdir -p /input-logs/
# RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN chmod go-w /usr/share/filebeat/filebeat.yml
USER filebeat
EXPOSE 5044

elkb/filebeat/filebeat.yml

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # here is the reference of docker directory.
      # The current directory of docker is /usr/share/filebeat
      - ../../../input-logs/**/*.log

processors:
  - add_docker_metadata: ~

reload.enabled: true
reload.period: 10s

output.logstash:
  hosts: ["quinn_logstash:5044"]

logging.json: true
logging.metrics.enabled: false
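The relative glob in paths is resolved against Filebeat's working directory inside the container, /usr/share/filebeat, so ../../../input-logs lands on the /input-logs directory created in the filebeat Dockerfile and bind-mounted in docker-compose.yml. A small Python sketch of that path resolution (illustrative only, not Filebeat's own code):

```python
import os.path

# /usr/share/filebeat is the working directory of the official Filebeat image
workdir = "/usr/share/filebeat"
pattern = "../../../input-logs/**/*.log"

# Joining and normalizing shows where the relative glob actually points
resolved = os.path.normpath(os.path.join(workdir, pattern))
print(resolved)  # /input-logs/**/*.log
```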

elkb/kibana/Dockerfile

FROM docker.elastic.co/kibana/kibana:7.6.2
COPY ./kibana.yml /usr/share/kibana/config/kibana.yml
EXPOSE 5601

elkb/kibana/kibana.yml

server.name: quinn_kibana
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true
## X-Pack security credentials
# elasticsearch.username: elastic
# elasticsearch.password: changeme

elkb/logstash/pipeline/logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
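The index option uses Logstash sprintf references: [@metadata][beat] and [@metadata][version] are supplied by the Beats client, and %{+YYYY.MM.dd} comes from the event timestamp, so events land in daily, per-beat indices. A rough Python sketch of how one such name expands (this only mimics the substitution; it is not Logstash's implementation):

```python
from datetime import date

# Hypothetical event metadata, as a Filebeat 7.6.2 shipper would attach it
metadata = {"beat": "filebeat", "version": "7.6.2"}

def resolve_index(meta, day):
    # Mimics "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    return f"{meta['beat']}-{meta['version']}-{day.strftime('%Y.%m.%d')}"

print(resolve_index(metadata, date(2020, 12, 24)))  # filebeat-7.6.2-2020.12.24
```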

elkb/logstash/Dockerfile

FROM docker.elastic.co/logstash/logstash:7.6.2
COPY ./logstash.yml /usr/share/logstash/config/logstash.yml
COPY ./pipeline/logstash.conf /usr/share/logstash/pipeline/logstash.conf
EXPOSE 9600

elkb/logstash/logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: "http://elasticsearch:9200"
xpack.monitoring.enabled: true
# xpack.monitoring.elasticsearch.username: elastic
# xpack.monitoring.elasticsearch.password: changeme

I read the following articles; all of them are excellent references that helped me resolve the issues above and set up the ELK stack.

 - https://medium.com/@sece.cosmin/docker-logs-with-elastic-stack-elk-filebeat-50e2b20a27c6
 - https://github.com/cosminseceleanu/tutorials
 - https://elk-docker.readthedocs.io/#prerequisites
 - https://github.com/elastic/stack-docker/blob/master/docker-compose.yml
 - https://github.com/elastic/elasticsearch/blob/master/distribution/docker/docker-compose.yml
 - http://cambio.name/index.php/node/522

person Dipak    schedule 24.12.2020