Elasticsearch Single Node Cluster - Docker

Set up Elasticsearch on a dev machine in no time.

Preface

This post assumes that you have some basic understanding of Docker, Docker Compose, and the key components used in the Docker ecosystem.

  1. Install Docker

  2. Install Docker Compose

To get up to speed with Docker, follow the Prepare Your Docker Environment section of the Docker docs. A quick sanity check of the installation is shown below.
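These commands (assuming a Linux or macOS shell) simply confirm that Docker and Docker Compose are installed and that the daemon can run containers; the exact version numbers will differ on your machine.

docker --version
docker-compose --version
docker run --rm hello-world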

Elasticsearch Single Node Cluster

There are multiple ways to set up a single-node Elasticsearch cluster; here we use containers. Create a docker-compose.yml file with the configuration below to run a single-node Elasticsearch container.

This is not recommended for production.

Here is the Compose definition for the Elasticsearch and Kibana services:

version: '3.6'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
    container_name: elasticsearch
    environment:
      - cluster.name=es-data-cluster
      - node.name=es-node
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - network.host=0.0.0.0
      - transport.host=0.0.0.0
      # discovery.zen.minimum_master_nodes is deprecated in 7.x and not needed with single-node discovery
      # - discovery.zen.minimum_master_nodes=1
      - xpack.license.self_generated.type=trial
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      # - xpack.monitoring.enabled='false'
      # - xpack.watcher.enabled='false'
      # - xpack.ml.enabled='false'
      # - http.cors.enabled='true'
      # - http.cors.allow-origin="*"
      # - http.cors.allow-methods=OPTIONS, HEAD, GET, POST, PUT, DELETE
      # - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type, Content-Length
      # - logger.level: debug
    ports: ['9200:9200']
    networks: ['stack']
    volumes:
      - 'es_data:/usr/share/elasticsearch/data'
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.0
    container_name: kibana
    environment:
      SERVER_NAME: kibana
      SERVER_HOST: "0"
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
    ports: ['5601:5601']
    depends_on: ['elasticsearch']
    networks: ['stack']
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:5601/api/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
networks: {stack: {}}

volumes:
  es_data:
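If the Elasticsearch container exits during startup with a bootstrap check error about vm.max_map_count, raise that kernel setting on the Linux host (a host-level command, not part of the compose file; Elasticsearch requires at least 262144). With bootstrap.memory_lock=true you may also need to grant the service a memlock ulimit.

# one-off, resets on reboot
sudo sysctl -w vm.max_map_count=262144

# persist across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf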

Run

docker-compose up -d
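Once both containers report healthy, a quick check from the host confirms the services are reachable (ports as mapped in the compose file above):

curl -s http://localhost:9200     # Elasticsearch node and version info
curl -s -I http://localhost:5601  # Kibana responds once it has finished starting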
  • Check the status of the running containers with docker-compose ps, docker container ls, or docker ps -a
➜   docker-compose ps

NAME                COMMAND                  SERVICE             STATUS              PORTS
elasticsearch       "/bin/tini -- /usr/l…"   elasticsearch       running (healthy)   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp
kibana              "/bin/tini -- /usr/l…"   kibana              running (healthy)   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp
➜    docker container ls

CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS                   PORTS                                                 NAMES
5181d70ca24e   4334c025a5e0   "/bin/tini -- /usr/l…"   9 minutes ago   Up 9 minutes (healthy)   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp             kibana
b621589570cf   53ecd52afaa0   "/bin/tini -- /usr/l…"   9 minutes ago   Up 9 minutes (healthy)   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp   elasticsearch
➜   docker ps -a

CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS                   PORTS                                                 NAMES
5181d70ca24e   4334c025a5e0   "/bin/tini -- /usr/l…"   9 minutes ago   Up 9 minutes (healthy)   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp             kibana
b621589570cf   53ecd52afaa0   "/bin/tini -- /usr/l…"   9 minutes ago   Up 9 minutes (healthy)   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp   elasticsearch
  • Check the logs of each container with docker logs elasticsearch and docker logs kibana (add -f to follow)
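When you are done, the stack can be stopped and removed; the -v flag also deletes the es_data volume, wiping any indexed data.

docker-compose down        # stop and remove the containers and the stack network
docker-compose down -v     # additionally remove the es_data volume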

Elasticsearch API

  • Cluster State API: returns metadata about the state of the cluster
curl -XGET 'http://localhost:9200/_cluster/state?pretty'
{
  "cluster_name" : "\"es-data-cluster\"",
  "cluster_uuid" : "v0XNVN0rRBWGW-hwL8eLyA",
  "version" : 120,
  "state_uuid" : "fckMNcQjS-CbFKeNtCN5JQ",
  "master_node" : "Ai00Fu7NQEyoFEslcXehCQ",
  "blocks" : { },
  "nodes" : {
    "Ai00Fu7NQEyoFEslcXehCQ" : {
      "name" : "es-node",
      "ephemeral_id" : "DbPavH9jRGq050XA02nGLg",
      "transport_address" : "172.22.0.2:9300",
      "attributes" : {
        ........
      },
      "roles" : [
        ......
      ]
    }
  },
  "metadata" : {
      ........
  }
}
  • Cluster Health API: returns the health status of the cluster
curl -XGET 'localhost:9200/_cluster/health?pretty'
{
  "cluster_name" : "\"es-data-cluster\"",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 10,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
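The health API also accepts wait parameters, which is handy in scripts that need to block until the node is ready; adjust the timeout to taste.

curl -XGET 'localhost:9200/_cluster/health?wait_for_status=yellow&timeout=30s&pretty'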
  • Cluster or Node Stats
curl -XGET 'localhost:9200/_cluster/stats?human&pretty'

curl -XGET 'localhost:9200/_nodes/stats?pretty'

# Stats for a specific node (es-node is the node name configured above):

curl -XGET 'localhost:9200/_nodes/es-node/stats?pretty'

# Index Level Stats:

curl -XGET 'localhost:9200/_nodes/stats/indices?pretty'

# Retrieve data on plugins or ingest:

curl -XGET 'localhost:9200/_nodes/plugins'
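Finally, a quick end-to-end smoke test is to index a document and search for it. The index name test-index below is arbitrary and only used for this check.

# index a document
curl -XPOST 'localhost:9200/test-index/_doc?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"message": "hello elasticsearch"}'

# refresh so the document is visible to search, then query it
curl -XPOST 'localhost:9200/test-index/_refresh?pretty'
curl -XGET 'localhost:9200/test-index/_search?q=message:hello&pretty'

# clean up
curl -XDELETE 'localhost:9200/test-index?pretty'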