Installing and Configuring Skedler Reports as a Kibana Plugin with Elasticsearch and Kibana Using Docker Compose

Introduction

If you are using the ELK stack, you can now install Skedler as a Kibana plugin. The Skedler Reports plugin is available for Kibana versions 6.5.x through 7.6.x.

Let's take a look at the steps to install Skedler Reports as a Kibana plugin.

Prerequisites:

  1. A Linux machine
  2. Docker Installed
  3. Docker Compose Installed

Let’s get started!

Log in to your Linux machine, update the package repository, and install Docker and Docker Compose. Then follow the steps below.
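The exact installation steps depend on your distribution; on Ubuntu, one common approach (a sketch, not the only option) is to install both from the distribution packages and verify the versions:

ubuntu@guidanz:~$ sudo apt-get update
ubuntu@guidanz:~$ sudo apt-get install -y docker.io docker-compose
ubuntu@guidanz:~$ sudo systemctl enable --now docker
ubuntu@guidanz:~$ docker --version && docker-compose --version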

Setting Up Skedler Reports

Create a directory, say skedlerplugin:

ubuntu@guidanz:~$ mkdir skedlerplugin
ubuntu@guidanz:~$ cd skedlerplugin/
ubuntu@guidanz:~/skedlerplugin$ vim docker-compose.yml

Now, create a Docker Compose file for Skedler Reports. You will also need a Skedler Reports configuration file, reporting.yml, which is covered below. The Skedler Reports part of the Docker Compose file looks like this:

version: "2.4"

services:
  # Skedler Reports container
  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    command: /opt/skedler/bin/skedler
    depends_on:
      elasticsearch: { condition: service_healthy }
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:3000"]
    networks: ['stack']

volumes:
  reportdata:
    driver: local

networks: {stack: {}}

Next, create the Skedler Reports configuration file, reporting.yml, in the same directory:

ubuntu@guidanz:~/skedlerplugin$ vim reporting.yml

Download the reporting.yml file found here
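For example, you can download it straight into the project directory so it lines up with the ./reporting.yml volume mount in docker-compose.yml (the URL below is a placeholder; use the actual link from the Skedler documentation):

# placeholder URL - replace with the real reporting.yml link from the Skedler docs
ubuntu@guidanz:~/skedlerplugin$ curl -o reporting.yml https://example.com/skedler/reporting.yml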

Setting Up Elasticsearch

You also need to create an Elasticsearch configuration file, elasticsearch.yml. The Docker Compose definition for Elasticsearch is below; merge the trailing volumes and networks entries into the existing top-level sections of the file rather than repeating the keys.

  # Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.6.0"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      # ES_JVM_HEAP must be defined in a .env file next to docker-compose.yml, e.g. ES_JVM_HEAP=1024m
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

volumes:
  esdata:
    driver: local

networks: {stack: {}}

Create an Elasticsearch configuration file elasticsearch.yml and paste the config as below.

cluster.name: guidanz-stack-cluster
node.name: node-1
network.host: 0.0.0.0
path.data: /usr/share/elasticsearch/data
http.port: 9200
xpack.monitoring.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
http.max_header_size: 16kb

Setting Up Skedler Reports as Kibana Plugin

Create a directory inside skedlerplugin, say kibanaconfig:

ubuntu@guidanz:~/skedlerplugin$ mkdir kibanaconfig
ubuntu@guidanz:~/skedlerplugin$ cd kibanaconfig/
ubuntu@guidanz:~/skedlerplugin/kibanaconfig$ vim Dockerfile

Now, create a Dockerfile for Kibana with the following content:

FROM docker.elastic.co/kibana/kibana:7.6.0
RUN ./bin/kibana-plugin install https://www.skedler.com/plugins/skedler-reports-plugin/4.10.0/skedler-reports-kibana-plugin-7.6.0-4.10.0.zip

Then, copy the URL of the Skedler Reports plugin that matches your exact Kibana version from here.
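If you want to confirm that the plugin installs cleanly before wiring up the whole stack, you can build the Kibana image on its own from the project root; the Compose service defined below performs the same build automatically:

ubuntu@guidanz:~/skedlerplugin$ docker build -t kibanaconfig ./kibanaconfig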

You also need to add a Kibana service to the Docker Compose file, as below:

  # Kibana container
  kibana:
    container_name: kibana
    hostname: kibana
    build:
      context: ./kibanaconfig
      dockerfile: Dockerfile
    image: kibanaconfig
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      - ./kibanaconfig/kibana.yml:/usr/share/kibana/config/kibana.yml
      - ./kibanaconfig/skedler_reports.yml:/usr/share/kibana/plugins/skedler/config/skedler_reports.yml
    ports: ['5601:5601']
    networks: ['stack']
    depends_on:
      elasticsearch: { condition: service_healthy }
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/"]
      retries: 6

Create a Kibana configuration file, kibana.yml, inside the kibanaconfig folder and paste the following config:

ubuntu@guidanz:~/skedlerplugin$ cd kibanaconfig/
ubuntu@guidanz:~/skedlerplugin/kibanaconfig$ vim kibana.yml

# bind to all interfaces inside the container so the published port is reachable
server.host: "0.0.0.0"
server.port: 5601
server.name: "full-stack-example"
# Kibana 7.x uses elasticsearch.hosts (elasticsearch.url was removed in 7.0)
elasticsearch.hosts: ["http://elasticsearch:9200"]
xpack.monitoring.enabled: true

Create the Skedler Reports plugin configuration file, skedler_reports.yml, inside the kibanaconfig folder and paste the following config:

ubuntu@guidanz:~/skedlerplugin$ cd kibanaconfig/
ubuntu@guidanz:~/skedlerplugin/kibanaconfig$ vim skedler_reports.yml

#/*********** Skedler Access URL *************************/
skedler_reports_url: "http://ip_address:3000"

#/*********************** Basic Authentication *********************/
# If Skedler Reports uses any username and password
#skedler_username: user
#skedler_password: password

Configure the Skedler Reports server URL in the skedler_reports_url variable, replacing ip_address with the address of the host running Skedler. By default, the variable is set as shown above.

If the Skedler Reports server URL requires basic authentication (for example, when it sits behind an Nginx reverse proxy), uncomment skedler_username and skedler_password and set them to the basic authentication credentials, as shown below.
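For instance, if the proxy in front of Skedler used the hypothetical credentials user/password, the uncommented section would read:

skedler_username: user
skedler_password: password

Now run docker-compose: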

ubuntu@guidanz:~/skedlerplugin$ docker-compose up -d

Access Skedler Reports using the IP and port, and you will see the Skedler Reports UI.

| http://ip_address:3000

Access Elasticsearch using the IP and port, and you will see the Elasticsearch cluster information (a JSON response rather than a UI).

| http://ip_address:9200

Access Kibana using the IP and port, and you will see the Kibana UI.

| http://ip_address:5601
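If you prefer to verify from the command line, quick probes of each endpoint (assuming the default ports above) should succeed once the containers report healthy:

ubuntu@guidanz:~/skedlerplugin$ curl -s http://ip_address:9200/_cat/health
ubuntu@guidanz:~/skedlerplugin$ curl -s -o /dev/null -w "%{http_code}\n" http://ip_address:5601/
ubuntu@guidanz:~/skedlerplugin$ curl -s -o /dev/null -w "%{http_code}\n" http://ip_address:3000/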

So now the composite docker-compose.yml file will look like below. You can simply bring the whole stack up and down with docker-compose.

version: "2.4"

services:
  # Skedler Reports container
  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    command: /opt/skedler/bin/skedler
    depends_on:
      elasticsearch: { condition: service_healthy }
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:3000"]
    networks: ['stack']

  # Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.6.0"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      # ES_JVM_HEAP must be defined in a .env file next to docker-compose.yml, e.g. ES_JVM_HEAP=1024m
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

  # Kibana container
  kibana:
    container_name: kibana
    hostname: kibana
    build:
      context: ./kibanaconfig
      dockerfile: Dockerfile
    image: kibanaconfig
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      - ./kibanaconfig/kibana.yml:/usr/share/kibana/config/kibana.yml
      - ./kibanaconfig/skedler_reports.yml:/usr/share/kibana/plugins/skedler/config/skedler_reports.yml
    ports: ['5601:5601']
    networks: ['stack']
    depends_on:
      elasticsearch: { condition: service_healthy }
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/"]
      retries: 6

volumes:
  esdata:
    driver: local
  reportdata:
    driver: local

networks: {stack: {}}

You can simply bring the stack down and up again:

ubuntu@guidanz:~/skedlerplugin$ docker-compose down
ubuntu@guidanz:~/skedlerplugin$ docker-compose up -d

Summary

Docker Compose is a useful tool for managing container stacks for your clients: all related containers are managed with a single command.

Monitoring Servers and Docker Containers using Elasticsearch with Grafana

Introduction

Infrastructure monitoring is the basis of application performance management. The underlying system's availability and health must be maximized continually. To achieve this, one has to monitor system metrics such as CPU, memory, network, and disk, and any lag in response time must be addressed swiftly. Here we'll take a look at how to monitor servers (and even the Docker containers running inside them) using Grafana, Elasticsearch, Metricbeat, and Skedler Reports.

Core Components

  • Grafana: analytics and monitoring solution for every database
  • Elasticsearch: ingest and index logs
  • Metricbeat: lightweight shipper for metrics
  • Skedler Reports: automate actionable reports

Grafana — Analytics & monitoring solution for database

Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture.

Elasticsearch-Ingest and index logs

Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected.

Metricbeat — Lightweight shipper for metrics

Collect metrics from your systems and services. From CPU to memory, Redis to NGINX, and much more, Metricbeat is a lightweight way to send system and service statistics.

Skedler Reports — Automate actionable reports

Skedler offers the most powerful, flexible and easy-to-use data monitoring solution that companies use to exceed customer SLAs, achieve compliance, and empower internal IT and business leaders.

Export your Grafana Dashboard to PDF Report in Minutes with Skedler. Fully featured free trial.

Prerequisites:

  1. A Linux machine
  2. Docker Installed
  3. Docker Compose Installed

ubuntu@guidanz:~$ mkdir monitoring
ubuntu@guidanz:~$ cd monitoring/
ubuntu@guidanz:~/monitoring$ vim docker-compose.yml

Now, create a Docker Compose file for Elasticsearch. You also need to create an Elasticsearch configuration file, elasticsearch.yml. The Docker Compose file for Elasticsearch is below.

Note: We will keep extending the same Docker Compose file as we move ahead and install the other components.

version: "2.1"

services:
  # Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    # pin an explicit version; the Elastic registry does not reliably provide a :latest tag
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.6.0"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      # ES_JVM_HEAP must be defined in a .env file next to docker-compose.yml, e.g. ES_JVM_HEAP=1024m
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

volumes:
  esdata:
    driver: local

networks: {stack: {}}

Create an Elasticsearch configuration file elasticsearch.yml and paste the config as below.

cluster.name: guidanz-stack-cluster
node.name: node-1
network.host: 0.0.0.0
path.data: /usr/share/elasticsearch/data
http.port: 9200
xpack.monitoring.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
http.max_header_size: 16kb

Now run the docker-compose.

ubuntu@guidanz:~/monitoring$ docker-compose up -d

Access Elasticsearch using the IP and port, and you will see the Elasticsearch cluster information.

http://ip_address:9200
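For example, hitting the root endpoint with curl returns the cluster details as JSON (output abridged; values will differ on your machine):

ubuntu@guidanz:~/monitoring$ curl http://ip_address:9200
{
  "name" : "node-1",
  "cluster_name" : "guidanz-stack-cluster",
  "version" : { ... },
  "tagline" : "You Know, for Search"
}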

Now we will set up Metricbeat. It is one of the best components to use alongside Elasticsearch for capturing metrics from the server on which Elasticsearch is running. It captures hardware and kernel-related metrics such as system-level CPU usage, memory, file system, disk I/O, and network I/O statistics, as well as top-like statistics for every process running on your systems.

To install Metricbeat, append the following service to docker-compose.yml and create the metricbeat.yml file and modules.d directory as below.

  metricbeat:
    container_name: metricbeat
    hostname: metricbeat
    user: root # required to read the Docker socket
    # pin the same version as Elasticsearch; the Elastic registry does not reliably provide a :latest tag
    image: docker.elastic.co/beats/metricbeat:7.6.0
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      # Mount the Metricbeat configuration so users can make edits.
      - ./metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
      # Mount the modules.d directory into the container so module changes are dynamically loaded.
      - ./modules.d/:/usr/share/metricbeat/modules.d/
      # The mounts below let Metricbeat monitor the Docker host rather than the Metricbeat container.
      # They are used by the system module.
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      # Allows us to report on Docker from the host's information.
      - /var/run/docker.sock:/var/run/docker.sock
      # Mount the host filesystem so we can report on disk usage with the system module.
      - /:/hostfs:ro
    command: metricbeat -e -system.hostfs=/hostfs -strict.perms=false
    networks: ['stack']
    restart: on-failure
#    environment:
#      - "MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}"
    depends_on:
      elasticsearch: { condition: service_healthy }

Create the metricbeat.yml file as below:

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.period: 5s
  reload.enabled: true

processors:
- add_docker_metadata: ~

monitoring.enabled: true

setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["elasticsearch:9200"]

logging.to_files: false

# The setup section points Metricbeat at a Kibana instance for loading the bundled dashboards.
# This stack does not define a Kibana service, so either add one or remove these lines to avoid setup errors.
setup:
  kibana.host: "kibana:5601"
  dashboards.enabled: true

The Compose service mounts two volumes into the container: the Metricbeat configuration (metricbeat.yml) and the modules.d directory. Mounting modules.d lets you edit module configurations on the host and have them dynamically reloaded inside the container. Before running docker-compose, create the modules.d directory:

ubuntu@guidanz:~/monitoring$ mkdir modules.d

Create system.yml inside the modules.d folder as below:

- module: system
  metricsets:
    - core
    - cpu
    - load
    - diskio
    - filesystem
    - fsstat
    - memory
    - network
    - process
    - socket
  enabled: true
  period: 5s
  processes: ['.*']
  cpu_ticks: true
  process.cgroups.enabled: true
  process.include_top_n:
    enabled: true
    by_cpu: 20
    by_memory: 20

So now the composite docker-compose.yml file will look like below:

version: "2.1"

services:
  # Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.6.0"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

  # Metricbeat container
  metricbeat:
    container_name: metricbeat
    hostname: metricbeat
    user: root # required to read the Docker socket
    image: docker.elastic.co/beats/metricbeat:7.6.0
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      # Mount the Metricbeat configuration so users can make edits.
      - ./metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
      # Mount the modules.d directory so module changes are dynamically loaded.
      - ./modules.d/:/usr/share/metricbeat/modules.d/
      # The mounts below let Metricbeat monitor the Docker host rather than the Metricbeat container.
      # They are used by the system module.
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      # Allows us to report on Docker from the host's information.
      - /var/run/docker.sock:/var/run/docker.sock
      # Mount the host filesystem so we can report on disk usage with the system module.
      - /:/hostfs:ro
    command: metricbeat -e -system.hostfs=/hostfs -strict.perms=false
    networks: ['stack']
    restart: on-failure
#    environment:
#      - "MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}"
    depends_on:
      elasticsearch: { condition: service_healthy }

volumes:
  esdata:
    driver: local

networks: {stack: {}}

You can simply bring the stack down and up again:

ubuntu@guidanz:~/monitoring$ docker-compose down
ubuntu@guidanz:~/monitoring$ docker-compose up -d

Now check the indices in Elasticsearch, and you will see Metricbeat shipping data into a metricbeat index.
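A quick way to confirm this (assuming the default port mapping above) is to list the Metricbeat indices via the cat API:

ubuntu@guidanz:~/monitoring$ curl "http://ip_address:9200/_cat/indices/metricbeat-*?v"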

Next we will set up Grafana, using Elasticsearch as a data source, so we can build better dashboards for metrics visualization.

Append the following service to the Docker Compose file above and restart the stack.

  grafana:
    image: grafana/grafana
    user: "1000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=secure_pass
    volumes:
      # create ./grafana_db on the host first and make sure UID 1000 can write to it
      - ./grafana_db:/var/lib/grafana
    depends_on:
      - elasticsearch
    ports:
      - '3000:3000'
    # join the same network so Grafana can reach Elasticsearch by its service name
    networks: ['stack']

Access the Grafana UI on port 3000. The default user is admin, and the password is the one you set in the Compose file.
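From the UI you can add Elasticsearch as a data source under Configuration > Data Sources. If you prefer to provision it from a file instead, a minimal sketch (assuming the metricbeat-* indices created above, and mounted into the container under /etc/grafana/provisioning/datasources/) might look like this:

apiVersion: 1
datasources:
  - name: Elasticsearch-Metricbeat
    type: elasticsearch
    access: proxy
    url: http://elasticsearch:9200
    database: "metricbeat-*"
    jsonData:
      timeField: "@timestamp"
      # the expected value format depends on your Grafana version,
      # e.g. 70 on older releases or a version string such as "7.6.0" on newer ones
      esVersion: 70

Dashboards can then query the system metrics straight from the metricbeat-* indices.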

Finally, we will set up Skedler Reports, using Grafana as a data source. Skedler offers a simple, easy-to-add reporting and alerting solution for the Elastic Stack and Grafana. Please review the documentation to install Skedler.

Now, to set up Skedler Reports, append the Docker Compose file with the code below.

  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      # also add reportdata under the top-level volumes section of the Compose file
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    networks: ['stack']
    ports:
      # Grafana already occupies 3000, so Skedler is exposed on 3001 here;
      # make sure the port configured in reporting.yml matches
      - 3001:3001

Generate Report from Grafana in Minutes with Skedler. Fully featured free trial.

Simplifying Skedler Reports with Elasticsearch and Kibana Environment using Docker Compose

Docker Compose is a tool for defining and running multi-container (Skedler Reports, Elasticsearch, and Kibana) Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

In this section, I will describe how to create a containerized installation for Skedler Reports, Elasticsearch and Kibana.

Benefits:

  • You describe the multi-container setup in a clear way and bring up the containers with a single command.
  • You can define startup order and dependencies between containers.

Step-by-Step Instructions:

Step 1: Define services in a Compose file:

Create a file called docker-compose.yml in your project directory and paste the following:

docker-compose.yml:

version: "2.4"

services:
  # Skedler Reports container
  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    command: /opt/skedler/bin/skedler
    depends_on:
      elasticsearch: { condition: service_healthy }
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:3000"]
    networks: ['stack']

  # Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.1.1"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      # ES_JVM_HEAP and ES_MEM_LIMIT must be defined in a .env file, e.g. ES_JVM_HEAP=1024m and ES_MEM_LIMIT=2g
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      # create config/elasticsearch/elasticsearch.yml as well (see the elasticsearch.yml example earlier in this post)
      - ./config/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

  # Kibana container
  kibana:
    container_name: kibana
    hostname: kibana
    image: "docker.elastic.co/kibana/kibana:7.1.1"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      - ./config/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports: ['5601:5601']
    networks: ['stack']
    depends_on:
      elasticsearch: { condition: service_healthy }
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/"]
      retries: 6

volumes:
  esdata:
    driver: local
  reportdata:
    driver: local

networks: {stack: {}}

This Compose file defines three services: Skedler Reports, Elasticsearch, and Kibana.

Step 2: Basic configurations using reporting.yml and kibana.yml

Create a file called reporting.yml in your project directory.

Get the reporting.yml file found here.

Note: For more configuration options, kindly refer to the article on reporting.yml and ReportEngineOptions configuration.

Create a file called kibana.yml under config/kibana/ in your project directory (this matches the path mounted in the Compose file).

Note: For more configuration options, kindly refer to the article on kibana.yml.
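As a starting point, a minimal kibana.yml for this stack (assuming the service names defined in the Compose file above) might look like the following; adjust it to your environment:

server.host: "0.0.0.0"
server.name: "kibana"
elasticsearch.hosts: ["http://elasticsearch:9200"]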

Step 3: Build and run your app with docker-compose

From your project directory, start up your application by running:

sudo docker-compose up -d

Compose pulls the Skedler Reports, Elasticsearch, and Kibana images and starts the services you defined.

Skedler Reports is available at http://<hostIP>:3000, Elasticsearch at http://<hostIP>:9200, and Kibana at http://<hostIP>:5601.
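To confirm that all three containers are up and healthy, you can list them and follow the logs of any service, for example Skedler Reports:

sudo docker-compose ps
sudo docker-compose logs -f reports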

Summary

Docker Compose is a useful tool for managing container stacks for your clients: all related containers are managed with a single command.
