Monitoring Servers and Docker Containers using Elasticsearch with Grafana

Introduction

Infrastructure monitoring is the basis for application performance management. The underlying system’s availability and health must be maximized continually. To achieve this, one has to monitor system metrics like CPU, memory, network, and disk. Any lag in response time must be addressed swiftly. Here we’ll take a look at how to monitor servers (and even Docker containers running inside the server) using Grafana, Elasticsearch, Metricbeat, and Skedler Reports.

Core Components

Grafana — Analytics & monitoring solution for databases

Elasticsearch — Ingest and index logs

Metricbeat — Lightweight shipper for metrics

Skedler Reports — Automate actionable reports

Grafana — Analytics & monitoring solution for databases

Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture.

Elasticsearch — Ingest and index logs

Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected.

Metricbeat — Lightweight shipper for metrics

Collect metrics from your systems and services. From CPU to memory, Redis to NGINX, and much more, Metricbeat is a lightweight way to send system and service statistics.

Skedler Reports — Automate actionable reports

Skedler offers the most powerful, flexible and easy-to-use data monitoring solution that companies use to exceed customer SLAs, achieve compliance, and empower internal IT and business leaders.

Export your Grafana Dashboard to PDF Report in Minutes with Skedler. Fully featured free trial.

Prerequisites:

  1. A Linux machine
  2. Docker Installed
  3. Docker Compose Installed

Create a directory, say monitoring, and a docker-compose.yml inside it:

ubuntu@guidanz:~$ mkdir monitoring

ubuntu@guidanz:~$ cd monitoring/

ubuntu@guidanz:~/monitoring$ vim docker-compose.yml

Now, create a Docker Compose file for Elasticsearch. You also need to create an Elasticsearch configuration file, elasticsearch.yml. The Docker Compose file for Elasticsearch is below.

Note: We will keep extending the same compose file as we move ahead and install the other components.

version: "2.1"

services:
  #Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:latest"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      #ES_JVM_HEAP must be set, e.g. in a .env file (ES_JVM_HEAP=512m), and stay well under mem_limit
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

volumes:
  esdata:
    driver: local

#Define the stack network referenced by the service above
networks:
  stack:

Create an Elasticsearch configuration file, elasticsearch.yml, and paste the config below.

cluster.name: guidanz-stack-cluster
node.name: node-1
network.host: 0.0.0.0
path.data: /usr/share/elasticsearch/data
http.port: 9200
xpack.monitoring.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
http.max_header_size: 16kb

Now run the docker-compose.

ubuntu@guidanz:~/monitoring$ docker-compose up -d

Access Elasticsearch using the IP and port, and you will see the default Elasticsearch JSON response.

http://ip_address:9200
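You can also sanity-check the cluster from the command line; this is the same health endpoint the compose healthcheck uses:

ubuntu@guidanz:~/monitoring$ curl -s http://localhost:9200/_cat/health

A healthy single-node setup reports the guidanz-stack-cluster with a green or yellow status.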

Now we will set up Metricbeat. It is one of the best components to use alongside Elasticsearch to capture metrics from the server where Elasticsearch is running. It captures hardware- and kernel-related metrics like system-level CPU usage, memory, file system, disk I/O, and network I/O statistics, as well as top-like statistics for every process running on your systems.

To install Metricbeat, simply append the docker-compose.yml file as below, and create the metricbeat.yml file and the modules.d directory.

  metricbeat:
    container_name: metricbeat
    hostname: metricbeat
    user: root #To read the Docker socket
    image: docker.elastic.co/beats/metricbeat:latest
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      #Mount the Metricbeat configuration so users can make edits.
      - ./metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
      #Mount the modules.d directory into the container so module changes are dynamically loaded.
      - ./modules.d/:/usr/share/metricbeat/modules.d/
      #The mounts below enable Metricbeat to monitor the Docker host rather than the Metricbeat container. They are used by the system module.
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      #Allows us to report on Docker from the host's information.
      - /var/run/docker.sock:/var/run/docker.sock
      #We mount the host filesystem so we can report on disk usage with the system module.
      - /:/hostfs:ro
    command: metricbeat -e -system.hostfs=/hostfs -strict.perms=false
    networks: ['stack']
    restart: on-failure
#    environment:
#      - "MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}"
    depends_on:
      elasticsearch: { condition: service_healthy }

Create the metricbeat.yml as below:

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.period: 5s
  reload.enabled: true

processors:
- add_docker_metadata: ~

monitoring.enabled: true

setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["elasticsearch:9200"]

logging.to_files: false

#The setup section loads the sample Kibana dashboards and assumes a Kibana service named kibana.
#If you are not running Kibana in this stack, set dashboards.enabled to false.
setup:
  kibana.host: "kibana:5601"
  dashboards.enabled: true

The compose file maps two volumes into the container: the first is the Metricbeat configuration, and the second mounts the modules.d directory into the container so that users can change the modules and have them dynamically reloaded. Now create the modules directory:

ubuntu@guidanz:~/monitoring$ mkdir modules.d

Create system.yml inside the modules.d folder as below:

- module: system
  metricsets:
    - core
    - cpu
    - load
    - diskio
    - filesystem
    - fsstat
    - memory
    - network
    - process
    - socket
  enabled: true
  period: 5s
  processes: ['.*']
  cpu_ticks: true
  process.cgroups.enabled: true
  process.include_top_n:
    enabled: true
    by_cpu: 20
    by_memory: 20

So now the composite docker-compose.yml will look like below:

version: "2.1"

services:
  #Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:latest"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

  metricbeat:
    container_name: metricbeat
    hostname: metricbeat
    user: root #To read the Docker socket
    image: docker.elastic.co/beats/metricbeat:latest
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      #Mount the Metricbeat configuration so users can make edits.
      - ./metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
      #Mount the modules.d directory into the container so module changes are dynamically loaded.
      - ./modules.d/:/usr/share/metricbeat/modules.d/
      #The mounts below enable Metricbeat to monitor the Docker host rather than the Metricbeat container. They are used by the system module.
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      #Allows us to report on Docker from the host's information.
      - /var/run/docker.sock:/var/run/docker.sock
      #We mount the host filesystem so we can report on disk usage with the system module.
      - /:/hostfs:ro
    command: metricbeat -e -system.hostfs=/hostfs -strict.perms=false
    networks: ['stack']
    restart: on-failure
#    environment:
#      - "MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}"
    depends_on:
      elasticsearch: { condition: service_healthy }

volumes:
  esdata:
    driver: local

networks:
  stack:

You can simply bring the stack down and up again:

ubuntu@guidanz:~/monitoring$ docker-compose down 

ubuntu@guidanz:~/monitoring$ docker-compose up -d

Now check Elasticsearch; you will see Metricbeat data being indexed.
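To confirm this from the command line, list the Metricbeat indices; the exact index name depends on the Metricbeat version, hence the wildcard:

ubuntu@guidanz:~/monitoring$ curl -s 'http://localhost:9200/_cat/indices/metricbeat-*?v'

You should see at least one metricbeat-* index whose document count grows every few seconds.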

Now eventually we will set up Grafana, where we will use Elasticsearch as a data source. With Grafana we get better dashboards for metrics visualization.

Append the code below to the above docker compose file and restart.

  grafana:
    image: grafana/grafana
    user: "1000" #Runs as UID 1000; make sure ./grafana_db exists and is owned by this UID
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=secure_pass
    volumes:
      - ./grafana_db:/var/lib/grafana
    depends_on:
      - elasticsearch
    #Join the stack network so Grafana can resolve the elasticsearch hostname
    networks: ['stack']
    ports:
      - '3000:3000'

Access the Grafana UI on port 3000. The default user is admin, and the password is the one you set in the compose file.
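If you prefer configuration as code over clicking through the UI, Grafana can also pick up the data source from a provisioning file. Below is a minimal sketch, assuming the default Metricbeat index pattern and time field; save it as, say, ./provisioning/datasources/elasticsearch.yml and add - ./provisioning/:/etc/grafana/provisioning/ to the grafana volumes:

apiVersion: 1
datasources:
  #Hypothetical provisioning file; adjust the index pattern to your Metricbeat version
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://elasticsearch:9200
    database: "metricbeat-*"
    jsonData:
      timeField: "@timestamp"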

Now eventually we will set up Skedler Reports, where we will use Grafana as a data source. Skedler offers a simple, easy-to-add reporting and alerting solution for Elastic Stack and Grafana. Please review the documentation to install Skedler.

To do so, append the docker compose file with the code below.

  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      #Remember to declare the reportdata volume under the top-level volumes key
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    ports:
      - 3001:3001

Generate Report from Grafana in Minutes with Skedler. Fully featured free trial.

Monitoring Servers and Docker Containers using Prometheus with Grafana

Introduction

Infrastructure monitoring is the basis for application performance management. The underlying system’s availability and health must be maximized continually. To achieve this, one has to monitor system metrics like CPU, memory, network, and disk. Any lag in response time must be addressed swiftly. Here we’ll take a look at how to monitor servers (and even Docker containers running inside the server) using Grafana, Prometheus, Node Exporter, cAdvisor, and Skedler Reports.

Core Components

Grafana — Database analytics & monitoring solution

Prometheus — Event monitoring and alerting

Node Exporter — Monitoring Linux host metrics

WMI Exporter — Monitoring Windows host metrics

cAdvisor — Monitoring metrics for the running containers

Skedler Reports — Automating actionable reports

Grafana — Database analytics & monitoring solution

Grafana equips users to query, visualize, and monitor metrics, no matter where the underlying data is stored. With Grafana, one can also set alerts for metrics that require attention, apart from creating, exploring, and sharing dashboards with their team and fostering a data-driven culture.

Prometheus — Event monitoring and alerting

Prometheus is an open-source system monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open source project and maintained independently of any company.

Node Exporter — Monitoring Linux host metrics

Node Exporter is a Prometheus exporter for hardware and OS metrics with pluggable metric collectors. It allows measuring various machine resources such as memory, disk, and CPU utilization.

WMI Exporter — Monitoring Windows host metrics

WMI Exporter is a Prometheus exporter for Windows machines, using WMI (Windows Management Instrumentation).

cAdvisor — Monitoring metrics for the running containers

cAdvisor stands for Container Advisor and is used to aggregate and process all the metrics for the running containers.

Skedler Reports — Automating actionable reports

Skedler offers the most powerful, flexible and easy-to-use data monitoring solution that companies use to exceed customer SLAs, achieve compliance, and empower internal IT and business leaders.

Export your Grafana Dashboard to PDF Report in Minutes with Skedler. Fully featured free trial.

Prerequisites:

  1. A Linux machine
  2. Docker Installed
  3. Docker Compose Installed

So let’s get started.

Log in to your Linux machine, update the package repository, and install Docker and Docker Compose.
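On Ubuntu, for example, the distribution packages are the quickest route (a sketch; Docker’s own docker-ce repository is an equally valid alternative):

ubuntu@guidanz:~$ sudo apt-get update

ubuntu@guidanz:~$ sudo apt-get install -y docker.io docker-compose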

Create a directory, say monitoring:

ubuntu@guidanz:~$ mkdir monitoring

ubuntu@guidanz:~$ cd monitoring/

ubuntu@guidanz:~/monitoring$ vim docker-compose.yml

Now, create a Docker Compose file for Prometheus. You also need to create a Prometheus configuration file, prometheus.yml. The Docker Compose file for Prometheus is below.

Note: We will keep extending the same compose file as we move forward to install other components.

version: '3'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      #Prometheus keeps its local database under /prometheus inside the container
      - ./prometheus_db:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--web.route-prefix=/'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'
    restart: unless-stopped
    ports:
      - '9090:9090'
    networks:
      - monitor-net

#Define the network referenced by the service above
networks:
  monitor-net:

Create a Prometheus configuration file and paste the config as below.

global:
  scrape_interval: 5s
  external_labels:
    monitor: 'guidanz-monitor'

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['monitoring.guidanz.com:9090'] ## IP address of the localhost
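Before bringing the stack up, you can validate the configuration with promtool, which ships inside the prom/prometheus image (a sketch; adjust the path if your file lives elsewhere):

ubuntu@guidanz:~/monitoring$ docker run --rm --entrypoint promtool -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus check config /etc/prometheus/prometheus.yml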

The compose file maps two volumes into the container: one is the Prometheus configuration, and the other (prometheus_db) stores the Prometheus database locally. Now run the docker-compose. (If Prometheus exits with a permissions error on the data directory, make ./prometheus_db writable by the container, e.g. chmod 777 prometheus_db.)

ubuntu@guidanz:~/monitoring$ mkdir prometheus_db

ubuntu@guidanz:~/monitoring$ docker-compose up -d

Access Prometheus using the IP and port, and you will see the Prometheus UI: http://ip_address:9090
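Prometheus also exposes an HTTP API, so you can check scraping without the UI; the built-in up metric reports 1 for every target Prometheus can reach:

ubuntu@guidanz:~/monitoring$ curl -s 'http://ip_address:9090/api/v1/query?query=up'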

Now we will set up Node Exporter. It is one of the best components to use alongside Prometheus to capture metrics from the server where Prometheus is running. It captures hardware- and kernel-related metrics like CPU, memory, disk, disk read/write, etc.

To install Node Exporter, simply append the docker-compose.yml and prometheus.yml files as below.

  node-exporter:
    image: prom/node-exporter
    ports:
      - '9100:9100'

Append the prometheus.yml as below:

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['monitoring.guidanz.com:9100']

So now the composite docker-compose.yml will look like below:

version: '3'

services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus_db:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'

  node-exporter:
    image: prom/node-exporter
    ports:
      - '9100:9100'

And prometheus.yml will look like below:

global:
  scrape_interval: 5s
  external_labels:
    monitor: 'guidanz-monitor'

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['monitoring.guidanz.com:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['monitoring.guidanz.com:9100']

Now create the Node Exporter container and restart the Prometheus container using the commands below:

ubuntu@guidanz:~/monitoring$ docker-compose start node-exporter

ubuntu@guidanz:~/monitoring$ docker-compose restart prometheus

Or, one can simply bring the whole stack down and up:

ubuntu@guidanz:~/monitoring$ docker-compose down

ubuntu@guidanz:~/monitoring$ docker-compose up -d

Take a look at the targets in Prometheus. You will notice node-exporter as a target as well.
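With node-exporter data arriving, you can already try a few PromQL expressions in the Prometheus UI. These assume the standard metric names exposed by recent node-exporter releases:

# CPU usage in percent, per instance, averaged over the last 5 minutes
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Available memory as a percentage of total memory
100 * node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes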

Now set up cAdvisor; to do so, append the docker compose file with the code below.

  cadvisor:
    #Note: newer cAdvisor releases are published as gcr.io/cadvisor/cadvisor
    image: google/cadvisor:latest
    ports:
      - '8080:8080'
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro

Also, append the prometheus.yml with a bit of YAML. We are actually adding the cAdvisor job to the Prometheus configuration.

  - job_name: 'cAdvisor'
    static_configs:
      - targets: ['monitoring.guidanz.com:8080']

Access cAdvisor from the URL: http://IP_Address:8080/docker/
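Once Prometheus starts scraping cAdvisor, per-container series become queryable as well. Two illustrative PromQL expressions using cAdvisor’s standard metric names (the name label filter drops the aggregate cgroup series):

# CPU usage rate per container over the last 5 minutes
sum by (name) (rate(container_cpu_usage_seconds_total{name=~".+"}[5m]))

# Current memory working set per container
container_memory_working_set_bytes{name=~".+"}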

Now eventually we will set up Grafana, where we will use Prometheus as a data source. With Grafana we get better dashboards for metrics visualization.

Append the code below to the above docker compose file and restart.

  grafana:
    image: grafana/grafana
    user: "1000" #Runs as UID 1000; make sure ./grafana_db exists and is owned by this UID
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=secure_pass
    volumes:
      - ./grafana_db:/var/lib/grafana
    depends_on:
      - prometheus
    #If you kept the monitor-net network on the prometheus service, add networks: ['monitor-net'] here as well
    ports:
      - '3000:3000'

Access the Grafana UI on port 3000. The default user is admin, and the password is the one you set in the compose file.
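If you would rather script the data source than add it through the UI, Grafana’s HTTP API can create it; a minimal sketch using the admin password from the compose file:

ubuntu@guidanz:~/monitoring$ curl -s -u admin:secure_pass -H 'Content-Type: application/json' -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy","isDefault":true}' http://localhost:3000/api/datasources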

Now eventually we will set up Skedler Reports, where we will use Grafana as a data source. Skedler offers a simple, easy-to-add reporting and alerting solution for Elastic Stack and Grafana. Please review the documentation to install Skedler Reports.

To do so, append the docker compose file with the code below.

  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      #Remember to declare the reportdata volume under the top-level volumes key
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    ports:
      - 3001:3001

Schedule and Automate Your Grafana Reports Free with Skedler. Fully featured free trial.

The Best Tools for Exporting data from Grafana

As a tool for visualizing data from time series databases, logging and document databases, SQL databases, and cloud services, Grafana is a perfect choice. Its UI allows you to create dashboards and visualizations in minutes and analyze your data with them.

Despite having tons of visualizations, the open-source version of Grafana does not have advanced reporting capability. Automating the export of data into CSV, Excel, or PDF requires additional plugins.

We wrote an honest and unbiased review of the following tools that are available for exporting data directly from Grafana.

  1. Grafana reporter
  2. Grafana Data Exporter
  3. Skedler Reports

1. Grafana Reporter

https://github.com/IzakMarais/reporter

A simple HTTP service that generates PDF reports from Grafana dashboards.

Runtime requirements

  • pdflatex installed and available in PATH.
  • a running Grafana instance that it can connect to. If you are using an old Grafana (version < v5.0), see the Deprecated Endpoint section of the project README.

Build requirements:

  • Golang

Pros of Grafana Reporter

  • Simple, embeddable tool for Grafana
  • Uses simple curl commands and arguments (see the example below)
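For example, once the reporter service is running (it listens on port 8686 by default), a single curl call renders a dashboard to PDF. The dashboard UID and API token below are placeholders; check the project README for the endpoint matching your Grafana version:

curl -o report.pdf "http://localhost:8686/api/v5/report/<dashboard-uid>?apitoken=<grafana-api-token>"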

Cons of Grafana Reporter

  • You need pdflatex and Golang, so you must set up a Go build environment on your system
  • It is difficult for non-technical users

Export your Grafana Dashboard to PDF Report in Minutes with Skedler. Fully featured free trial.


2. Grafana Data Exporter

https://github.com/CorpGlory/grafana-data-exporter

A server for fetching data from Grafana data sources. You would use it:

  • To export your metrics on a big range
  • To migrate from one data source to another

Runtime requirements

  • Linux
  • Docker

Installation

  • grafana-data-exporter for fetching data from Grafana data sources.
  • Simple-JSON-data source for progress tracking.
  • grafana-data-exporter-panel for exporting metrics from the dashboard.
  • Import the exporter template dashboard at http://<YOUR_GRAFANA_URL>/dashboard/import.

Pros of Grafana Data Exporter

  • Faster writing of documents
  • Added as a Grafana panel

Cons of Grafana Data Exporter

  • To automate the exporting of data on a periodic basis, you need to write your own cron job
  • Grafana Data Exporter installation is a bit tricky for non-technical users

3. Skedler Reports

https://www.skedler.com/

Disclosure: Skedler Reports is one of our products.

Skedler offers a simple and easy-to-add reporting and alerting solution for Elastic Stack and Grafana. There is also an easy-to-install plugin for Kibana that works with Elasticsearch data, called Skedler Reports as Kibana Plugin.

Pros of Skedler Reports

  • Simple to install, configure, and use
  • Send HTML, PDF, XLS, CSV reports on-demand or periodically via email or #slack
  • Report setup takes less than 5 minutes
  • Easy to use, no coding required

Cons of Skedler Reports

  • It requires a paid license, which includes the software and enterprise support
  • Installation is difficult for users who are not fully familiar with Elastic Stack or Grafana

Schedule & Automate Your Grafana Reports Free with Skedler. Fully featured free trial.

What tools do you use? 

Do you have to regularly export data from Grafana for external analysis or reporting purposes?  Do you use any other third-party tools? Email us about the tool at hello at skedler.com.

Skedler Update: Version 3.7 Released

Skedler v3.7 Updates

We have some exciting news for you, Skedler v3.7 is now available with new features.

What’s New in Skedler Reports v3.7

  • Support for Elasticsearch versions 5.x to 6.3.x and Kibana versions 5.x to 6.3.x
  • Support for Search Guard 5.0.x to 6.2.x
  • Reports retain the same order of visualizations as the Kibana/Grafana dashboard
  • REST API support
  • Ability to test email/Slack notifications with the configured email/Slack settings

What’s New in Skedler Alerts v3.7

  • Elastic 6.3 Support

Download the latest version of Skedler from the Free Trial page: Download Skedler

For technical help, visit our Support Page for more information: Skedler Support 
