The Ultimate Guide to Automating Daily Grafana Reports

Businesses use Grafana not only to monitor traditional IT infrastructure data but also to report the operational data that various business teams rely on to gauge the day-to-day health of the business. Grafana's dashboards deliver relevant insights about business data in a beautiful, easy-to-read format.

With Grafana reporting, you can organize metrics and trends from dashboards and share them with your stakeholders or with your internal teams, who need them for making data-driven operational decisions in their everyday work.

Different teams use different names for metrics, so it is often necessary to handcraft the metrics for each project, which means numerous steps are involved if you want to see all of your metrics in one place. Creating multiple reports and sharing them regularly with your team or stakeholders is a tedious, time-consuming process.

There are better things to do than spend hours in front of the computer going through multiple data sources, looking up relevant data, combining it, updating it, and then distributing the report to stakeholders. The major reason companies choose to automate reports is to save time.

There are mainly two options when it comes to automating daily Grafana reporting:

  • Grafana's built-in reporting, available in Grafana Enterprise and Grafana Cloud Pro/Advanced
  • A dedicated reporting tool such as Skedler Reports

In this article, we will deep dive into both of these options and go through their pros and cons.

All About Grafana Reports

Can Grafana generate reports? 

Yes, but not in the open-source version. You can generate reports only with Grafana Cloud Pro or Advanced, or with Grafana Enterprise.

How do I export a Grafana report? How often is a report generated in Grafana? 

Grafana allows you to generate PDF reports from your dashboards and email them to stakeholders on a schedule. Scheduled reports can be sent once, or on an hourly, daily, weekly, or monthly basis, or even at custom intervals. You can configure company-wide report settings in the Settings tab on the reporting page. Grafana also lets you customize your reports.

How do I get a Grafana Report?

Three major components of Grafana will help you implement and understand metrics: panels, dashboards, and reports.

The panel is the first and most important component in Grafana that represents data visualization. With a panel, you can create a graph/plot that visualizes the given metric or several metrics.

The dashboard is just a collection of different panels. Having a collection of different panels on a single dashboard will help you analyze related metrics together and understand how the business is doing.

You can then generate reports from these dashboards and share them with your stakeholders in PDF format. This facility is available only in Grafana Cloud Pro and Advanced and in Grafana Enterprise.

You can configure template variables for the dashboard on the report page for each report, but this is only available in Grafana Enterprise version 7.5+. You can also include dynamic dashboards, with panels or rows set to repeat by a variable, in reports; this is available only from Grafana Enterprise version 8.0+.

You can also attach a CSV file to the PDF report email for the selected dashboard. This feature is available in Grafana Enterprise version 8.0+, provided you have installed an image rendering plugin.

Prerequisites

  • To send reports using Grafana, you must first configure SMTP in Grafana (a configuration sketch follows this list)
  • You must also install an image rendering plugin
  • By default, reports use the time range of the dashboard. You can change the time range of the report by saving a modified time range to the dashboard or by setting a time range via the Time range field in the report form.
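As a minimal sketch of the SMTP prerequisite (assuming a standard grafana.ini; the host, credentials, and addresses below are placeholders to replace with your own):

# grafana.ini -- enable SMTP so Grafana can email scheduled reports
[smtp]
enabled = true
host = smtp.example.com:587
user = reports@example.com
password = your-smtp-password
from_address = grafana@example.com
from_name = Grafana Reports

The image rendering plugin can then be installed with grafana-cli (restart Grafana afterwards):

grafana-cli plugins install grafana-image-renderer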

If you want to skip the Grafana Enterprise features and go straight to Skedler reports, click here

Can Grafana send Email Reports?

Available only in Grafana Enterprise version 7.2+, email reports let you send customized reports to all your stakeholders. You can display your company logo on the report PDF, brand the report email with your logo in the header, and add custom URL links to the header and footer.

Limitations of Grafana Reporting

While Grafana Enterprise offers some great benefits, it has a few limitations:

  • The report layouts are limited to 4 types: Simple Portrait, Simple Landscape, Grid Portrait, and Grid Landscape.
  • Notification is limited to the email (SMTP) channel
  • Limited report settings: the customization options for your reports are pretty basic
  • Grafana variables are supported only in Enterprise version 7.5+, which allows you to override the dashboard variables with custom values.
  • It does not support burst reports, i.e., you can't generate multiple personalized reports for recipients from a single dashboard
  • Scheduling capability is basic. You can't schedule reports only on workdays, or yearly reports, etc.
  • Error handling is missing. If a report fails to generate for some reason, there is no notification or error message to help you understand the root cause.
  • Last, but not least, the cost of a Grafana Enterprise Stack license to get reporting is at least $3,500 per month, which is quite steep.

Check out these 3 best Grafana reporting tools here

Create, Generate, and Schedule a Grafana Dashboard Report using Skedler

Skedler helps you automate generating professional-quality reports and distributing them to your stakeholders. You can use Skedler to generate multiple reports, customize each report with all the metrics you want to monitor and track, and schedule these reports to be sent to stakeholders or your team as required.

Let’s start from the beginning. You can download Skedler from here.  Skedler is easy to install and configure in VM or containers.  You can find the step by step instructions to install Skedler here.  

Next, we will take you through the process of configuring and generating reports with Skedler. The Grafana dashboard forms the basis of the scheduled report. Skedler will automatically discover all the existing Grafana dashboards for you. You just need to select one dashboard for your report.

  • Fill in your Grafana URL and authentication data to connect Skedler to your Grafana.
  • Create a report by clicking the top-right button on the main dashboard and select Visual Report to create PDF reports.
  • Drag and drop charts onto the layout. Each chart pops up in what we call "True Size", the size of the chart in the Grafana dashboard. You can easily move and resize charts as well.
  • Schedule your report

You can schedule your reports to be generated daily, weekly, or monthly, or even on a custom time frame as per your requirement. You can also choose from a variety of report formats: PDF, CSV, PNG, or HTML.

  • The next step is to distribute your reports to your stakeholders. Choose whether you want to create an email notification channel or a Slack notification channel.

Fill in the respective information for the email or Slack channel, and click Save in the bottom-right corner. You can customize who receives the report, as well as the subject and message that accompany the report.

The Advantages of Skedler Reports

  • Skedler is quick to install and configure
  • It works with all versions of Grafana
  • Skedler supports multiple formats: PDF, Excel/CSV, HTML, and even PNG reports
  • Layouts can be customized with rich templates to drive the value of data.
  • You can add your branding to all the reports you generate
  • You can send personalized reports to different recipients from a single dashboard
  • Flexibility in scheduling and distribution
  • Multiple deployment options such as docker are supported
  • It works with Grafana On-Prem and Grafana Cloud as well
  • It has role-based access control (RBAC), so you can control who has access to what
  • Robust error handling and notification mechanisms
  • Subscription starts at $149 per month for 250 reports when paid annually, with 500/1,000/unlimited report options also available. This allows you to start small and pay as you grow.

To know more about Grafana reporting using Skedler, check this reporting guide

Conclusion

In this article, we have demonstrated how Grafana can be used for visualizing and tracking your organization's metrics. Grafana allows you to generate PDF reports from your dashboards and email them to stakeholders on a schedule, and it lets you customize those reports. But Grafana supports reporting only in its enterprise versions, and it is not very affordable.

Skedler, on the other hand, with its reporting flexibility and features like burst reporting, multiple distribution channels, and error handling, helps you create customized reports within a few clicks. It is easy, efficient, and will not burn a hole in your pocket!

Try our new and improved Skedler for custom generated Grafana reports for free!

Download Skedler

The 3 Best Grafana Reporting Tools Available in 2021

Grafana Reporting Breaks Down Information Silo

Information silos hurt collaboration and create inefficiencies within organizations. While Grafana dashboards provide real-time information to users such as analysts and engineers, they create a barrier for non-technical users, operations teams, and stakeholders by requiring them to log in and sit in front of terminals. To truly democratize the information captured in your Grafana platform, you need to break down the information silo created by Grafana dashboards.

Grafana Tools

Grafana Reporting is your hammer for breaking down the information silos created by Grafana. With Grafana Reporting, you can unlock data that is locked up in dashboards and make it available to a larger audience, including operations teams, field personnel, stakeholders, and customers who do not have access to the Grafana platform or might not be inclined to sit in front of Grafana dashboards. With Grafana Reporting, you can pull key metrics and trends out of dashboards and distribute them to stakeholders who need them for making data-driven operational decisions in their everyday work.

Let’s dive a little deeper to learn more about Grafana Reporting and the 3 best Grafana Reporting tools.

Table of Contents

  • What is Grafana Reporting?
  • What is the purpose of Grafana Reporting?
  • What makes an effective Grafana Reporting Tool?
  • Can Grafana generate reports?
  • Are there any open source or free tools for Grafana Reporting?
  • What are the typical costs of Grafana Reporting Tools?
  • A deep dive into the 3 Best Grafana Reporting Tools available in 2021

What is Grafana Reporting?

Grafana Reporting is the process of creating and automating the generation and distribution of PDF, XLS, CSV, HTML Reports from Grafana dashboards.   Reports are created by reusing the existing visualizations and data queries in Grafana dashboards without having to recreate them from scratch.  Reusing Grafana visualizations saves time and reduces the effort to create reports.  You can schedule report generation at a needed frequency such as daily/weekly/monthly.  You can automate the distribution of reports to stakeholders via notification channels such as email or slack.


Grafana Reporting is an excellent type of Information Radiator, especially for remote working teams and customers.  Similar to a Big Visible Chart that is used in office settings, Grafana Reporting can be used to radiate information to distributed team members via email, slack, etc. Grafana Reporting increases collaboration, transparency, and accountability while enhancing efficiency and visibility to operational metrics and trends.

What is the Purpose of Grafana Reporting?

Grafana Reporting is typically implemented by organizations that have set up dashboards in Grafana and are now looking to distribute the dashboard information to managers, customers, or operations teams who do not have access to the dashboards or are often too busy to sit in front of them.


Grafana Reporting has become a vital tool, since a vast majority of the users in any organization do not have continuous access to Grafana dashboards. It is therefore used by businesses of all sizes to distribute Grafana information to stakeholders, both internally within their companies and externally to their clients. By delivering the right information at the right time to the right users, Grafana Reporting helps recipients make informed decisions and better manage their business operations.

Some popular use cases for Grafana Reporting are:

  • Infrastructure Operations: Reports on infrastructure availability and performance monitoring metrics are automatically generated and distributed to operations teams and managers.
  • Network Operations: Companies such as Enghouse use Grafana Reporting to distribute daily and weekly hotspot reports to field service engineers and managers so that they can prioritize and resolve the high-value network issues.
  • Factory Production Operations: Many companies in industries such as steel, lumber, semiconductor use Grafana Reporting for operational monitoring and reporting. For example, BidGroup, a leader in the lumber industry, creates daily and weekly factory production scheduling reports for its production managers.

What makes an Effective Grafana Reporting Tool?

A multitude of features does not necessarily make a great tool, but an effective Grafana Reporting tool must address a set of core requirements.  We have compiled these requirements based on years of our team’s experience working with Grafana users.


The core requirements for a Grafana Reporting Tool are:

Functional Requirements

  • PDF Reports: Ability to export dashboards into PDF reports and automatically distribute them to users via notification channels such as email/Slack.
  • Custom Layouts: Customize PDF report layouts so that users can easily see and understand data.
  • Templates: For customers and key stakeholders, branded report templates are critical to driving the value of your data and service.
  • Excel/CSV Reports: For data analysts and Microsoft Excel/Tableau/Power BI users, the ability to export dashboards to CSV/Excel formats and automatically distribute them via notification channels.
  • Grafana Variables: The ability to use Grafana variables to generate/burst reports from a single Grafana dashboard is useful for sending personalized reports to various teams.
  • Flexibility: The tool should provide the ability to organize data in a layout that is easily understood by the recipients. In addition, you need the ability to schedule/automate distribution across multiple notification channels on a flexible schedule, such as the 1st day of the month, every Monday, etc.
  • SLA & Error Notifications: If you are sending reports to your managers or customers, it is important to maintain service level agreements (SLAs) and be informed if something goes wrong with a specific report.

Technical Requirements

  • Scalability:  The Grafana Reporting Tool must be robust enough to generate reports from large Grafana dashboards with several visualizations.  It should not choke due to the volume and the heaviness of the dashboard.
  • Support & Upgrades: The tool should be maintained, enhanced and continuously supported so that it can keep pace with the Grafana updates.  Teams should be able to upgrade to the latest version of Grafana as needed and continue using Reporting.
  • Performance: Ability to export data quickly and efficiently.

In addition to the above requirements, there are a few other factors. Cost/budget is often a critical factor in choosing the Grafana Reporting tool that is appropriate for your needs.  Organizations that use multiple dashboard tools such as Kibana or multiple instances of Grafana/Kibana might also need a unified reporting tool.   Some users might also prefer to generate PDF reports from within the Grafana dashboard, however, it is often not a critical requirement since the recipients of Grafana reports are often not the users of Grafana dashboards.

Now that we have discussed the key requirements for a Grafana Reporting tool, the next question is:

Can Grafana Generate Reports?

The answer is no, not with open-source Grafana. Grafana is a dashboard tool, and the open-source version doesn't include reporting capability.


Only the proprietary version, Grafana Enterprise Stack, which costs a minimum of $3,500/mo, has basic reporting capabilities. Now let's look at your options for Grafana Reporting.

The 3 Best Tools for Grafana Reporting in 2021

If you are an operations engineer/manager who has set up a Grafana instance to monitor key operational metrics, chances are high that within a few days your users will badger your team for reports to be delivered to their email inbox or slack.  How do you address their needs?

Luckily, you have three(3) choices to get them off your back!

  • Grafana Enterprise Stack
  • Skedler Reports
  • Reporter

Before we dive deeper into these three tools, let’s address the $1,188 question on your mind.

Are there any free or open source Grafana Reporting tools?

Unfortunately, there are no actively maintained free or open source Grafana Reporting tools. The main issue is that any open source reporting project needs to keep pace with the rapid and frequent updates to Grafana. When you upgrade to the latest version of Grafana, your reporting setup will break if the tool fails to keep pace with the new releases.


A survey of the available tools showed that the latest update to Reporter, an open source tool, was in November 2019 (16 months ago at the time of this publication), when Grafana 6.5 was released. Since then, there have been 7 new releases of Grafana, with the current version being 7.4. Therefore, an open source tool is not a viable option if you need reliable reporting and want to keep pace with the latest capabilities of Grafana.

What are the typical costs of Grafana Reporting Tools?

The typical cost of a Grafana Reporting tool ranges from $99/mo to $3,500/mo. Obviously, that's a very wide range, so let's peel back the layers a bit. There are two commercially available solutions for Grafana Reporting: Grafana Enterprise Stack and Skedler Reports. Grafana Enterprise Stack starts at $3,500 per month and includes other features beyond just reporting. Skedler Reports starts at $99 per month and is a pure-play enterprise reporting automation tool.

A Deep Dive into the 3 Grafana Reporting Tools

Grafana Enterprise Stack

Grafana Enterprise Stack is a proprietary offering from Grafana that includes a number of enterprise plugins, collaboration features, reporting, enhanced LDAP, enterprise support and services.


Reporting is a feature of Grafana Enterprise Stack.  According to the Grafana Enterprise website, two distinct capabilities of reporting are available in the Enterprise Stack.

  • Automatically generate PDFs from any of your dashboards and have them emailed to interested parties on a schedule
  • Generate PDFs from any of your dashboards and save them to file

Reporting in Grafana Enterprise Stack includes the following features:

  • Create and update PDF reports
  • You can customize reports with 4 different layouts:
    • Simple – Portrait: Portrait-style PDF with 3 panels per page
    • Simple – Landscape: Landscape-style PDF with 1 panel per page
    • Grid – Portrait: All the dashboard panels are laid out with a similar layout on a single portrait-style PDF page
    • Grid – Landscape: All the dashboard panels are laid out with a similar layout on a single landscape-style PDF page
  • Schedule reports to be emailed out on an hourly, daily, or monthly basis. You can also choose to save the reports to file and not email them.
  • The time range can be customized for the reports
  • Reports can be branded with a company logo, email footer, footer text, and URL
  • You can use the API to generate or pause reports (see the sketch after this list)
  • Scheduling of reports is limited to administrators
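As a hedged sketch of the API capability (the endpoint path and payload follow Grafana Enterprise's reporting API, but they vary across versions, so treat the URL, report ID, and token below as placeholder assumptions):

# Send a previously configured report immediately via the reporting API
curl -X POST "https://your-grafana.example.com/api/reports/email" \
  -H "Authorization: Bearer $GRAFANA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"id": "1"}'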

Let’s look at the pros and cons of the reporting capability in Grafana Enterprise:

The Advantages of Reporting in Grafana Enterprise Stack

  • Included with Grafana Enterprise. It is easy to install and set up
  • It serves the purpose of sending out dashboards
  • It provides basic customization, scheduling, branding capabilities
  • It keeps pace with the latest versions of Grafana.

The Disadvantages of Reporting in Grafana Enterprise Stack

While Grafana Enterprise Stack offers some significant benefits, it has a few drawbacks that are outlined below:

  • The report layouts are limited to 4 different types. You can't create a report with your own layout for various panels.
  • It doesn't offer templates for creating your own branded reports. You can have only one type of report with a logo, footer, etc.
  • Notification is limited to the email (SMTP) channel
  • It does not offer Excel/CSV reporting
  • Grafana variables are supported in v7.5 and above only. They allow you to override the dashboard variables with custom values.
  • Burst reporting is not supported. You can't generate multiple personalized reports for several recipients from a single dashboard in one report definition.
  • Scheduling capability is basic. You can't schedule reports for workdays only, yearly reports, etc.
  • Error handling is missing. If a report fails to generate for some reason, there is no way to inform someone to look into the root cause.
  • Last, but not least, you need to purchase a Grafana Enterprise Stack license to get reporting, with a minimum commitment of $3,500 per month.

Summary of Grafana Enterprise Stack Reporting

Grafana Enterprise Stack Reporting is a good option for your Grafana reporting needs if the following criteria apply to you:

  • You need the other features in Grafana Enterprise Stack, so you are comfortable spending $3,500 or more per month.
  • You just need a simple reporting option to send out the dashboards as-is to a limited number of internal users. The users do not require any customization of reports.
  • You do not need any Excel/CSV reporting
  • You do not have a need for any of the features mentioned in the previous section. 

Skedler Reports for Grafana

Skedler Reports is an enterprise reporting automation tool for Grafana and Elasticsearch-Kibana. It was originally developed to provide a reporting option for the Elastic Stack. When customers started asking for Grafana support, the Skedler team added Grafana Reporting to its offering. It was the first reporting tool developed for the Elastic Stack and Grafana and is widely used by Grafana users.


Skedler Reports offers the following capabilities:

  • Create and update PDF, Excel, CSV, and HTML reports from Grafana dashboard panels
  • Download, save, and schedule automatic distribution of reports via email or Slack channels
  • Customize PDF reports with flexible layouts: smart layout and dashboard layout
  • Personalize reports to recipients by using burst reporting
  • Pause/resume schedules
  • View the history of generated reports
  • Error handling to inform administrators when attention is needed
  • Compatible with the latest versions of Grafana. New versions are released within 2-4 weeks of a Grafana update, with continuous support since the early versions of ELK and Grafana
  • It's a no-code, UI-driven solution
  • API available

Now, let’s look at the pros and cons of the Skedler Reports for Grafana:

The Advantages of Skedler Reports for Grafana

  • Quick to install and configure
  • Works with older and latest versions of Grafana
  • Support for not just PDFs, but also Excel/CSV/HTML reports
  • Layouts can be customized with rich templates to drive the value of data.
  • Templates can be used to project branding
  • You can send personalized reports to different recipients from a single dashboard
  • Flexibility in scheduling and distribution
  • Multiple deployment options such as docker
  • Works with Grafana On-Prem and Grafana Cloud
  • Can be used by both administrators and end users
  • Robust error handling and notification mechanisms
  • Subscription starts at $99 per month for 100 reports when paid annually, with 250/500/1,000/custom/unlimited report options also available. This allows you to start small and pay as you grow.

The Disadvantages of Skedler Reports for Grafana

  • At this time, Skedler is not available as a plugin inside Grafana.  Skedler is deployed as a standalone application that can be used for reporting from one or more Grafana or Elasticsearch-Kibana instances.  
  • It requires a separate installation in addition to Grafana.
  • Cloud option is not available until Q3 2021.

Summary of Skedler Reports for Grafana

Skedler Reports for Grafana is a great option for your Grafana reporting needs if the following criteria apply to you:

  • You just need Grafana Reporting and do not need the features available in Grafana Enterprise Stack.  
  • You need a robust reporting solution with customization, personalization options and flexibility.
  • You need a cost-efficient solution that fits your budget.
  • You prefer a solution that is supported so that you can get help when needed.

Reporter

Reporter is a simple web service that generates PDF reports from Grafana dashboards. It is an open source solution and acts as a plugin to Grafana. It is written in Go and requires the installation of pdflatex.

Reporter offers the following capabilities:

  • Create and update PDF reports from within the Grafana dashboard
  • You can customize reports in a grid layout or two panels per page
  • You can customize layouts using LaTeX
  • The time range can be customized for the reports
  • You can use the API to generate reports (see the sketch after this list)
  • It does not include scheduling or emailing options
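As a hedged sketch of the API (based on the grafana-reporter project's documented endpoint; the host, port, dashboard UID, and API token below are placeholders, and the path may differ between versions):

# Generate a PDF of a dashboard via Reporter's HTTP endpoint
curl -o report.pdf \
  "http://localhost:8686/api/v5/report/<dashboard-uid>?apitoken=<grafana-api-token>"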

Let’s look at the advantages and disadvantages of the Reporter:

The Advantages of Reporter

  • It serves the purpose of generating PDF files from Grafana dashboards
  • It provides basic customization capabilities using LaTeX
  • It is free and open source.

The Disadvantages of Reporter

While the open source aspect of Reporter is attractive, it has several drawbacks.

  • Updates to Reporter are rare. The last significant update was in Nov 2019; since then, Grafana has released more than 7 updates.
  • Support is limited and the user community is small.
  • The report layouts are limited to 2 different types. You can't create a report with your own layout for various panels.
  • You need to use LaTeX to create templates for your own branded reports.
  • No notification channels
  • No scheduling options
  • It does not offer Excel/CSV reporting
  • Burst reporting is not supported. You can't generate multiple personalized reports for several recipients from a single dashboard in one report definition.

Summary of Reporter

Reporter is your option for Grafana reporting if the following criteria apply to you:

  • You have absolutely no budget for reporting and can only allocate your time.
  • You just need a simple PDF generation option in Grafana
  • You do not need any report scheduling or emailing capability
  • You do not need any Excel/CSV reporting
  • You do not need any of the missing features mentioned in the previous section.

Need an Awesome Grafana Reporting Solution?


We think we have built an awesome solution in Skedler Reports for your Grafana Reporting need. And, we would like to get your feedback on it!

Why not dig deeper into Skedler Reports so that you can hammer away the data silos, effortlessly deliver reports to your stakeholders, and bask in the admiration you receive from your users for your awesomeness!

Check out Skedler Reports today!

If you are looking for a Grafana reporting solution, be sure to test drive Skedler.

Kibana Single Sign-On with OpenId Connect and Azure Active Directory

Introduction

Open Distro supports OpenID Connect, so you can seamlessly connect your Elasticsearch cluster with identity providers like Azure AD, Keycloak, Auth0, or Okta. To set up OpenID support, you just need to point Open Distro to the metadata endpoint of your provider, and all relevant configuration information is imported automatically. In this article, we will implement a complete OpenID Connect setup, including Open Distro for Kibana Single Sign-On.

What is OpenID Connect?

OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.

OpenID Connect allows clients of all types, including Web-based, mobile, and JavaScript clients, to request and receive information about authenticated sessions and end-users. The specification suite is extensible, allowing participants to use optional features such as encryption of identity data, the discovery of OpenID Providers, and session management, when it makes sense for them.
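Everything a client needs to know about a provider is published at a well-known discovery (metadata) endpoint. For example, you can inspect Azure AD's metadata with curl; <tenant-id> is a placeholder for your Azure AD tenant ID:

curl https://login.microsoftonline.com/<tenant-id>/v2.0/.well-known/openid-configuration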

Configuring OpenID Connect in Azure AD

Next, we will set up an OpenID Connect client application in Azure AD which we will later use for Open Distro for Elasticsearch Kibana Single Sign-On. In this post, we will just describe the basic steps.

Adding an OpenID Connect client application

Our first step is to register an application with the Microsoft identity platform that supports OpenID Connect. Please refer to the official documentation.

Log in to Azure AD, open the Authentication tab in App registrations, enter the redirect URL https://localhost:5601/auth/openid/login, and save it.


Besides the client ID, we also need the client secret for our Open Distro for Elasticsearch Kibana configuration. This is an extra layer of security: an application can only obtain an ID token from the IdP if it provides the client secret. In Azure AD, you can find it under the Certificates & secrets tab of the client settings.

Connecting Open Distro with Azure AD

To connect Open Distro with Azure AD, we need to set up a new authentication domain with type openid in config.yml. The most important information we need to provide is the metadata endpoint of the newly created OpenID Connect client. This endpoint provides all the configuration settings that Open Distro needs. The URL of this endpoint varies from IdP to IdP; in Azure AD, the format is:

https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx/v2.0/.well-known/openid-configuration

Since we want to connect Open Distro for Elasticsearch Kibana with Azure AD, we also add a second authentication domain which will use the internal user database. This is required for authenticating the internal Kibana server user. Our config.yml file now looks like this:

authc:
  basic_internal_auth_domain:
    http_enabled: true
    transport_enabled: true
    order: 0
    http_authenticator:
      type: "basic"
      challenge: false
    authentication_backend:
      type: "internal"
  openid_auth_domain:
    enabled: true
    order: 1
    http_authenticator:
      type: openid
      challenge: false
      config:
        subject_key: preferred_username
        roles_key: roles
        openid_connect_url: https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/v2.0/.well-known/openid-configuration
    authentication_backend:
      type: noop

Adding users and roles to Azure AD

While an IdP can be used as a federation service to pull in user information from different sources such as LDAP, in this example we use the built-in user management. We have two choices when mapping the Azure AD users to Open Distro roles: by username, or by the roles in Azure AD. While mapping users by name is a bit easier to set up, we will use the Azure AD roles here.

With the default configuration, two appRoles are created, skedler_role and guidanz_role. They can be viewed by choosing the App registrations menu item within the Azure Active Directory blade, selecting the Enterprise application in question, and clicking the Manifest button.

A manifest is a JSON object that looks similar to:

{
  "appId": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
  "appRoles": [
    {
      "allowedMemberTypes": [
        "User"
      ],
      "description": "Skedler with administrator access",
      "displayName": "skedler_role",
      "id": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
      "isEnabled": true,
      "value": "skedlerrole"
    },
    {
      "allowedMemberTypes": [
        "User"
      ],
      "description": "guidanz with readonly access",
      "displayName": "guidanz_role",
      "id": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
      "isEnabled": true,
      "value": "guidanzrole"
    }
  ],
  ... etc.
}

There are many different ways we might decide to map how users within AAD will be assigned roles within Elasticsearch, for example, using the tenantid claim to map users in different directories to different roles, using the domain part of the name claim, etc.

With the role OpenID Connect token attribute created earlier, however, the appRole to which an AAD user is assigned will be sent as the value of the role claim within the OpenID Connect token, allowing:

  • Arbitrary appRoles to be defined within the manifest
  • Users within the Enterprise application to be assigned to these roles
  • The role claim sent within the OpenID Connect token to be used to determine access within Elasticsearch

For the purposes of this post, let’s define a Superuser role within the appRoles:

{
  "appId": "<guid>",
  "appRoles": [
    {
      "allowedMemberTypes": [
        "User"
      ],
      "displayName": "Superuser",
      "id": "18d14569-c3bd-439b-9a66-3a2aee01d14d",
      "isEnabled": true,
      "description": "Superuser with administrator access",
      "value": "superuser"
    },
    ... other roles
  ],
  ... etc.
}

Save the changes to the manifest.
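For the token's role value to take effect in Elasticsearch, it must be mapped to an Open Distro role. As a minimal sketch (assuming the standard roles_mapping.yml of the Open Distro security plugin; mapping "superuser" to the built-in all_access role is just an example):

# roles_mapping.yml -- map the Azure AD appRole value to an Open Distro role
all_access:
  reserved: false
  backend_roles:
    - "superuser"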

Configuring OpenID Connect in Open Distro for Kibana

The last part is to configure OpenID Connect in Open Distro for Kibana. Configuring the Kibana plugin is straightforward: choose OpenID as the authentication type, and provide the Azure AD metadata URL, the client name, and the client secret. Please refer to the official documentation.

Activate OpenID Connect by adding the following to kibana.yml:

opendistro_security.auth.type: "openid"
opendistro_security.openid.connect_url: "https://login.microsoftonline.com/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx/v2.0/.well-known/openid-configuration"
opendistro_security.openid.client_id: "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx"
opendistro_security.openid.client_secret: "xxxxxxxxxxxxxxxxxxxxxxxxxxx"
opendistro_security.openid.base_redirect_url: "https://localhost:5601"

Done. We can now start Open Distro for Kibana and enjoy Single Sign-On with Azure AD! If we open Kibana, we are redirected to the Azure AD login page. After providing a username and password, Kibana opens, and we're logged in.

Summary

OpenID Connect is an industry standard for providing authentication information. Open Distro for Elasticsearch and its Open Distro for Kibana plugin support OpenID Connect out of the box, so you can use any OpenID-compliant identity provider to implement Single Sign-On in Kibana. These IdPs include Azure AD, Keycloak, Okta, Auth0, Connect2ID, and Salesforce.

Reference

If you wish to have an automated reporting application, we recommend downloading  Skedler Reports.

Installing and Configuring Skedler Reports as a Kibana Plugin with Elasticsearch and Kibana Using Docker Compose

Introduction

If you are using the ELK stack, you can now install Skedler as a Kibana plugin. The Skedler Reports plugin is available for Kibana versions 6.5.x through 7.6.x.

Let's take a look at the steps to install Skedler Reports as a Kibana plugin.

Prerequisites:

  1. A Linux machine
  2. Docker Installed
  3. Docker Compose Installed

Let’s get started!

Log in to your Linux machine, update the package repository, and install Docker and Docker Compose.
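For example, on a Debian/Ubuntu machine (a sketch; package names vary by distribution):

sudo apt-get update
sudo apt-get install -y docker.io docker-compose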

Setting Up Skedler Reports

Create a directory, say skedlerplugin:

ubuntu@guidanz:~$ mkdir skedlerplugin
ubuntu@guidanz:~$ cd skedlerplugin/
ubuntu@guidanz:~/skedlerplugin$ vim docker-compose.yml

Now, create a Docker Compose file for Skedler Reports. You also need to create a Skedler Reports configuration file, reporting.yml. The Docker Compose file for Skedler is below.

version: "2.4"
services:
#  Skedler Reports container
  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    command: /opt/skedler/bin/skedler
    depends_on:
      elasticsearch: { condition: service_healthy }
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:3000"]
    networks: ['stack']

volumes:
  reportdata:
    driver: local

networks: {stack: {}}

Next, create the Skedler Reports configuration file, reporting.yml:

ubuntu@guidanz:~/skedlerplugin$ vim reporting.yml

Download the reporting.yml file found here

Setting Up Elasticsearch

You also need to create an Elasticsearch configuration file, elasticsearch.yml. The Docker Compose service definition for Elasticsearch is below.

#Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.6.0"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

volumes:
  esdata:
    driver: local

networks: {stack: {}}

Create an Elasticsearch configuration file, elasticsearch.yml, and paste the config below.

cluster.name: guidanz-stack-cluster
node.name: node-1
network.host: 0.0.0.0
path.data: /usr/share/elasticsearch/data
http.port: 9200
xpack.monitoring.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
http.max_header_size: 16kb

Setting Up Skedler Reports as Kibana Plugin

Create a directory inside skedlerplugin, say kibanaconfig:

ubuntu@guidanz:~/skedlerplugin$ mkdir kibanaconfig
ubuntu@guidanz:~/skedlerplugin$ cd kibanaconfig/
ubuntu@guidanz:~/skedlerplugin/kibanaconfig$ vim Dockerfile

Now, create the Dockerfile for Kibana as below:

FROM docker.elastic.co/kibana/kibana:7.6.0
RUN ./bin/kibana-plugin install https://www.skedler.com/plugins/skedler-reports-plugin/4.10.0/skedler-reports-kibana-plugin-7.6.0-4.10.0.zip

Then, copy the URL of the Skedler Reports plugin matching your exact Kibana version from here.

You also need to add a Docker Compose service definition for Kibana, as below:

#Kibana container
  kibana:
    container_name: kibana
    hostname: kibana
    build:
      context: ./kibanaconfig
      dockerfile: Dockerfile
    image: kibanaconfig
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      - ./kibanaconfig/kibana.yml:/usr/share/kibana/config/kibana.yml
      - ./kibanaconfig/skedler_reports.yml:/usr/share/kibana/plugins/skedler/config/skedler_reports.yml
    ports: ['5601:5601']
    networks: ['stack']
    depends_on:
      elasticsearch: { condition: service_healthy }
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/"]
      retries: 6

Create a Kibana configuration file, kibana.yml, inside the kibanaconfig folder and paste the config below.

ubuntu@guidanz:~/skedlerplugin$ cd kibanaconfig/
ubuntu@guidanz:~/skedlerplugin/kibanaconfig$ vim kibana.yml

server.host: "0.0.0.0"
server.port: 5601
elasticsearch.hosts: ["http://elasticsearch:9200"]
server.name: "full-stack-example"
xpack.monitoring.enabled: true

Create the Skedler Reports plugin configuration file, skedler_reports.yml, inside the kibanaconfig folder and paste the config below.

ubuntu@guidanz:~/skedlerplugin$ cd kibanaconfig/
ubuntu@guidanz:~/skedlerplugin/kibanaconfig$ vim skedler_reports.yml

#/*********** Skedler Access URL *************************/
skedler_reports_url: "http://ip_address:3000"

#/*********************** Basic Authentication *********************/
# If Skedler Reports uses any username and password
#skedler_username: user
#skedler_password: password

Configure the Skedler Reports server URL in the skedler_reports_url variable; by default, it is set as shown above.

If the Skedler Reports server URL requires basic authentication, for example behind Nginx, uncomment and configure skedler_username and skedler_password with the basic authentication credentials. Then run docker-compose:

ubuntu@guidanz:~/skedlerplugin$ docker-compose up -d

Access Skedler Reports using the IP and port, and you will see the Skedler Reports UI:

| http://ip_address:3000

Access Elasticsearch using the IP and port, and you will see the Elasticsearch cluster response:

| http://ip_address:9200

Access Kibana using the IP and port, and you will see the Kibana UI:

| http://ip_address:5601
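You can also verify Elasticsearch from the command line using the same health endpoint that the Compose healthcheck uses (ip_address is a placeholder for your host):

curl -s -f "http://ip_address:9200/_cat/health"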

So now the composite docker-compose.yml file will look like below.

version: "2.4"
services:
#  Skedler Reports container
  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    command: /opt/skedler/bin/skedler
    depends_on:
      elasticsearch: { condition: service_healthy }
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:3000"]
    networks: ['stack']

#  Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.6.0"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

#  Kibana container
  kibana:
    container_name: kibana
    hostname: kibana
    build:
      context: ./kibanaconfig
      dockerfile: Dockerfile
    image: kibanaconfig
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      - ./kibanaconfig/kibana.yml:/usr/share/kibana/config/kibana.yml
      - ./kibanaconfig/skedler_reports.yml:/usr/share/kibana/plugins/skedler/config/skedler_reports.yml
    ports: ['5601:5601']
    networks: ['stack']
    depends_on:
      elasticsearch: { condition: service_healthy }
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/"]
      retries: 6

volumes:
  esdata:
    driver: local
  reportdata:
    driver: local

networks: {stack: {}}

You can simply bring the whole stack down and up:

ubuntu@guidanz:~/skedlerplugin$ docker-compose down
ubuntu@guidanz:~/skedlerplugin$ docker-compose up -d

Summary

Docker Compose is a useful tool for managing container stacks, letting you manage all related containers with a single command.

Excel Reports for Grafana is now available in Skedler v4.9

Skedler Reports already provides PDF, PNG, and HTML export options for the Grafana data source. These formats are built on a screen capture of the Grafana dashboard, which doesn't work for everyone: some users have difficulty accessing all the data, while others need a data table they can change or update at any point in time.

Skedler v4.9.0 now provides Excel reporting for Grafana. The data captured in the Grafana dashboard is converted into a data table, giving you the full detailed report in editable Excel format.

Export Your Grafana Excel/CSV Report in Minutes with Skedler. Fully featured free trial.

Monitoring Servers and Docker Containers using Elasticsearch with Grafana

Introduction

Infrastructure monitoring is the basis for application performance management. The underlying system's availability and health must be maximized continually. To achieve this, one has to monitor system metrics like CPU, memory, network, and disk. Response time lag, if any, must be addressed swiftly. Here we'll take a look at how to monitor servers (and even Docker containers running inside the server) using Grafana, Elasticsearch, Metricbeat, and Skedler Reports.

Core Components

Grafana – Analytics & monitoring solution for databases

Elasticsearch – Ingest and index logs

Metricbeat – Lightweight shipper for metrics

Skedler Reports – Automate actionable reports

Grafana – Analytics & monitoring solution for databases

Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture.

Elasticsearch – Ingest and index logs

Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected.

Metricbeat — Lightweight shipper for metrics

Collect metrics from your systems and services. From CPU to memory, Redis to NGINX, and much more, Metricbeat is a lightweight way to send system and service statistics.

Skedler Reports — Automate actionable reports

Skedler offers the most powerful, flexible and easy-to-use data monitoring solution that companies use to exceed customer SLAs, achieve compliance, and empower internal IT and business leaders.

Export your Grafana Dashboard to PDF Report in Minutes with Skedler. Fully featured free trial.

Prerequisites:

  1. A Linux machine
  2. Docker Installed
  3. Docker Compose Installed

ubuntu@guidanz:~$ mkdir monitoring
ubuntu@guidanz:~$ cd monitoring/
ubuntu@guidanz:~/monitoring$ vim docker-compose.yml

Now, create a Docker Compose file for Elasticsearch. You also need to create an Elasticsearch configuration file, elasticsearch.yml. The Docker Compose file for Elasticsearch is below.

Note: We will keep extending the same Compose file as we move ahead and install other components.

version: "2.1"
services:
#Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:latest"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

volumes:
  esdata:
    driver: local

networks: {stack: {}}

Create an Elasticsearch configuration file, elasticsearch.yml, and paste the config below.

cluster.name: guidanz-stack-cluster
node.name: node-1
network.host: 0.0.0.0
path.data: /usr/share/elasticsearch/data
http.port: 9200
xpack.monitoring.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
http.max_header_size: 16kb

Now run the docker-compose.

ubuntu@guidanz:~/monitoring$ docker-compose up -d

Access Elasticsearch using the IP and port, and you will see the Elasticsearch cluster response.

http://ip_address:9200

Now we will set up Metricbeat. It is one of the best components to use alongside Elasticsearch to capture metrics from the server where Elasticsearch is running. It captures hardware and kernel-related metrics such as system-level CPU usage, memory, file system, disk IO, and network IO statistics, as well as top-like statistics for every process running on your systems.

To install Metricbeat, simply append the docker-compose.yml file and add the metricbeat.yml and modules.d files as below.

  metricbeat:
    container_name: metricbeat
    hostname: metricbeat
    user: root #To read the docker socket
    image: docker.elastic.co/beats/metricbeat:latest
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      #Mount the metricbeat configuration so users can make edits.
      - ./metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
      #Mount the modules.d directory into the container. This allows users to potentially make changes to the modules and they will be dynamically loaded.
      - ./modules.d/:/usr/share/metricbeat/modules.d/
      #The sections below enable Metricbeat to monitor the Docker host rather than the Metricbeat container. These are used by the system module.
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      #Allows us to report on docker from the host's information.
      - /var/run/docker.sock:/var/run/docker.sock
      #We mount the host filesystem so we can report on disk usage with the system module.
      - /:/hostfs:ro
    command: metricbeat -e -system.hostfs=/hostfs -strict.perms=false
    networks: ['stack']
    restart: on-failure
#    environment:
#      - "MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}"
    depends_on:
      elasticsearch: { condition: service_healthy }

Create metricbeat.yml as below:

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.period: 5s
  reload.enabled: true

processors:
- add_docker_metadata: ~

monitoring.enabled: true
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["elasticsearch:9200"]

logging.to_files: false

setup:
  kibana.host: "kibana:5601"
  dashboards.enabled: true

The Compose file adds two volume mappings to the container: one for the Metricbeat configuration, and one (modules.d) that mounts the modules.d directory into the container so users can change the modules and have them dynamically loaded. Create the modules.d directory:

ubuntu@guidanz:~/monitoring$ mkdir modules.d

Create system.yml as below inside the modules.d folder:

- module: system
  metricsets:
    - core
    - cpu
    - load
    - diskio
    - filesystem
    - fsstat
    - memory
    - network
    - process
    - socket
  enabled: true
  period: 5s
  processes: ['.*']
  cpu_ticks: true
  process.cgroups.enabled: true
  process.include_top_n:
    enabled: true
    by_cpu: 20
    by_memory: 20

So now the composite docker-compose.yml file will look like below.

version: "2.1"
services:
#Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:latest"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

#Metricbeat container
  metricbeat:
    container_name: metricbeat
    hostname: metricbeat
    user: root #To read the docker socket
    image: docker.elastic.co/beats/metricbeat:latest
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      #Mount the metricbeat configuration so users can make edits.
      - ./metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
      #Mount the modules.d directory into the container. This allows users to potentially make changes to the modules and they will be dynamically loaded.
      - ./modules.d/:/usr/share/metricbeat/modules.d/
      #The sections below enable Metricbeat to monitor the Docker host rather than the Metricbeat container. These are used by the system module.
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      #Allows us to report on docker from the host's information.
      - /var/run/docker.sock:/var/run/docker.sock
      #We mount the host filesystem so we can report on disk usage with the system module.
      - /:/hostfs:ro
    command: metricbeat -e -system.hostfs=/hostfs -strict.perms=false
    networks: ['stack']
    restart: on-failure
#    environment:
#      - "MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}"
    depends_on:
      elasticsearch: { condition: service_healthy }

volumes:
  esdata:
    driver: local

networks: {stack: {}}

You can simply bring the whole stack down and up:

ubuntu@guidanz:~/monitoring$ docker-compose down
ubuntu@guidanz:~/monitoring$ docker-compose up -d

Now check the indices in Elasticsearch; you should see Metricbeat data arriving as a metricbeat-* index (see the check below).
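A quick way to confirm, with ip_address as a placeholder for your host:

curl "http://ip_address:9200/_cat/indices?v"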

Next, we will set up Grafana, using Elasticsearch as a data source, so we can build a better dashboard for metrics visualization.

Append the following to the above Docker Compose file and restart:

  grafana:
    image: grafana/grafana
    user: "1000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=secure_pass
    volumes:
      - ./grafana_db:/var/lib/grafana
    depends_on:
      - elasticsearch
    ports:
      - '3000:3000'

Access the Grafana UI on port 3000; the default user is admin, and the password is the one you set in the Compose file. Then add Elasticsearch as a data source pointing at the Metricbeat indices (a provisioning sketch is below).
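As a hedged alternative to adding the data source by hand, Grafana supports data source provisioning; a minimal sketch follows (the file path, index pattern, and esVersion value are assumptions to adapt to your setup):

# ./provisioning/datasources/elasticsearch.yml (hypothetical path; mount into /etc/grafana/provisioning/datasources)
apiVersion: 1
datasources:
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://elasticsearch:9200
    database: "metricbeat-*"      # index pattern to query
    jsonData:
      timeField: "@timestamp"     # Metricbeat's default time field
      esVersion: 70               # Elasticsearch major version 7.x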

Finally, we will set up Skedler Reports, using Grafana as a data source. Skedler offers a simple and easy-to-add reporting and alerting solution for the Elastic Stack and Grafana. Please review the documentation to install Skedler.

To set up Skedler Reports, append the Compose file with the code below.

  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    ports:
      - 3001:3001

Generate Report from Grafana in Minutes with Skedler. Fully featured free trial.

Monitoring Servers and Docker Containers using Prometheus with Grafana

Introduction

Infrastructure monitoring is the basis for application performance management. The underlying system's availability and health must be maximized continually. To achieve this, one has to monitor system metrics like CPU, memory, network, and disk. Response time lag, if any, must be addressed swiftly. Here we'll take a look at how to monitor servers (and even Docker containers running inside the server) using Grafana, Prometheus, Node Exporter, cAdvisor, and Skedler Reports.

Core Components

Grafana – Database analytics & monitoring solution

Prometheus – Event monitoring and alerting

Node Exporter – Monitoring Linux host metrics

WMI Exporter – Monitoring Windows host metrics

cAdvisor – Monitoring metrics for the running containers

Skedler Reports – Automating actionable reports

Grafana - Database Analytics & monitoring solution 

Grafana equips users to query, visualize, and monitor metrics, no matter where the underlying data is stored. With Grafana, one can also set alerts for metrics that require attention, apart from creating, exploring, and sharing dashboards with their team and fostering a data-driven culture.

Prometheus - Event monitoring and alerting

Prometheus is an open-source system monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open source project and maintained independently of any company.

Node Exporter - Monitoring Linux host metrics

Node Exporter is a Prometheus exporter for hardware and OS metrics with pluggable metric collectors. It allows measuring various machine resources such as memory, disk, and CPU utilization.

WMI Exporter - Monitoring Windows host metrics

The WMI Exporter is a Prometheus exporter for Windows machines that uses WMI (Windows Management Instrumentation).

cAdvisor - Monitoring metrics for the running containers

cAdvisor stands for Container Advisor; it is used to aggregate and process all the metrics for the running containers.

Skedler Reports - Automating actionable reports

Skedler offers the most powerful, flexible and easy-to-use data monitoring solution that companies use to exceed customer SLAs, achieve compliance, and empower internal IT and business leaders.

Export your Grafana Dashboard to PDF Report in Minutes with Skedler. Fully featured free trial.

Prerequisites:

  1. A Linux machine
  2. Docker Installed
  3. Docker Compose Installed

So let's get started.

Log in to your Linux machine and update the package repository, then install Docker and Docker Compose if they are not already present.
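On Ubuntu (matching the prompts used below), for example:

ubuntu@guidanz:~$ sudo apt-get update

ubuntu@guidanz:~$ sudo apt-get install -y docker.io docker-compose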

Create a directory, say monitoring:

ubuntu@guidanz:~$ mkdir monitoring

ubuntu@guidanz:~$ cd monitoring/

ubuntu@guidanz:~/monitoring$ vim docker-compose.yml

Now, create a Docker Compose file for Prometheus as below. You also need to create the Prometheus configuration file, prometheus.yml, which follows.

Note: We will keep extending the same compose file as we move forward to install the other components.

version: '3'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus_db:/var/lib/prometheus
      - ./prometheus_db:/prometheus
      - ./prometheus_db:/etc/prometheus
      - ./alert.rules:/etc/prometheus/alert.rules
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--web.route-prefix=/'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'
    restart: unless-stopped
    ports:
      - '9090:9090'
    networks:
      - monitor-net

networks:
  monitor-net:

Create the Prometheus configuration file and paste in the config below.

global:
  scrape_interval: 5s
  external_labels:
    monitor: 'guidanz-monitor'

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['monitoring.guidanz.com:9090'] # hostname or IP address of the Prometheus host

The compose file mounts two kinds of volumes into the container: the Prometheus configuration, and the local prometheus_db directory that stores the Prometheus database. Now create that directory and run docker-compose.

ubuntu@guidanz:~/monitoring$ mkdir prometheus_db

ubuntu@guidanz:~/monitoring$ docker-compose up -d

Access Prometheus using the server's IP and port 9090, and you will see the Prometheus UI: http://ip_address:9090
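Prometheus also exposes simple health and readiness endpoints, which are handy for a quick check from the shell (ip_address is your server's IP, as above):

ubuntu@guidanz:~/monitoring$ curl http://ip_address:9090/-/healthy

ubuntu@guidanz:~/monitoring$ curl http://ip_address:9090/-/ready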

Now we will set up Node Exporter, one of the most widely used companions to Prometheus. It captures metrics from the server where Prometheus is running: all hardware- and kernel-related metrics such as CPU, memory, disk, and disk read/write.

To install Node Exporter, simply append the docker-compose.yml and prometheus.yml files as below.

  node-exporter:
    image: prom/node-exporter
    ports:
      - '9100:9100'

Append prometheus.yml as below:

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['monitoring.guidanz.com:9100']

So now the combined docker-compose file will look like below:

version: '3'

services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus_db:/var/lib/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'

  node-exporter:
    image: prom/node-exporter
    ports:
      - '9100:9100'

And prometheus.yml will look like below,

global:
  scrape_interval: 5s
  external_labels:
    monitor: 'guidanz-monitor'

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['monitoring.guidanz.com:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['monitoring.guidanz.com:9100']

Now create the Node Exporter container and restart the Prometheus container using the commands below:

ubuntu@guidanz:~/monitoring$ docker-compose start node-exporter

ubuntu@guidanz:~/monitoring$ docker-compose restart prometheus

Or, one can simply do compose down and up:

ubuntu@guidanz:~/monitoring$ docker-compose down

ubuntu@guidanz:~/monitoring$ docker-compose up -d

Take a look at the targets in Prometheus. You will notice node-exporter as a target as well.
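Because Prometheus was started with the --web.enable-lifecycle flag in the first compose file, you can also reload prometheus.yml after edits without restarting the container, via the lifecycle endpoint:

ubuntu@guidanz:~/monitoring$ curl -X POST http://ip_address:9090/-/reload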

Now, set up cAdvisor. For this, append the docker compose file with the code below.

  cadvisor:
    image: google/cadvisor:latest
    ports:
      - '8080:8080'
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro

Also append prometheus.yml with the small snippet of YAML below; it adds the cAdvisor job to the Prometheus configuration.

  - job_name: 'cAdvisor'
    static_configs:
      - targets: ['monitoring.guidanz.com:8080']

Access cAdvisor from the URL http://IP_Address:8080/docker/
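Once the cAdvisor target is up, its container metrics become queryable from the Prometheus UI (and later from Grafana). For example, per-container CPU usage over the last minute (container_cpu_usage_seconds_total is a standard cAdvisor metric; grouping by the name label is illustrative):

sum(rate(container_cpu_usage_seconds_total{image!=""}[1m])) by (name)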

Next, we will set up Grafana, using Prometheus as the data source; Grafana gives us much better dashboards for metrics visualization.

Append the following to the docker compose file above and restart.

  grafana:
    image: grafana/grafana
    user: "1000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=secure_pass
    volumes:
      - ./grafana_db:/var/lib/grafana
    depends_on:
      - prometheus
    ports:
      - '3000:3000'

Access the Grafana UI on port 3000; the default user is admin, and the password is the one you set in the compose file.
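As with the Elasticsearch stack earlier, the data source can be provisioned from a file instead of being added in the UI. A minimal sketch, assuming it is saved under ./provisioning/datasources/ and that directory is mounted at /etc/grafana/provisioning/datasources in the grafana service:

apiVersion: 1
datasources:
  # Prometheus data source pointing at the compose service by name
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true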

Finally, we will set up Skedler Reports, which will use Grafana as its data source. Skedler offers a simple, easy-to-add reporting and alerting solution for Elastic Stack and Grafana. Please review the documentation to install Skedler Reports.

To set up Skedler Reports, append the docker compose file with the code below.

  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    ports:
      - 3001:3001

Schedule and Automate Your Grafana Reports Free with Skedler. Fully featured free trial.

An easy way to add alerting to Elasticsearch on Kubernetes with Skedler Alerts

There is a simple and effective way to add alerting for your Elasticsearch applications that are deployed to Kubernetes. Skedler Alerts offers no-code alerting for Elasticsearch and reduces the time, effort, and cost of monitoring your machine data for anomalies.   In this article, you are going to learn how to deploy Skedler Alerts for Elasticsearch applications to Kubernetes with ease.

What is Kubernetes?

For those that haven’t ventured into container orchestration, you’re probably wondering what Kubernetes is. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Kubernetes (“k8s” for short), was a project originally started at, and designed by Google, and is heavily influenced by Google’s large scale cluster management system, Borg. More simply, k8s gives you a platform for managing and running your applications at scale across multiple physical (or virtual) machines.

Kubernetes offers the following benefits:

  • Workload Scalability
  • High Availability
  • Designed for deployment

Deploying Skedler Alerts to Kubernetes

If you haven’t already downloaded Skedler Alerts, please download it from www.skedler.com.  Review the documentation to get started.   

Creating a K8s ConfigMap

Kubernetes ConfigMaps allow a containerized application to become portable without worrying about configuration. Users and system components can store configuration data in a ConfigMap. For Skedler Alerts, a ConfigMap can be used to store settings such as datastore settings, port number, server information, file locations, log directory, and so on.

If the Skedler Alerts defaults are not enough, you may want to customize alertconfig.yml through a ConfigMap. Please refer to the Alertconfig.yml Configuration for all available attributes.

1. Create a file called alerts-configmap.yaml in your project directory and paste the following:

alerts-configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: alerts-config
  labels:
    app: alerts
data:
  alertconfig.yml: |
    ---
    #port: 3001
    #host: "0.0.0.0"
    #*******INDEX SETTINGS*********************
    elasticsearch_url: "http://localhost:9200"
    #alert_display_url: "http://localhost:3001"
    #******DATASTORE SETTINGS*****************
    alert_index: ".alert"
    alert_history: "alert_history"
    #alert_history_timestamp: false
    alerts_path: "/opt/alerts"
    #workerCount: 1
    log_dir: "/data/log"
    ui_files_location: "/data/uifiles"
    #*****SECURITY SETTINGS******************
    #To enable Elasticsearch security users in Skedler Alerts, set this variable to yes.
    #ESsecurity_user_login: no
    #Type of security plugin: x-pack / searchguard / readonlyrest / opendistro
    #security_plugin: x-pack
    #User impersonation for x-pack / searchguard / opendistro
    #If configured yes, user impersonation will be enabled
    #user_impersonation: no
    #If Elasticsearch uses x-pack / searchguard / Read Only Rest / any basic auth, add the username and password here for Alerts
    #alert_elasticsearch_username: user
    #alert_elasticsearch_password: pass
    #If Elasticsearch is behind Nginx, configure the Nginx username and password for Elasticsearch here
    #alert_nginx_elasticsearch_username: user
    #alert_nginx_elasticsearch_password: pass

2. To deploy your ConfigMap, execute the following command:

kubectl create -f alerts-configmap.yaml
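You can verify that the ConfigMap was created and inspect its contents with standard kubectl commands:

kubectl get configmaps

kubectl describe configmap alerts-config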

Creating Deployment and Service

To deploy Skedler Alerts, we're going to use a Deployment. A Deployment wraps the functionality of Pods and ReplicaSets, allowing you to update your application declaratively. We also need a way to expose the application to traffic from outside the cluster, so we will add a Service in the same alerts-deployment.yaml file, opening up a NodePort directly to our application on port 30001.

1. Create a file called alerts-deployment.yaml in your project directory and paste the following:

alerts-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: skedler-alerts
  labels:
    app: alerts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alerts
  template:
    metadata:
      labels:
        app: alerts
    spec:
      containers:
      - name: alerts
        image: skedler/alerts:latest
        imagePullPolicy: Always
        command: ["/opt/alert/bin/alert"]
        ports:
        - containerPort: 3001
        volumeMounts:
        - name: skedler-alerts-storage
          mountPath: /data
        - name: alerts-config
          mountPath: /opt/alert/config/alertconfig.yml
          subPath: alertconfig.yml
      volumes:
      - name: skedler-alerts-storage
        emptyDir: {} # ephemeral storage for /data
      - name: alerts-config
        configMap:
          name: alerts-config
---
apiVersion: v1
kind: Service
metadata:
  name: alerts
  labels:
    app: alerts
spec:
  selector:
    app: alerts
  ports:
  - port: 3001
    protocol: TCP
    nodePort: 30001
  type: LoadBalancer

2. For deployment, execute the following command:

kubectl create -f alerts-deployment.yaml

3. To get your deployment with kubectl, execute the following command:

kubectl get deployments

4. We can get the service details by executing the following command:

kubectl get services

Now, Skedler Alerts will be deployed on port 30001.


Accessing Skedler Alerts

Skedler Alerts can be accessed from the following URL: http://<hostIP>:30001
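If you need to look up the host IP, kubectl can list your nodes with their addresses (use the INTERNAL-IP or EXTERNAL-IP column, depending on where you are browsing from):

kubectl get nodes -o wide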

To learn more about creating Skedler Alerts, visit the Skedler documentation site.

Summary

This blog was a very quick overview of how to get Skedler Alerts for an Elasticsearch application up and running on Kubernetes with the least amount of configuration possible. Kubernetes is an incredibly powerful platform that has many more features than we used today. We hope that this article gave you a head start and saved you time.

An easy way to add reporting to Elasticsearch Kibana 7.x and Grafana 6.x on Kubernetes with Skedler Reports

There is a simple and effective way to add reporting for your Elasticsearch Kibana 7.x (including Open Distro for Elasticsearch) or Grafana 6.x applications that are deployed to Kubernetes. In this part of the article, you are going to learn how to deploy Skedler Reports for Elasticsearch Kibana and Grafana applications to Kubernetes with ease.

What is Kubernetes?

For those that haven’t ventured into container orchestration, you’re probably wondering what Kubernetes is. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Kubernetes (“k8s” for short), was a project originally started at, and designed by Google, and is heavily influenced by Google’s large scale cluster management system, Borg. More simply, k8s gives you a platform for managing and running your applications at scale across multiple physical (or virtual) machines.

Kubernetes offers the following benefits:

  • Workload Scalability
  • High Availability
  • Designed for deployment

Deploying Skedler Reports to Kubernetes

If you haven’t already downloaded Skedler Reports, please download it from www.skedler.com.  Review the documentation to get started.   

Creating a K8s ConfigMap

Kubernetes ConfigMaps allow a containerized application to become portable without worrying about configuration. Users and system components can store configuration data in a ConfigMap. For Skedler Reports, a ConfigMap can be used to store settings such as datastore settings, port number, server information, file locations, log directory, and so on.

If the Skedler Reports defaults are not enough, you may want to customize reporting.yml through a ConfigMap. Please refer to the Reporting.yml and ReportEngineOptions Configuration for all available attributes.

1. Create a file called skedler-configmap.yaml in your project directory and paste the following:

skedler-configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: skedler-config
  labels:
    app: skedler
data:
  reporting.yml: |
    ---
    #**************BASIC SETTINGS************
    #port: 3000
    #host: "0.0.0.0"
    #*******SKEDLER SECURITY SETTINGS*******
    #skedler_anonymous_access: true
    #The Skedler admin username is skedlerAdmin
    #Allows you to change the Skedler admin password. By default the admin password is set to skedlerAdmin
    #skedler_password: skedlerAdmin
    #*******INDEX SETTINGS***********
    #skedler_index: ".skedler"
    ui_files_location: "/var/lib/skedler/uifiles"
    log_dir: "/var/lib/skedler/log"
    #****************DATASTORE SETTINGS*************
    ####### ELASTICSEARCH DATASTORE SETTINGS ########
    #The Elasticsearch instance to use for all your queries.
    #elasticsearch_url: "http://localhost:9200"
    #skedler_elasticsearch_username: user
    #skedler_elasticsearch_password: pass
    ######## DATABASE DATASTORE SETTINGS ############
    #You can configure the database connection by specifying type, host, name, user and password
    #as separate properties, or as one string using the url property.
    #Either "mysql" or "sqlite", it's your choice
    #database_type: "mysql"
    #For mysql database configuration
    #database_hostname: 127.0.0.1
    #database_port: 3306
    #database_name: skedler
    #database_history_name: skedlerHistory
    #database_username: user
    #database_password: pass
    #For sqlite database configuration only, path relative to the data_path setting
    #database_path: "/var/lib/skedler/skedler.db"
    #database_history_path: "/var/lib/skedler/skedlerHistory.db"

2. To deploy your ConfigMap, execute the following command:

kubectl create -f skedler-configmap.yaml


Creating Deployment and Service

To deploy Skedler Reports, we're going to use a Deployment. A Deployment wraps the functionality of Pods and ReplicaSets, allowing you to update your application declaratively. We also need a way to expose the application to traffic from outside the cluster, so we will add a Service in the same skedler-deployment.yaml file, opening up a NodePort directly to our application on port 30000.

1. Create a file called skedler-deployment.yaml in your project directory and paste the following:

skedler-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: skedler-reports
  labels:
    app: skedler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skedler
  template:
    metadata:
      labels:
        app: skedler
    spec:
      containers:
      - name: skedler
        image: skedler/reports:latest
        imagePullPolicy: Always
        command: ["/opt/skedler/bin/skedler"]
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: skedler-reports-storage
          mountPath: /var/lib/skedler
        - name: skedler-config
          mountPath: /opt/skedler/config/reporting.yml
          subPath: reporting.yml
      volumes:
      - name: skedler-reports-storage
        emptyDir: {} # ephemeral storage for /var/lib/skedler
      - name: skedler-config
        configMap:
          name: skedler-config
---
apiVersion: v1
kind: Service
metadata:
  name: skedler
  labels:
    app: skedler
spec:
  selector:
    app: skedler
  ports:
  - port: 3000
    protocol: TCP
    nodePort: 30000
  type: LoadBalancer

2. For deployment, execute the following command:

kubectl create -f skedler-deployment.yaml

3. To get your deployment with kubectl, execute the following command:

kubectl get deployments

4. We can get the service details by executing the following command:

kubectl get services

Now, Skedler will be deployed on port 30000.

Accessing Skedler

Skedler Reports can be accessed from the following URL: http://<hostIP>:30000

To learn more about creating reports, visit the Skedler documentation site.

Summary

This blog was a very quick overview of how to get Skedler Reports for an Elasticsearch Kibana 7.x or Grafana 6.x application up and running on Kubernetes with the least amount of configuration possible. Kubernetes is an incredibly powerful platform that has many more features than we used today. We hope that this article gave you a head start and saved you time.

Simplifying Skedler Reports with Elasticsearch and Kibana Environment using Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications (here: Skedler Reports, Elasticsearch, and Kibana). With Compose, you use a YAML file to configure your application's services; then, with a single command, you create and start all the services from your configuration.

In this section, I will describe how to create a containerized installation for Skedler Reports, Elasticsearch and Kibana.

Benefits:

  • You describe the multi-container setup in a clear way and bring up the containers with a single command.
  • You can define the priority and dependency of a container on other containers.

Step-by-Step Instructions:

Step 1: Define services in a Compose file:

Create a file called docker-compose.yml in your project directory and paste the following:

docker-compose.yml:

version: "2.4"

services:

#  Skedler Reports container
  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    command: /opt/skedler/bin/skedler
    depends_on:
      elasticsearch: { condition: service_healthy }
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:3000"]
    networks: ['stack']

#  Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.1.1"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./config/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

#  Kibana container
  kibana:
    container_name: kibana
    hostname: kibana
    image: "docker.elastic.co/kibana/kibana:7.1.1"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      - ./config/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports: ['5601:5601']
    networks: ['stack']
    depends_on:
      elasticsearch: { condition: service_healthy }
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/"]
      retries: 6

volumes:
  esdata:
    driver: local
  reportdata:
    driver: local

networks: {stack: {}}

This Compose file defines three services: Skedler Reports, Elasticsearch, and Kibana.

Step 2: Basic configurations using reporting.yml and kibana.yml

Create a file called reporting.yml in your project directory. You can get the reporting.yml file here.

Note: For more configuration options, kindly refer to the article reporting.yml and ReportEngineOptions Configuration.
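If you just need something to start from, here is a minimal reporting.yml sketch. It uses only attributes that appear in the ConfigMap section earlier in this guide and points elasticsearch_url at the elasticsearch service defined in the compose file; treat it as a starting point rather than a complete configuration:

---
host: "0.0.0.0"
elasticsearch_url: "http://elasticsearch:9200"
ui_files_location: "/var/lib/skedler/uifiles"
log_dir: "/var/lib/skedler/log"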

Create a file called kibana.yml in your project directory.

Note: For more configuration options, kindly refer to the article kibana.yml.
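A minimal kibana.yml sketch, assuming Kibana should listen on all interfaces and reach Elasticsearch by its compose service name (standard Kibana 7.x settings):

server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]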

Step 3: Build and run your app with docker-compose

From your project directory, start up your application by running:

sudo docker-compose up -d

Compose pulls the Skedler Reports, Elasticsearch, and Kibana images and starts the services you defined.

Skedler Reports is available at http://<hostIP>:3000, Elasticsearch at http://<hostIP>:9200, and Kibana at http://<hostIP>:5601.
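Since each service defines a healthcheck, you can confirm everything is up before browsing (service names are the ones from the compose file above):

sudo docker-compose ps

sudo docker-compose logs reports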

Summary

Docker Compose is a useful tool for managing container stacks. It lets you manage all related containers with one single command.
