How to defeat downtime with Observability?

Introduction

In today’s world, an essential ingredient for an organization’s success is the ability to reduce downtime. If not handled properly, downtime interrupts the company’s growth, impacts customer satisfaction, and can result in significant monetary losses. Resolution is also difficult when the correct data is unavailable, which prolongs the downtime. This affects the SLA and decreases the product’s reliability in the market.

The best way to deal with downtime is to avoid its occurrence. Data teams should have access to tools and measures to prevent such an incident by detecting it even before it happens. This kind of transparency can be achieved using Observability. By implementing Observability, teams can manage the health of their data pipeline and dramatically reduce downtime and resolution time.

What is Observability? 

Introduction to Observability

Observability is the ability to measure the internal status of a system by examining its outputs. A system is highly observable if it does not require additional coding and services to assess and analyze what’s going on. During downtime, it is of utmost importance to determine which part of the system is faulty at the earliest possible time. 

Three Pillars of Observability

The three pillars that must be considered simultaneously to obtain Observability are logs, metrics, and traces. When you combine these three “pillars,” a remarkable ability to understand the whole state of your system emerges. Let us learn more about these pillars:

Logs are the archival records of your system’s functions and errors. They are always time-stamped and come as binary or plain text, or in a structured format that combines text and metadata. Logs allow you to look through and see what went wrong, and where, within a system.
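To make the idea concrete, here is a minimal sketch (not from any specific tool) of structured logging in Python: each record is emitted as one JSON line, so the timestamp, level, and metadata stay machine-parseable. The `service` field and logger names are illustrative.

```python
import json
import logging

# Emit each log record as a single JSON line (structured logging),
# keeping the timestamp, level, and metadata machine-parseable.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
            # Custom fields passed via `extra=`; "unknown" if absent.
            "service": getattr(record, "service", "unknown"),
        })

logger = logging.getLogger("checkout")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)

# Downstream log pipelines can now filter and aggregate on the fields.
logger.error("payment gateway timeout", extra={"service": "checkout"})
```

A log line like this can be searched by `service` or `level` instead of being grepped as free text.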

Metrics are a wide range of values monitored over time. They are often vital performance indicators such as CPU capacity, memory usage, latency, or anything else that provides insight into the health and performance of your system. Changes in these metrics help teams better understand the system’s end performance. Metrics offer modern businesses a measurable means to improve the user experience.
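As a hedged illustration of how raw metric samples become dashboard values: latency samples are commonly summarized into percentiles (p50, p95) for alerting. This sketch uses the nearest-rank method; the sample values are made up.

```python
# Summarize raw latency samples (milliseconds) into the percentile
# metrics (p50/p95) that dashboards and SLO alerts are built on.
def percentile(samples, pct):
    ordered = sorted(samples)
    # Nearest-rank method: pick the value at or above `pct` percent.
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

# Hypothetical request latencies; note the two slow outliers.
latencies = [12, 15, 11, 250, 14, 13, 16, 12, 900, 14]
print({"p50": percentile(latencies, 50), "p95": percentile(latencies, 95)})
```

The median looks healthy while the p95 exposes the outliers, which is why percentile metrics reveal degradation that averages hide.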

Traces are a method to follow a user’s journey through your application. Trace documents the user’s interaction and requests within the system, starting from the user interface to the backend systems and then back to the user once their request is processed. 

A system’s overall performance can be maintained and enhanced by implementing the three pillars of Observability, i.e., logs, metrics, and traces. As distributed systems become more complex, these three pillars give IT, DevSecOps, and SRE teams the ability to access real-time insight into the system’s health. Areas of degrading health can be prioritized for troubleshooting before impacting the system’s performance. 

What are the benefits of Observability?

Observability tools are not only a requirement but a necessity in this fast-paced data-driven world. Key benefits of Observability are:

  1. Detect anomalies before they impact the business, preventing monetary losses.
  2. Speed up resolution time and meet customer SLAs.
  3. Reduce repeat incidents.
  4. Reduce escalations.
  5. Improve collaboration between data teams (engineers, analysts, etc.).
  6. Increase trust in and reliability of data.
  7. Make decisions more quickly.

Observability Use-cases

Observability is essential because it gives you greater control over complex systems. Simple systems have fewer moving parts, making them easier to manage. Monitoring CPU, memory, databases, and networking conditions is usually enough to understand these systems and apply the appropriate fix.

Distributed systems have a far higher number of interconnected parts, so the number and types of failure are also higher. Additionally, distributed systems are constantly updated, and every change can create a new kind of failure. Understanding a current problem is an enormous challenge in a distributed environment, mainly because it produces more “unknown unknowns” than simpler systems. Because monitoring requires “known unknowns,” it often fails to address problems in these complex environments adequately.

Observability is better suited for the unpredictability of distributed systems, mainly because it allows you to ask questions about your system’s behavior as issues arise. “Why is X broken?” or “What is causing latency right now?” are a few questions that Observability can answer.

SREs often waste valuable time combing through heaps of data to identify what matters and requires action. Rather than slowing down all operations with tedious, manual processes, Observability automates the identification of critical data so SREs can take action quickly, dramatically improving productivity and efficiency.

Best practices to implement Observability

  • Monitor what matters most to your business to not overload your teams with alerts.
  • Collect and explore all of your telemetry data in a unified platform.
  • Determine the root cause of your application’s immediate, long-term, or gradual degradations.
  • Validate service delivery expectations and find hot spots that need focus.
  • Optimize the feedback loop between issue detection and resolution.

Observability tools

Features to consider while choosing the right tool

Observability tools have become critical to meeting operational challenges at scale. To get the best out of Observability implementation, you will need a reliable tool that enables your teams to minimize toil and maximize automation. Some of the key features to consider while choosing an application are:

  • Core features offered
  • Initial set-up experience
  • Ease of use 
  • Pricing
  • Third-party integrations
  • After-sales support and maintenance

List of tools

Considering the above factors, we have compiled a list of effective observability tools that can offer you the best results:

  • ContainIQ
  • SigNoz
  • Grafana Labs
  • DataDog
  • Dynatrace
  • Splunk
  • Honeycomb
  • LightStep
  • LogicMonitor
  • New Relic

Reporting for Observability

With effective observability tools, you also need a reliable reporting tool that can deliver professional reports from these tools to your stakeholders regularly and on time. If you use Grafana for Observability or Elastic Stack for SIEM, check out Skedler Reports.

Skedler Reports helps Observability and SOC teams automate stakeholder reports in a snap without breaking the budget. You can test-drive Skedler for free and experience its value for your team. Click here to download Skedler Reports.

Is observability the future of systems monitoring?

As the pressure increases to resolve issues faster and understand the underlying cause of the problem, IT and DevOps teams need to go beyond reactive application and system monitoring.

They will need to dig deeper into the tiniest technical details of every application, system, and endpoint to witness the real-time performance and previous anomalies to correct repeat incidents.

A mature observability strategy can give you an insight into previous unknowns and help you more quickly understand why incidents occur. And as you continue on your observability journey and understand what and why things break, you’ll be able to implement increasingly automated and effective performance improvements that impact your company’s bottom line.

Three Pillars of Observability – Traces (Part 3)

Introduction

Observability is the ability to measure a system’s internal state. It helps us understand what’s happening within the system by looking at specific outputs or data points. It is essential, especially for the complex, distributed systems that power many apps and services today. Its benefits include better workflows, improved visibility, faster debugging and fixes, and agility.

Observability depends on three pillars: logs, metrics, and traces. Hence, the term also refers to multiple tools, processes, and technologies that make it possible. We have already touched upon logs and metrics, and this article will cover the last pillar, traces.

Understanding Traces

The word ‘Trace’ refers to discovery by investigation or to finding a source, for example, tracing the origin of a call. Here too, the term refers to something similar. It is the ability to track user requests fully through a complex system. It differs from a log. A log may only tell us something went wrong at a certain point. However, a trace goes back through all the steps to track the exact instance of error or exception. 

It is more granular than a log and a great tool for understanding and sorting out bottlenecks in a distributed system. A trace is made up of ‘spans’ that track user activity through a distributed system (microservices). It does this with the help of a universally unique identifier that travels with the request to keep track of it.

Multiple spans form a trace that can be represented pictorially as a graph. One of the most common frameworks used for Traces is OpenTelemetry, created from OpenCensus and OpenTracing.
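The span model above can be sketched in plain Python (a toy model, not the OpenTelemetry API itself): every span records its service, timing, and the shared trace ID that follows the request end to end. All names here are illustrative.

```python
import time
import uuid

# Toy model of distributed tracing: each span carries the shared
# trace_id plus its own span_id and a pointer to its parent span.
def new_span(trace_id, service, parent_id=None):
    return {
        "trace_id": trace_id,
        "span_id": uuid.uuid4().hex,
        "parent_id": parent_id,
        "service": service,
        "start": time.time(),
    }

trace_id = uuid.uuid4().hex                 # one ID for the whole user request
root = new_span(trace_id, "frontend")       # request enters at the UI
child = new_span(trace_id, "payments",      # downstream microservice call
                 parent_id=root["span_id"])

# Both spans share the trace_id, so a tracing backend can stitch
# them into a single graph of the request's journey.
```

Real frameworks such as OpenTelemetry propagate these IDs automatically across service boundaries via request headers.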

Why do we need to use Traces?

Traces help us correct failures, provided we are using the right tools. Traces are life-savers for admin and DevOps teams responsible for monitoring and maintaining a system. They can follow the path a user request takes to see where and why bottlenecks happened, and decide what corrective actions need to be taken.

While metrics and logs provide adequate information, traces go a step further, giving us the context to better understand and utilize these pillars.

Traces add crucial visibility to that information, making it easier to decipher.

They are better suited for debugging complex systems and answering many essential questions related to their health. For example, to identify which logs are relevant, which metrics are most valuable, which services need to be optimized, and so on.

Software tracing has been around for quite some time. However, distributed tracing is the buzzword in the IT industry these days. It works across complex systems that span over Cloud-based environments that provide microservices.

Therefore, we cannot pick one over the other from the three observability pillars. Traces work well along with metrics and logs, providing much-needed overall efficiency. That is what observability is all about, to keep our systems running smoothly and efficiently.

Limitations

Implementing traces in systems is a complex and tedious task, especially considering most are distributed. It might involve code changes in many places, which can be challenging for DevOps personnel. Every piece of data in a user request must be traced through and through. Implementing it across multiple frameworks, languages, etc., makes the task more challenging.

Also, tracing can be an issue if you have many third-party apps as part of your distributed system. However, proper planning, usage of compatible tools that support custom traces, monitoring the right metrics, etc., can go a long way in overcoming these.

The Skedler advantage

As we have already seen, to make good use of the three pillars of observability, we need to rely on good tools. If we want good visualizations from traces based on the information they have access to, we need a reliable reporting tool. That’s where Skedler comes in.

Skedler works with many components in the DevOps ecosystem, such as the ELK Stack and Grafana, making it easier to achieve observability. The Skedler 5.7.2 release supports distributed tracing, the need of the hour, along with a new panel editor and a unified data model.

Skedler gives an edge by leveraging the best from the underlying tools to provide you with incredible visualized outputs. These reports help you make sense of the multitude of logs, metrics, traces, and more. They give you enriched insights into your system to keep you ahead. Thus, it helps ensure a stable, high-availability system that renders a great customer experience.

Conclusion

In conclusion, observability is a key aspect of maintaining distributed systems. Keeping track of its three pillars – logs, metrics, and traces – is critical. Together, they form the backbone of a healthy system and a crucial monitoring technique for all system stakeholders.

While multiple tools are available for this purpose, the crucial requirement is one that gives you unmissable clarity on the system’s health. A good observability tool should generate, process, and output telemetry data with a sound storage system that enables fast retrieval and long-term retention. Using Skedler can help you deliver automated, periodic, visualized reports to distributed stakeholders, prompting them to take necessary action.

What’s New in Skedler

The release of Skedler in November came with many improvements, such as auto-scaling support for Grafana dashboard layout reports and an updated user interface. In the December release, we added more features, such as autoscaling support for charts in Kibana and the option to configure a proxy URL. We are very proud of these releases, but the team is always looking for new ways to make Skedler better for you. So, before we end the year, we want to update you on the features we released and walk through some of the important ones in this blog.

Halt your reporting schedules for Specific Days

Want to make sure you are not sending your reports on a holiday? We got you covered! You can now choose the days you do not wish to schedule reports with our new Weekday feature.

Weekday feature

Autoscaling support for charts in Kibana

Skedler now supports autoscaling of charts in Kibana. You do not have to worry about your reports being messy or missing out on important information when you add more data to your chart because Skedler will automatically take care of that.

Autoscaling in Kibana

Added an auto-scaling support for Grafana dashboard layout reports 

You can now stop worrying about your graphs and modules getting distorted in your reports as Skedler has added auto-scaling support for generating reports from Grafana Dashboard.

Autoscaling in Grafana

Added a privilege for super admin users to change their email ID

Super Admins can now update their email ID in their profile. You can add a new email ID instead of the one you used when you created your account.

Super Admin User

Generate reports using Grafana dashboard timezone

You can now generate reports in Skedler as per your Grafana time window by selecting “use dashboard time” in Skedler. You do not have to worry about missing or skipping any reports.

Dashboard Timezone

Support for fiscal year time window in Grafana dashboards

Grafana 8.2 has the option of a configurable fiscal year in the time picker. This option enables fiscal quarters as time ranges for business-focused and executive dashboards. Skedler now supports this feature too!

Fiscal Time Year Window

Added support for Outlook SMTP

Skedler now supports Outlook. So you can set up Outlook as your notification channel in your Skedler account.

Outlook SMTP

These are just some of the new features of Skedler. For more details on these features, do check out our release notes.

If you would like to stay updated on the latest release news or know about upcoming features, please feel free to reach out to the team and keep an eye out for our monthly newsletters.

Everything You Need to Know about Grafana

What is Grafana?

According to GrafanaLabs, Grafana is an open-source visualization and analytics software. No matter where your data is stored, it can be queried, visualized, and explored. In plain English, it provides you with tools to turn your time-series database (TSDB) data into beautiful graphs and visualizations.

Why do companies use Grafana?

Companies use Grafana to monitor their infrastructure and log analytics, predominantly to improve their operational efficiency. Dashboards make tracking users and events easy, as they automate the collection, management, and viewing of data. Product leaders, security analysts, and developers use this data to guide their decisions. Studies show companies that rely on database analytics and visualization tools like Grafana are far more profitable than their peers.

Why is Grafana important?

Grafana shows teams and companies what their users really do, not just what they say they do. These are known as revealed behaviors. Users aren’t very adept at predicting their own futures. Having analytics allows tech teams to dig deeper than human-error-prone surveys and monitoring.

Grafana makes that data useful again by integrating all data sources into one single organized view.

What Is a Grafana Dashboard?

A Grafana dashboard supports multiple panels in a single grid, letting you visualize results from multiple data sources simultaneously. It is a powerful open-source analytics and visualization tool consisting of individual panels arranged in a grid. The panels interact with configured data sources, and Grafana supports a huge list of them, including (but not limited to) AWS CloudWatch, Microsoft SQL Server, Prometheus, MySQL, and InfluxDB.

What features does Grafana provide?

The tools that teams actually use to uncover insights vary from organization to organization. The following are the most common (and useful) features they might expect of a data analytics/visualization tool like Grafana.

Common Grafana features:

  • Visualize: Grafana has a plethora of visualization options to help you understand your data; from graphs to histograms, you have it all.
  • Alerts: Grafana lets you define thresholds visually and get notified via Slack, PagerDuty, and more.
  • Unify: You can bring your data together to get better context. Grafana supports dozens of databases natively.
  • Open source: It’s completely open source. You can use Grafana Cloud or easily install it on any platform.
  • Explore logs: Using label filters, you can quickly filter and search through the laundry list of logs.
  • Display dashboards: Visualize data with templated or custom reports.
  • Create and share reports: Create and share reports with your customers and stakeholders. This feature is not available in the open-source version; you can upgrade to avail it.

Check out these 3 best Grafana reporting tools here

How to use Grafana

All data visualization platforms are built around two core functions that help companies answer questions about users and events:

  • Tracking data: Capturing visits, events, and monitoring actions through logs
  • Analyzing data: Visualizing data through dashboards and reports.

With data that’s been tracked, captured, and organized, companies are free to analyze:

  • The actions users are taking on the device, network, etc.
  • The typical behavior flow users take through the network or app
  • Opportunities to reduce SLA churn

and more.

The answers they receive arm them with statistically valid facts upon which to base security and operational decisions. Grafana is also commonly used to monitor synthetic metrics.

What are Synthetic Metrics?

Synthetic metrics are a collection of multi-stage steps required to complete an API call or transaction.

A set of metrics for an API call would contain:

  • Time to connect to API (connect latency)
  • Duration of request (response latency)
  • Size of response payload
  • Result Code of request (200, 204, 400, 500, etc)
  • Success/Failure state of the request
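The metric set above can be sketched as a small wrapper that times a request-issuing callable and records the resulting fields. This is a hedged illustration: `measure`, `fake_api_call`, and the field names are made up for the example, and the stub stands in for a real HTTP call so the sketch is self-contained.

```python
import time

# Wrap any request-issuing callable and record the synthetic metrics
# listed above: latency, payload size, result code, success state.
def measure(call):
    start = time.perf_counter()
    status, payload = call()
    latency_ms = (time.perf_counter() - start) * 1000
    return {
        "response_latency_ms": latency_ms,
        "payload_bytes": len(payload),
        "status_code": status,
        "success": 200 <= status < 300,
    }

# Stand-in for a real API call (returns status code and response body).
def fake_api_call():
    time.sleep(0.01)                 # simulate network round-trip
    return 200, b'{"ok": true}'

metrics = measure(fake_api_call)
```

A scheduler would run such a probe on an interval and ship each `metrics` dict to a time-series database for Grafana to graph.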

From there, teams typically graduate to proving or disproving hypotheses. For instance, a patch management solution provider or user may get the following questions addressed: “When is the best time to patch all the systems? Which are the unpatched systems in the network? What are the most vulnerable devices in the network?” Over time, teams build up a repository of data-backed evidence which allows them to create positive feedback loops. That is, the more data teams get back from Grafana, the more they can iterate their operations.

Getting started with Grafana is easy — Install Grafana Locally > Configure your data source > Create your first dashboard

What Are Some of the Real-World Industry Use Cases of Grafana?

As mentioned by 8bitmen.com, Grafana dashboards are deployed all over the industry be it Gaming, IoT, FinTech or E-Comm.

StackOverflow used the tool to enable their developers & site reliability teams to create tailored dashboards to visualize data & optimize their server performance.

Digital Ocean uses Grafana to share visualization data between their teams & have in place a common visual data-sharing platform.

What about Grafana reporting?

Grafana allows companies to fully understand the Hows and Whats of users/events with respect to their infrastructure or network. It is especially useful for security analytics teams so they can track events and users’ digital footprints to see what they are doing inside their network. Analytics is a critical piece of modern SecOps and DevOps as most apps and websites aren’t designed to run detailed reports or visualizations on themselves. Without proper visualizations, the data they collect is often inconsistent and improperly formatted (known as unstructured data). Grafana makes that data useful again by integrating all data sources into one single organized view.

The data has to be translated into meaningful reports and shared among the stakeholders. What if you could just use a tool to take care of this task? Skedler is a report automation tool that can automate your Grafana reports. It can create, share and distribute customized reports to all of your stakeholders, all without a single line of code.

Don’t you want to read more about Grafana reporting? Well, we have just the blog for you. Click here and check it out.

Kibana Single Sign-On with OpenId Connect and Azure Active Directory

Introduction

Open Distro supports OpenID, so you can seamlessly connect your Elasticsearch cluster with identity providers like Azure AD, Keycloak, Auth0, or Okta. To set up OpenID support, you just need to point Open Distro to the metadata endpoint of your provider, and all relevant configuration information is imported automatically. In this article, we will implement a complete OpenID Connect setup, including Open Distro for Kibana Single Sign-On.

What is OpenID Connect?

OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.

OpenID Connect allows clients of all types, including Web-based, mobile, and JavaScript clients, to request and receive information about authenticated sessions and end-users. The specification suite is extensible, allowing participants to use optional features such as encryption of identity data, the discovery of OpenID Providers, and session management, when it makes sense for them.
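Concretely, the authentication information travels as an ID token, which is a JWT: three base64url-encoded parts (header, payload, signature) joined by dots. The sketch below decodes only the payload to inspect claims such as `preferred_username` and `roles`; it deliberately skips signature verification, so it is for illustration, not for authenticating anyone. The token here is fabricated.

```python
import base64
import json

# An OpenID Connect ID token is a JWT: header.payload.signature,
# each part base64url-encoded. This decodes the payload only and
# does NOT verify the signature -- fine for inspection, not auth.
def decode_claims(id_token):
    payload = id_token.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated, unsigned token for demonstration purposes.
claims = {"preferred_username": "alice@example.com", "roles": ["skedlerrole"]}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = "eyJhbGciOiJub25lIn0." + body + "."

print(decode_claims(token)["preferred_username"])
```

These are the same claim names (`preferred_username`, `roles`) that the Open Distro configuration later maps via `subject_key` and `roles_key`.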

Configuring OpenID Connect in Azure AD

Next, we will set up an OpenID Connect client application in Azure AD which we will later use for Open Distro for Elasticsearch Kibana Single Sign-On. In this post, we will just describe the basic steps.

Adding an OpenID Connect client application

First, we need to register an application that supports OpenID Connect with the Microsoft identity platform. Please refer to the official documentation.

Log in to Azure AD, open the Authentication tab under App registrations, enter the redirect URL https://localhost:5601/auth/openid/login, and save it.

redirect URL – https://localhost:5601/auth/openid/login

Besides the client ID, we also need the client secret in our Open Distro for Elasticsearch Kibana configuration. This is an extra layer of security: an application can only obtain an ID token from the IdP if it provides the client secret. In Azure AD, you can find it under the Certificates & secrets tab of the client settings.

Connecting OpenDistro with Azure AD

To connect Open Distro with Azure AD, we need to set up a new authentication domain of type openid in config.yml. The most important piece of information we need to provide is the metadata endpoint of the newly created OpenID Connect client. This endpoint provides all the configuration settings that Open Distro needs. The URL of this endpoint varies from IdP to IdP. In Azure AD the format is:

OpenID endpoint (IdP) – https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx/v2.0/.well-known/openid-configuration

Since we want to connect Open Distro for Elasticsearch Kibana with Azure AD, we also add a second authentication domain which will use the internal user database. This is required for authenticating the internal Kibana server user. Our config.yml file now looks like:

authc:
  basic_internal_auth_domain:
    http_enabled: true
    transport_enabled: true
    order: 0
    http_authenticator:
      type: "basic"
      challenge: false
    authentication_backend:
      type: "internal"
  openid_auth_domain:
    enabled: true
    order: 1
    http_authenticator:
      type: openid
      challenge: false
      config:
        subject_key: preferred_username
        roles_key: roles
        openid_connect_url: https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/v2.0/.well-known/openid-configuration
    authentication_backend:
      type: noop

Adding users and roles to Azure AD

While an IdP can be used as a federation service to pull in user information from different sources such as LDAP, in this example we use the built-in user management. We have two choices when mapping Azure AD users to Open Distro roles: by username, or by the roles in Azure AD. While mapping users by name is a bit easier to set up, we will use the Azure AD roles here.

With the default configuration, two appRoles are created, skedler_role and guidanz_role, which can be viewed by choosing the App registrations menu item within the Azure Active Directory blade, selecting the Enterprise application in question, and clicking the Manifest button.

A manifest is a JSON object that looks similar to:

{
  "appId": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
  "appRoles": [
    {
      "allowedMemberTypes": [
        "User"
      ],
      "description": "Skedler with administrator access",
      "displayName": "skedler_role",
      "id": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
      "isEnabled": true,
      "value": "skedlerrole"
    },
    {
      "allowedMemberTypes": [
        "User"
      ],
      "description": "guidanz with readonly access",
      "displayName": "guidanz_role",
      "id": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
      "isEnabled": true,
      "value": "guidanzrole"
    }
  ],
  … etc.
}

There are many different ways we might decide to map how users within AAD will be assigned roles within Elasticsearch, for example, using the tenantid claim to map users in different directories to different roles, using the domain part of the name claim, etc.

With the role OpenID connect token attribute created earlier, however, the appRole to which an AAD user is assigned will be sent as the value of the Role Claim within the OpenID connect token, allowing:

  • Arbitrary appRoles to be defined within the manifest
  • Assigning users within the Enterprise application to these roles
  • Using the Role Claim sent within the OpenID Connect token to determine access within Elasticsearch.

For the purposes of this post, let’s define a Superuser role within the appRoles:

{
  "appId": "<guid>",
  "appRoles": [
    {
      "allowedMemberTypes": [
        "User"
      ],
      "displayName": "Superuser",
      "id": "18d14569-c3bd-439b-9a66-3a2aee01d14d",
      "isEnabled": true,
      "description": "Superuser with administrator access",
      "value": "superuser"
    },
    … other roles
  ],
  … etc.
}

Then save the changes to the manifest.

Configuring OpenID Connect in Open Distro for Kibana

The last part is to configure OpenID Connect in Open Distro for Kibana. Configuring the Kibana plugin is straightforward: choose OpenID as the authentication type, and provide the Azure AD metadata URL, the client name, and the client secret. Please refer to the official documentation.

Activate OpenID Connect by adding the following to kibana.yml:

opendistro_security.auth.type: "openid"
opendistro_security.openid.connect_url: "https://login.microsoftonline.com/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx/v2.0/.well-known/openid-configuration"
opendistro_security.openid.client_id: "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx"
opendistro_security.openid.client_secret: "xxxxxxxxxxxxxxxxxxxxxxxxxxx"
opendistro_security.openid.base_redirect_url: "https://localhost:5601"

Done. We can now start Open Distro for Kibana and enjoy Single Sign-On with Azure AD! If we open Kibana, we get redirected to the login page of Azure AD. After providing username and password, Kibana opens, and we’re logged in.

Summary

OpenID Connect is an industry-standard for providing authentication information. Open Distro for Elasticsearch and their Open Distro for Kibana plugin support OpenID Connect out of the box, so you can use any OpenID compliant identity provider to implement Single Sign-On in Kibana. These IdPs include Azure AD, Keycloak, Okta, Auth0, Connect2ID, or Salesforce.

Reference

If you wish to have an automated reporting application, we recommend downloading Skedler Reports.

Email PDF, CSV, Excel Reports from Elastic Stack v7.4 and Grafana v6.4.3

We are excited to announce the general availability of Skedler Reports V4.6. Skedler Reports can now be used to generate PDF, CSV, Excel reports from version 7.4.x of Elastic Stack and version 6.4.3 of Grafana.

Skedler Reports version 4.6 is available now on our official download page. Here’s everything you need to know about the new Skedler Reports v4.6.

What’s New With the Skedler Reports v4.6

  • Support for the latest version of Elastic Stack and Grafana. Skedler Reports v4.6 works with the following versions:
    • Elastic stack v7.4.0
    • Grafana v6.4.3

Be sure to check out the 4.6 release highlights for the Skedler Reports for additional information. Or better yet, give it a try on Skedler Reports with a free 15-day trial. Let us know what you think on Twitter (@InfoSkedler) or in our discussion forums!

Export your Grafana Dashboard to PDF Report in Minutes with Skedler. Fully featured free trial.

An easy way to add reporting to Elasticsearch Kibana 7.x and Grafana 6.x on Kubernetes with Skedler Reports

There is a simple and effective way to add reporting to your Elasticsearch Kibana 7.x (including Open Distro for Elasticsearch) or Grafana 6.x applications deployed to Kubernetes. In this article, you will learn how to deploy Skedler Reports for Elasticsearch Kibana and Grafana applications to Kubernetes with ease.

What is Kubernetes?

For those that haven’t ventured into container orchestration, you’re probably wondering what Kubernetes is. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Kubernetes ("k8s" for short) is a project originally started at, and designed by, Google, and is heavily influenced by Google's large-scale cluster management system, Borg. More simply, k8s gives you a platform for managing and running your applications at scale across multiple physical (or virtual) machines.

Kubernetes offers the following benefits:

  • Workload Scalability
  • High Availability
  • Designed for deployment

Deploying Skedler Reports to Kubernetes

If you haven’t already downloaded Skedler Reports, please download it from www.skedler.com.  Review the documentation to get started.   

Creating a K8s ConfigMap

Kubernetes ConfigMaps allow containerized applications to remain portable without hard-coding configuration. Users and system components can store configuration data in a ConfigMap. For Skedler Reports, a ConfigMap can hold settings such as datastore connection details, the port number, server information, file locations, and the log directory.

If the Skedler Reports defaults are not enough, you can customize reporting.yml through a ConfigMap. Refer to Reporting.yml and ReportEngineOptions Configuration for all available attributes.

1. Create a file called skedler-configmap.yaml in your project directory and paste the following:

skedler-configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: skedler-config
  labels:
    app: skedler
data:
  reporting.yml: |
    ---
    #**************BASIC SETTINGS************
    #port: 3000
    #host: "0.0.0.0"
    #*******SKEDLER SECURITY SETTINGS*******
    #skedler_anonymous_access: true
    #Skedler admin username is skedlerAdmin
    #Allows you to change the Skedler admin password. By default the admin password is set to skedlerAdmin
    #skedler_password: skedlerAdmin
    #*******INDEX SETTINGS***********
    #skedler_index: ".skedler"
    ui_files_location: "/var/lib/skedler/uifiles"
    log_dir: "/var/lib/skedler/log"
    #****************DATASTORE SETTINGS*************
    ####### ELASTICSEARCH DATASTORE SETTINGS ########
    # The Elasticsearch instance to use for all your queries.
    #elasticsearch_url: "http://localhost:9200"
    #skedler_elasticsearch_username: user
    #skedler_elasticsearch_password: pass
    ######## DATABASE DATASTORE SETTINGS ############
    #You can configure the database connection by specifying type, host, name, user and password
    #as separate properties or as one string using the url property.
    #Either "mysql" or "sqlite", it's your choice
    #database_type: "mysql"
    #For mysql database configuration
    #database_hostname: 127.0.0.1
    #database_port: 3306
    #database_name: skedler
    #database_history_name: skedlerHistory
    #database_username: user
    #database_password: pass
    #For sqlite database configuration only, path relative to data_path setting
    #database_path: "/var/lib/skedler/skedler.db"
    #database_history_path: "/var/lib/skedler/skedlerHistory.db"

2. To deploy your ConfigMap, execute the following command:

kubectl create -f skedler-configmap.yaml
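Once created, it is worth confirming that the ConfigMap landed in the cluster and contains your reporting.yml before wiring it into the Deployment. These are standard kubectl commands, run against your own cluster:

```shell
# List the ConfigMap we just created
kubectl get configmap skedler-config

# Inspect its contents to confirm reporting.yml was stored as expected
kubectl describe configmap skedler-config
```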

Creating Deployment and Service

To deploy Skedler Reports, we're going to use a Deployment. A Deployment wraps the functionality of Pods and ReplicaSets to allow you to update your application. Once the Skedler Reports application is deployed, we need a way to expose it to traffic from outside the cluster. To do this, we're going to add a Service inside the skedler-deployment.yaml file. We're going to open up a NodePort directly to our application on port 30000.

  1. Create a file called skedler-deployment.yaml in your project directory and paste the following:

skedler-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: skedler-reports
  labels:
    app: skedler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skedler
  template:
    metadata:
      labels:
        app: skedler
    spec:
      containers:
      - name: skedler
        image: skedler/reports:latest
        imagePullPolicy: Always
        command: ["/opt/skedler/bin/skedler"]
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: skedler-reports-storage
          mountPath: /var/lib/skedler
        - name: skedler-config
          mountPath: /opt/skedler/config/reporting.yml
          subPath: reporting.yml
      volumes:
      - name: skedler-reports-storage
        emptyDir: {}
      - name: skedler-config
        configMap:
          name: skedler-config
---
apiVersion: v1
kind: Service
metadata:
  name: skedler
  labels:
    app: skedler
spec:
  selector:
    app: skedler
  ports:
  - port: 3000
    protocol: TCP
    nodePort: 30000
  type: LoadBalancer

2. For deployment, execute the following command,

kubectl create -f skedler-deployment.yaml

3. To get your deployment with kubectl, execute the following command,

kubectl get deployments

4. We can get the service details by executing the following command,

kubectl get services

Skedler will now be available on port 30000.
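Before opening the UI, you can confirm that the rollout finished and the Service exposes the expected ports (again, run against your own cluster):

```shell
# Wait until the Deployment's pods are up and ready
kubectl rollout status deployment/skedler-reports

# Check pod status, and pull recent logs if something looks off
kubectl get pods -l app=skedler
kubectl logs -l app=skedler --tail=50

# Confirm the Service maps port 3000 to nodePort 30000
kubectl get service skedler
```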

Accessing Skedler

Skedler Reports can be accessed from the following URL, http://<hostIP>:30000

To learn more about creating reports, visit Skedler documentation site.

Summary

This blog was a very quick overview of how to get Skedler Reports for Elasticsearch Kibana 7.x and Grafana 6.x up and running on Kubernetes with the least amount of configuration possible. Kubernetes is an incredibly powerful platform that has many more features than we used today. We hope that this article gave you a head start and saved you time.

Skedler v4.1: Next Generation Reporting for Elasticsearch Kibana 7.0 and Grafana 6.1 is here

We are excited to announce that we have just released version 4.1 of Skedler Reports!  

Download Skedler 4.1 now: https://www.skedler.com/download/

Self Service Reporting Solution for Elasticsearch Kibana 7.0 and Grafana 6.1

We understand that your stakeholders and customers need intuitive and flexible options to save time in receiving the data that matters to them, and we've achieved exactly that with the release of Skedler 4.1. The newly enhanced UI offers a delightful user experience for creating and scheduling reports from your Elasticsearch Kibana 7.0 and Grafana 6.1.


Multi-Tenancy Capabilities

If you are a service provider, you need a simple and automated way to provide different groups of users (i.e. “tenants”) with access to different sets of data. Skedler 4.1’s powerful and secure multi-tenancy capabilities will now allow you to send reports to your customers from your multi-tenant analytics application within minutes.  Supported with Search Guard, Open Distro & X-Pack.

Intuitive and Mobile Ready Reports

Skedler 4.1 now allows you to produce high-resolution HTML reports from Elasticsearch Kibana and Grafana, making it easy and convenient for your end users to access critical data through their mobile devices and email clients. No more cumbersome, large PDF attachments.


The latest release also includes:

  • Support for the latest and greatest version of Elastic Stack and Grafana. Skedler 4.1 supports the following versions:
    • Elastic stack 6.7 and 7.0
    • Grafana 6.1.x
    • Open distro for Elasticsearch 6.7 and 7.0.  

Please continue to send us feedback for what new capabilities you’d like to see in the future by reaching out to us at hello@skedler.com

Skedler Update: Version 3.9 Released

Skedler Update: Version 3.9 Released

Here’s everything you need to know about the new Skedler v3.9. Download the update now to take advantage of its new features for both Skedler Reports and Alerts.

What’s New With Skedler Reports v3.9

  • Support for:
    • ReadOnlyRest Elasticsearch/Kibana Security Plugin.
    • Chromium web browser for Skedler report generation.
    • Report bursting in Grafana reports if the Grafana dashboard is set with Template Variables.
    • Elasticsearch version 6.4.0 and Kibana version 6.4.0.
  • Ability to install Skedler Reports through Debian and RPM packages.
  • Simplified installation levels of Skedler Reports here.
  • Upgraded license module
    • NOTE: License reactivation is required when you upgrade Skedler Reports from an older version to the latest v3.9. Refer to this URL to reactivate the Skedler Reports license key.
    • Deactivation of Skedler license key in UI

What’s New With Skedler Alerts v3.9

  • Support for:
    • Installing Skedler Alerts via Debian and RPM packages.
    • GET method type in Webhook.
    • Elasticsearch 6.4.0.
  • Simplified installation levels of Skedler. Refer to this URL for installation guides.
  • Upgraded license module:
    • NOTE: License reactivation is required when you upgrade Skedler Alerts from an older version to the latest v3.9. Refer to this URL to reactivate the Skedler Alerts license key.
  • Deactivation of Skedler Alerts license key in UI

 

Get Skedler Reports

Download Skedler Reports

Get Skedler Alerts

Download Skedler Alerts

 

Skedler Review: The Report Scheduler Solution for Kibana

Matteo Zuccon is a software developer with a passion for web development (RESTful services, JS frameworks), Elasticsearch, Spark, MongoDB, and agile processes. He runs whiletrue.run. Follow him on Twitter @matteo_zuccon

With Kibana you can create intuitive charts and dashboards. Since August 2016 you have been able to export your dashboards in PDF format thanks to Reporting. With Elastic version 5, Reporting has been integrated into X-Pack for the Premium and Enterprise subscriptions.

Recently I tried Skedler, an easy-to-use report scheduling and distribution application for Kibana that allows you to centrally schedule and distribute Kibana dashboards and saved searches as hourly/daily/weekly/monthly PDF, XLS, or PNG reports to various stakeholders.

Skedler is a standalone app that provides a dashboard where you can manage Kibana reporting tasks (schedules, dashboards, and saved searches). There are currently four price plans (from a free to a premium edition).

In this post I am going to show you how to install Skedler (on Ubuntu) and how to export and schedule a Kibana dashboard.

Install Pre-requisites

sudo apt-get -y update

sudo apt-get install -y libfontconfig1 libxcomposite1 libxdamage1 libcups2 libasound2 libxrandr2 libxfixes3 libnss3 libnss3-dev libxkbcommon-dev libgbm-dev libxshmfence-dev libatk1.0-0 libatk-bridge2.0-0 libgtk-3-0 gcc make

Install .deb package

Download the latest skedler-xg.deb file. If you have previously installed the .deb package, remove it before installing the latest version.

curl -O https://skedler-v5-releases.s3.amazonaws.com/downloads/latest/skedler-xg.deb

sudo dpkg -i skedler-xg.deb

Install .tar.gz package

Download the latest skedler-xg.tar.gz file and extract it.

curl -O https://skedler-v5-releases.s3.amazonaws.com/downloads/latest/skedler-xg.tar.gz

sudo tar xzf skedler-xg.tar.gz

cd skedler-xg

sudo chmod -R 777 *

Configure your options for Skedler v5

Skedler Reports has a number of configuration options that can be defined in its reporting.yml file (located in the skedler folder). In reporting.yml, you can configure options to run Skedler in an air-gapped environment, change the port number, define the hostname, and change the location of the Skedler database and log files.

Read more about the reporting.yml configuration options.
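As an illustration, a minimal reporting.yml override might look like the fragment below. The attribute names are taken from the sample reporting.yml shown earlier in this article; verify them against the configuration reference for your version before relying on them:

```yaml
# reporting.yml - illustrative overrides only
port: 3005                        # the port Skedler listens on (v5 default)
host: "0.0.0.0"                   # bind address / hostname
log_dir: "/var/lib/skedler/log"   # where Skedler writes its log files
```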

 

Start Skedler for .deb

To start Skedler, the command is:

sudo service skedler start

To check status, the command is:

sudo service skedler status

To stop Skedler, the command is:

sudo service skedler stop

Start Skedler for .tar.gz

To run Skedler manually, the command is:

sudo bin/skedler

To run Skedler as a service, the commands are:

sudo ./install_as_service.sh

To start Skedler, the command is:

sudo service skedler start

To check status, the command is:

sudo service skedler status

To stop Skedler, the command is:

sudo service skedler stop

Access Skedler Reports

The default URL for accessing Skedler Reports v5 is:

http://localhost:3005/

If you have made configuration changes in reporting.yml, then the Skedler URL has the following format:

http://<hostname or your domainurl>:3005

or

http://<hostname or your domain url>:<port number>

 

Login to Skedler Reports

By default, you will see the Create an account UI.  Enter your email to create an administrator account in Skedler Reports. Click on Continue.

 

Note: If you have configured an email address and password in reporting.yml, then you can skip the create account step and proceed to Login.

 

An account will be created and you will be redirected to the Login page.

 

Sign in using the following credentials:

Username: <your email address> (or the email address you configured in reporting.yml)
Password: admin (or the password you configured in reporting.yml)

 

Click Sign in.

 

You will see the Reports Dashboard after logging in to your Skedler account.

In this post, I demonstrated how to install and configure Skedler and how to create a simple schedule for our Kibana dashboard. My overall impression of Skedler is that it is a powerful application to use side-by-side with Kibana that allows you to deliver reports directly to your stakeholders.

These are the main benefits that Skedler offers:

  • It’s easy to install
  • Linux, Windows, and macOS support (it runs on a Node.js server)
  • Reports are generated locally (your data isn’t sent to the cloud or Skedler servers)
  • Competitive price plans
  • Supports Kibana and Grafana.
  • Automatically discovers your existing Kibana Dashboards and Saved Searches (so you can easily use Skedler in any environment with no new stack installation needed)
  • It lets you centrally schedule and manage who gets which reports and when they get them
  • Allows for hourly, weekly, monthly, and yearly schedules
  • Generates XLS and PNG reports in addition to PDF, unlike Elastic Reporting, which supports only PDF

I strongly recommend that you try Skedler because it can help you automatically deliver reports to your stakeholders, and it integrates with your ELK environment without any modification to your stack.

Click here for the free trial option.

You can find more resources about Skedler here:

Copyright © 2023 Guidanz Inc