How to Extract Business Insights from Audio Using AWS Transcribe, AWS Comprehend and Elasticsearch – Part 2 of 2

In the previous post, we presented a system architecture to convert audio and voice into written text with AWS Transcribe, extract useful information for quick understanding of content with AWS Comprehend, index this information in Elasticsearch 6.2 for fast search and visualize the data with Kibana 6.2.

In this post, we are going to see how to implement the previously described architecture.
The main steps performed in the process are:

  1. Configure S3 Event Notification
  2. Consume messages from Amazon SQS queue
  3. Convert the recording to text with AWS Transcribe
  4. Entities/key phrases/sentiment detection using AWS Comprehend
  5. Index to Elasticsearch 6.2
  6. Search in Elasticsearch by entities/sentiment/key phrases/customer
  7. Visualize, report and monitor with Kibana dashboards
  8. Use Skedler and Alerts for reporting, monitoring and alerting

1. Configure S3 Event Notification

When a new recording has been uploaded to the S3 bucket, a message will be sent to an Amazon SQS queue.

You can read more information on how to configure the S3 Bucket and read the queue programmatically here: Configuring Amazon S3 Event Notifications.
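If you prefer to configure the notification programmatically rather than through the console, a minimal Boto3 sketch could look like the following (the bucket name, queue ARN, and .mp3 suffix filter are placeholder assumptions, and the SQS queue policy must already allow S3 to send messages to the queue):

import boto3

s3_client = boto3.client('s3')

# Send a message to the queue whenever an .mp3 object is created in the bucket
s3_client.put_bucket_notification_configuration(
    Bucket='your_bucket',
    NotificationConfiguration={
        'QueueConfigurations': [
            {
                'QueueArn': 'arn:aws:sqs:us-east-1:123456789012:your_queue',
                'Events': ['s3:ObjectCreated:*'],
                'Filter': {
                    'Key': {
                        'FilterRules': [
                            {'Name': 'suffix', 'Value': '.mp3'}
                        ]
                    }
                }
            }
        ]
    }
)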

This is what a message from S3 looks like. The information we need is the object key and the bucket name.

{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "requestParameters": { "sourceIPAddress": "xxx.xxx.xx.xx" },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "ev",
        "bucket": {
          "name": "your_bucket",
          "arn": "arn:aws:s3:::your_bucket"
        },
        "object": {
          "key": "my_new_recording.mp3",
          "size": 567
        }
      }
    }
  ]
}

2. Consume messages from Amazon SQS queue

Now that the S3 bucket has been configured, a notification will be sent to the SQS queue when a recording is uploaded to the bucket. We are going to build a consumer that will perform the following operations:

  • Start a new AWS Transcribe transcription job
  • Check the status of the job
  • When the job is done, perform text analysis with AWS Comprehend
  • Index the results to Elasticsearch

With the following code you can read messages from the SQS queue, fetch the bucket and key (used in S3) of the uploaded document, and use them to invoke AWS Transcribe for the speech-to-text task:

import json
import time

import boto3

AWS_ACCESS_KEY = 'yourAWS_ACCESS_KEY'
AWS_SECRET_ACCESS_KEY = 'yourAWS_SECRET_ACCESS_KEY'
AWS_REGION = 'yourAWS_REGION'
SQS_QUEUE_NAME = 'SQS_QUEUE_NAME'

sqs_resource_connection = boto3.resource(
    'sqs',
    aws_access_key_id=AWS_ACCESS_KEY,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
    region_name=AWS_REGION
)

queue = sqs_resource_connection.get_queue_by_name(QueueName=SQS_QUEUE_NAME)

while True:
    messages = queue.receive_messages(MaxNumberOfMessages=1, WaitTimeSeconds=5)

    for message in messages:
        body = json.loads(message.body)

        # Extract the object key and bucket name from the S3 notification
        key_name = body['Records'][0]['s3']['object']['key']
        bucket_name = body['Records'][0]['s3']['bucket']['name']
        object_url = f'https://s3.amazonaws.com/{bucket_name}/{key_name}'

        # Start the AWS Transcribe transcription job (section 3)
        # Check the job status and fetch the transcript (section 3a)
        # Run the text analysis with AWS Comprehend (section 4)
        # Index the results to Elasticsearch (section 5)

        message.delete()

    time.sleep(10)

3. AWS Transcribe – Start Transcription Job

Once we have consumed an S3 message and have the URL of the newly uploaded document, we can start a new (asynchronous) transcription job to perform the speech-to-text task.

We are going to use the start_transcription_job method.

It takes a job name, the S3 URL, and the media format as parameters.

To use the AWS Transcribe API, be sure that your AWS Python SDK (Boto3) is up to date:

pip install boto3 --upgrade

import boto3

client_transcribe = boto3.client(
    'transcribe',
    region_name='us-east-1'  # service still in preview
)

def start_transcribe_job(job_name, media_file_uri):
    response = client_transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        LanguageCode='en-US',  # TODO: use a parameter when more languages become available
        MediaFormat='mp3',  # change this to match your recordings
        Media={
            'MediaFileUri': media_file_uri
        }
    )
    return response['TranscriptionJob']['TranscriptionJobName']

Read more details here: Python Boto3 AWS Transcribe.

3a. AWS Transcribe – Check Job Status

Due to the asynchronous nature of the transcription job (it could take a while depending on the length and complexity of your recordings), we need to check the job status.

Once the status is “COMPLETED” we can retrieve the result of the job (the text converted from the recording).

import json
import time
import urllib.request

def get_transcribe_job_response(job_name):
    job_status = 'IN_PROGRESS'

    while job_status == 'IN_PROGRESS':
        job = client_transcribe.get_transcription_job(
            TranscriptionJobName=job_name
        )
        job_status = job['TranscriptionJob']['TranscriptionJobStatus']
        time.sleep(5)

    if job_status == 'FAILED':
        raise Exception(f'Job {job_name} failed')

    # Download the transcript JSON and return the plain transcript text
    job_result = job['TranscriptionJob']['Transcript']['TranscriptFileUri']
    with urllib.request.urlopen(job_result) as url:
        return json.loads(url.read().decode())['results']['transcripts'][0]['transcript']

Here’s how the output looks:

{
    "jobName": "myFirstJob",
    "accountId": "1111111",
    "results": {
        "transcripts": [{
            "transcript": "welcome back"
        }],
        "items": [{
            "start_time": "0.990",
            "end_time": "1.300",
            "alternatives": [{
                "confidence": "0.9999",
                "content": "welcome"
            }],
            "type": "pronunciation"
        }, {
            "start_time": "1.300",
            "end_time": "1.440",
            "alternatives": [{
                "confidence": "1.0000",
                "content": "back"
            }],
            "type": "pronunciation"
        }]
    }
}
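The items array also carries word-level timing and confidence scores. As a quick quality check before running any text analysis, you could flag the words that were transcribed with low confidence; here is a minimal sketch (the 0.9 threshold is an arbitrary, illustrative choice):

def low_confidence_words(transcribe_result, threshold=0.9):
    flagged = []
    for item in transcribe_result['results']['items']:
        if item['type'] != 'pronunciation':
            continue  # skip punctuation items, which carry no timing data
        best = item['alternatives'][0]
        if float(best['confidence']) < threshold:
            flagged.append((best['content'], best['confidence']))
    return flagged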

4. AWS Comprehend – Text Analysis

We have converted our recording to text. Now, we can run the text analysis using AWS Comprehend. The analysis will extract the following elements from the text:

  • Sentiment
  • Entities
  • Key phrases

import sys

import boto3

client_comprehend = boto3.client(
    'comprehend',
    region_name='yourRegion'
)

def comprehend_analysis(plain_text):
    # Max byte size supported by AWS Comprehend:
    # https://boto3.readthedocs.io/en/latest/reference/services/comprehend.html#Comprehend.Client.detect_dominant_language
    # https://boto3.readthedocs.io/en/latest/reference/services/comprehend.html#Comprehend.Client.detect_entities
    while sys.getsizeof(plain_text) > 5000:
        plain_text = plain_text[:-1]

    dominant_language_response = client_comprehend.detect_dominant_language(
        Text=plain_text
    )

    # Pick the language with the highest confidence score
    dominant_language = sorted(
        dominant_language_response['Languages'],
        key=lambda k: k['Score'],
        reverse=True
    )[0]['LanguageCode']

    # Entity/key phrase/sentiment detection currently supports English and Spanish
    if dominant_language not in ['en', 'es']:
        dominant_language = 'en'

    response_entities = client_comprehend.detect_entities(
        Text=plain_text,
        LanguageCode=dominant_language
    )

    response_key_phrases = client_comprehend.detect_key_phrases(
        Text=plain_text,
        LanguageCode=dominant_language
    )

    response_sentiment = client_comprehend.detect_sentiment(
        Text=plain_text,
        LanguageCode=dominant_language
    )

    entities = list(set(x['Type'] for x in response_entities['Entities']))
    key_phrases = list(set(x['Text'] for x in response_key_phrases['KeyPhrases']))
    sentiment = response_sentiment['Sentiment']

    return entities, key_phrases, sentiment

Read more details here: Python Boto3 AWS Comprehend.

5. Index to Elasticsearch

Given a recording, we now have a set of elements that characterize it, and we want to index this information into Elasticsearch 6.2. I created a new index called audioarchive and a new type called recording.

The recording type we are going to create will have the following properties:

  • customer id: the id of the customer who submitted the recording (a substring of the S3 key; see the sketch after this list)
  • entities: the list of entities detected by AWS Comprehend
  • key phrases: the list of key phrases detected by AWS Comprehend
  • sentiment: the sentiment of the document detected by AWS Comprehend
  • s3Location: link to the document in the S3 bucket
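The post does not fix an exact key naming convention, so here is a hypothetical helper that assumes keys shaped like customer42/my_new_recording.mp3; adapt the parsing to whatever convention your bucket actually uses:

def customer_id_from_key(key_name):
    # Assumption: the customer id is the prefix before the first '/' in the key
    return key_name.split('/')[0]

customer_id = customer_id_from_key('customer42/my_new_recording.mp3')  # -> 'customer42'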

Create the new index:

curl -XPUT 'esHost:9200/audioarchive/' -H 'Content-Type: application/json' -d '{
    "settings" : {
        "index" : {
            "number_of_shards" : 1,
            "number_of_replicas" : 0
        }
    }
}'

Add the new mapping:

curl -X PUT "esHost:9200/audioarchive/recording/_mapping" -H 'Content-Type: application/json' -d '{
    "recording" : {
        "properties" : {
            "customerId" : { "type" : "keyword" },
            "entities" : { "type" : "keyword" },
            "keyPhrases" : { "type" : "keyword" },
            "sentiment" : { "type" : "keyword" },
            "s3Location" : { "type" : "text" }
        }
    }
}'

We can now index the new document:

from elasticsearch import Elasticsearch

INDEX_NAME = 'audioarchive'
TYPE_NAME = 'recording'

es_client = Elasticsearch('esHost')

def create_es_document(customer_id, entities, sentiment, key_phrases, s3_location):
    return {
        'customerId': customer_id,
        'entities': entities,
        'sentiment': sentiment,
        'keyPhrases': key_phrases,
        's3Location': s3_location
    }

def index_to_es(document, index_name, doc_type):
    es_client.index(index=index_name, doc_type=doc_type, body=document)

doc = create_es_document(1, ['entity1', 'entity2'], 'positive', ['k1', 'k2'],
                         'https://your_bucket.s3.amazonaws.com/your_object_key')

index_to_es(doc, INDEX_NAME, TYPE_NAME)

6. Search in Elasticsearch by entities, sentiment, key phrases or customer

Now that we have indexed the data in Elasticsearch, we can run some queries to extract business insights from the recordings.

Examples:

Number of positive recordings that contain the _feedback_ key phrase, by customer.

POST audioarchive/recording/_search?size=0
{
  "aggs": {
    "by_customer": {
      "terms": {
        "field": "customerId"
      }
    }
  },
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "sentiment": "Positive"
          }
        },
        {
          "match": {
            "keyPhrases": "feedback"
          }
        }
      ]
    }
  }
}

Number of recordings by sentiment.

POST audioarchive/recording/_search?size=0
{
  "aggs": {
    "by_sentiment": {
      "terms": {
        "field": "sentiment"
      }
    }
  }
}

What are the main key phrases in the negative recordings?

POST audioarchive/recording/_search?size=0
{
  "aggs": {
    "by_key_phrase": {
      "terms": {
        "field": "keyPhrases"
      }
    }
  },
  "query": {
    "bool": {
      "should": [
        {
          "match": {
            "sentiment": "Negative"
          }
        },
        {
          "match": {
            "sentiment": "Mixed"
          }
        }
      ]
    }
  }
}
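The same queries can also be run programmatically. For example, here is a sketch of the recordings-by-sentiment aggregation using the official Python client (assuming the 6.x elasticsearch package and the esHost placeholder from earlier):

from elasticsearch import Elasticsearch

es_client = Elasticsearch('esHost')

response = es_client.search(
    index='audioarchive',
    doc_type='recording',
    body={
        'size': 0,
        'aggs': {
            'by_sentiment': {
                'terms': {'field': 'sentiment'}
            }
        }
    }
)

# Print the number of recordings in each sentiment bucket
for bucket in response['aggregations']['by_sentiment']['buckets']:
    print(bucket['key'], bucket['doc_count'])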

7. Visualize, Report, and Monitor with Kibana dashboards and search

With Kibana you can create a set of visualizations/dashboards to search for recordings by customer or entities, and to monitor index metrics (like the number of positive recordings, the number of recordings by customer, and the most common entities/key phrases in the recordings).

Examples of Kibana dashboards:

Percentage of documents by sentiment, percentage of positive feedback and key phrases:

[Kibana dashboard screenshot]

Number of recordings by customers, and sentiment by customers:

[Kibana dashboard screenshot]

Most common entities and heat map sentiment-entities:

[Kibana report screenshot]

8. Use Skedler Reports and Alerts to easily monitor data

Using Skedler, an easy-to-use report scheduling and distribution application for Elasticsearch-Kibana-Grafana, you can centrally schedule and distribute custom reports from Kibana Dashboards and Saved Searches as hourly/daily/weekly/monthly PDF, XLS or PNG reports to various stakeholders. If you want to read more about it, see the Skedler Overview.

[video_embed video=”APEOKhsgIbo” parameters=”” mp4=”” ogv=”” placeholder=”” width=”700″ height=”400″]

If you want to get notified when something happens in your index, for example, when a certain entity is detected or the number of negative recordings per customer reaches a certain value, you can use Skedler Alerts. It simplifies how you create and manage alert rules for Elasticsearch, and it provides a flexible approach to notifications (it supports multiple channels, from email to Slack and webhooks).

Conclusion

In this post we have seen how to use Elasticsearch as the search engine for customer recordings. We used the speech-to-text power of AWS Transcribe to convert our recordings to text and then AWS Comprehend to extract semantic information from the text. We then used Kibana to aggregate the data and create useful visualizations and dashboards, and finally scheduled and distributed custom reports from the Kibana dashboards using Skedler Reports.

Environment configurations:

  • Elasticsearch and Kibana 6.2
  • Python 3.6.3 and AWS SDK Boto3 1.6.3
  • Ubuntu 16.04.3 LTS
  • Skedler Reports & Alerts

Extract business insights from audio using AWS Transcribe, AWS Comprehend and Elasticsearch – Part 1

Many businesses struggle to gain actionable insights from customer recordings because they are locked in voice and audio files that can’t be analyzed. They have a gold mine of potential information from product feedback, customer service recordings and more, but it’s seemingly locked in a black box.

Until recently, transcribing audio files to text has been time-consuming or inaccurate.
Speech to text is the process of converting speech input into digital text, based on speech recognition. The best solutions were either not accurate enough, too expensive to scale or didn’t play well with legacy analysis tools. With Amazon’s introduction of AWS Transcribe, that has changed.

In this two-part blog post, we are going to present a system architecture to convert audio and voice into written text with AWS Transcribe, extract useful information for quick understanding of content with AWS Comprehend, index this information in Elasticsearch 6.2 for fast search and visualize the data with Kibana 6.2.  In Part I, you can learn about the key components, architecture, and common use cases.  In Part II, you can learn how to implement this architecture.

We are going to analyze some customer recordings (complaints, product feedbacks, customer support) to extract useful information and answer the following questions:

  • How many positive recordings do I have?
  • How many customers are complaining (negative feedback) about my products?
  • What is the sentiment about my product?
  • Which entities/key phrases are the most common in my recordings?

The components that we are going to use are the following:

  • AWS S3 bucket
  • AWS Transcribe
  • AWS Comprehend
  • Elasticsearch 6.2
  • Kibana 6.2
  • Skedler Reports and Alerts

System architecture:

This architecture is useful when you want to extract insights from a set of audio/voice recordings. You will be able to convert your recordings to text, extract semantic details from the text, perform fast searches/aggregations on the data, and visualize and report on it.

Examples of common applications are:

  • transcription of customer service calls
  • generation of subtitles on audio and video content
  • conversion of audio files (for example, podcasts) to text
  • search for keywords or inappropriate words within an audio file

 

AWS Transcribe

At the re:Invent 2017 conference, Amazon Web Services presented Amazon Transcribe, a new machine learning (natural language processing) service.

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech to text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.

Instead of AWS Transcribe, you can use similar services to perform speech-to-text analysis, such as Azure Bing Speech API or Google Cloud Speech API.

> The service is still in preview, watch the launch video here: AWS re:Invent 2017: Introducing Amazon Transcribe.

> You can read more about it here: Amazon Transcribe – Accurate Speech To Text At Scale.

 

AWS Comprehend

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. Amazon Comprehend identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is, and automatically organizes a collection of text files by topic. – AWS Service Page

AWS Comprehend and Elasticsearch

It analyzes text and tells you what it finds, starting with the language, from Afrikaans to Yoruba, with 98 more in between. It can identify different types of entities (people, places, brands, products, and so forth), key phrases, sentiment (positive, negative, mixed, or neutral), and extract key phrases, all from a text in English or Spanish. Finally, Comprehend’s topic modeling service extracts topics from large sets of documents for analysis or topic-based grouping. – Jeff Barr – Amazon Comprehend – Continuously Trained Natural Language Processing.

Instead of AWS Comprehend, you can use similar services to perform Natural Language Processing, such as Google Cloud Platform – Natural Language API or Microsoft Azure – Text Analytics API.
I prefer to use AWS Comprehend because the service constantly learns and improves from a variety of information sources, including Amazon.com product descriptions and consumer reviews – one of the largest natural language data sets in the world. This means it will keep pace with the evolution of language and it is fully integrated with AWS S3 and AWS Glue (so you can load documents and texts from various AWS data stores such as Amazon Redshift, Amazon RDS, Amazon DynamoDB, etc.).

Once you have a text file of the audio recording, you enter it into Amazon Comprehend for analysis of the sentiment, tone and other insights.

> Here you can find an AWS Comprehend use case: How to Combine Text Analytics and Search using AWS Comprehend and Elasticsearch 6.0.

 

Conclusion

In this post we have seen a system architecture that performs the following:

  • Speech to text task – AWS Transcribe
  • Text analysis – AWS Comprehend
  • Index and fast search – Elasticsearch
  • Dashboard visualization – Kibana
  • Automatic Reporting and Alerting – Skedler Reports and Alerts

Amazon Transcribe and Comprehend can be powerful tools in helping you unlock the potential insights from voice and video recordings that were previously too costly to access. Having these insights makes it easier to understand trends in issues and consumer behavior, brand and product sentiment, Net Promoter Score, as well as product ideas and suggestions, and more.

In the next post (Part 2 of 2), you can see how to implement the described architecture.

Introducing Skedler Custom Reporting (Formerly Report Designer) for Elasticsearch Kibana (ELK)

Give Me Some Real Reports!

When it comes to reporting for ELK, users are frustrated with the expensive packs and do-it-yourself modules. Reports from these approaches are rudimentary and nothing more than basic screen grabs of Kibana dashboards. They lack customization, charts get stretched, and visuals are laid out randomly based on the Kibana dashboard. And if you need to generate large reports, you might as well forget about it, since none of these solutions scale! Users are craving reports that deliver clear insights from their ELK-based log/search/SIEM analytics applications right in their inbox.

Create Intuitive, Custom Reports with Data Stories

Today, we are pleased to announce the Skedler Reports Enterprise Edition (Formerly Designer Edition) that offers organizations a new way to unleash the value of Elasticsearch (ELK) data. This innovative solution makes it easy to create custom reports that present the data in an intuitive fashion to the users. With just a few clicks, you can design report templates, create data stories, and automate the distribution of reports that enable users to make quick decisions.

See Skedler in Action

[video_embed video=”9kb0aU0cKmU” parameters=”” mp4=”” ogv=”” placeholder=”” width=”700″ height=”400″]

See a Sample Report

Custom Elasticsearch Kibana Report | Skedler Enterprise Edition (Formerly Designer Edition) from Skedler

Add Custom Reporting to Skedler Premier Edition

Skedler Reports Enterprise Edition (Formerly Designer Edition) is available as a seamless add-on module to the Premier Edition. It is designed for organizations that strive to deliver insightful data stories to users and empower them to make quick decisions. Skedler Reports Enterprise Edition (Formerly Designer Edition) is licensed separately and can be activated instantly with the appropriate license key.

Get a Demo of the Real Reporting for ELK Stack

The Skedler Reports Enterprise Edition (Formerly Designer Edition) Preview is available starting today. Schedule a demo to see the powerful custom reporting capabilities that Skedler offers. Explore how you can deliver actionable custom ELK reports to users with Skedler.

GET A DEMO

 

Skedler v2.8.1: Add Reporting to Elasticsearch Kibana 5.4

We are excited to announce the availability of Skedler v2.8.1.  The latest update to the Skedler platform includes support for adding PDF, XLS, CSV Reports to Elasticsearch 5.4 and Kibana 5.4.   You can learn more about the Release here.

Try Skedler Free

Download Skedler v2.8.1 and try it  for free.  Let us know your feedback regarding Skedler and how we can help you meet your reporting requirements.

Skedler Review: The Report Scheduler Solution for Kibana

Matteo Zuccon is a software developer with a passion for web development (RESTful services, JS frameworks), Elasticsearch, Spark, MongoDB, and agile processes. He runs whiletrue.run. Follow him on Twitter @matteo_zuccon

With Kibana you can create intuitive charts and dashboards. Since August 2016, you have been able to export your dashboards in PDF format thanks to Reporting. With Elastic version 5, Reporting has been integrated into X-Pack for the Premium and Enterprise subscriptions.

Recently I tried Skedler, an easy-to-use report scheduling and distribution application for Kibana that allows you to centrally schedule and distribute Kibana Dashboards and Saved Searches as hourly/daily/weekly/monthly PDF, XLS or PNG reports to various stakeholders.

Skedler is a standalone app that allows you to utilize a new dashboard where you can manage Kibana reporting tasks (schedule, dashboards and saved search). Right now there are four different price plans (from free to premium edition).

In this post I am going to show you how to install Skedler (on Ubuntu) and how to export/schedule a Kibana dashboard.

Install Pre-requisites

sudo apt-get -y update

sudo apt-get install -y libfontconfig1 libxcomposite1 libxdamage1 libcups2 libasound2 libxrandr2 libxfixes3 libnss3 libnss3-dev libxkbcommon-dev libgbm-dev libxshmfence-dev libatk1.0-0 libatk-bridge2.0-0 libgtk-3-0 gcc make

Install .deb package

Download the latest skedler-xg.deb file. If you have previously installed the .deb package, remove it before installing the latest version.

curl -O https://skedler-v5-releases.s3.amazonaws.com/downloads/latest/skedler-xg.deb

sudo dpkg -i skedler-xg.deb

Install .tar.gz package

Download the latest skedler-xg.tar.gz file and extract it.

curl -O https://skedler-v5-releases.s3.amazonaws.com/downloads/latest/skedler-xg.tar.gz

sudo tar xzf skedler-xg.tar.gz

cd skedler-xg

sudo chmod -R 777 *

Configure your options for Skedler v5

Skedler Reports has a number of configuration options that can be defined in its reporting.yml file (located in the skedler folder). In the reporting.yml file, you can configure options to run Skedler in an air-gapped environment, change the port number, define the hostname, and change the location of the Skedler database and log files.

Read more about the reporting.yml configuration options.

 

Start Skedler for .deb

To start Skedler, the command is:

sudo service skedler start

To check status, the command is:

sudo service skedler status

To stop Skedler, the command is:

sudo service skedler stop

Start Skedler for .tar.gz

To run Skedler manually, the command is:

sudo bin/skedler

To run Skedler as a service, the commands are:

sudo ./install_as_service.sh

To start Skedler, the command is:

sudo service skedler start

To check status, the command is:

sudo service skedler status

To stop Skedler, the command is:

sudo service skedler stop

Access Skedler Reports

The default URL for accessing Skedler Reports v5 is:

http://localhost:3005/

If you had made configuration changes in the reporting.yml, then the Skedler URL is of the following format:

http://<hostname or your domain URL>:3005

or

http://<hostname or your domain URL>:<port number>

 

Login to Skedler Reports

By default, you will see the Create an account UI.  Enter your email to create an administrator account in Skedler Reports. Click on Continue.

 

Note: If you have configured an email address and password in reporting.yml, then you can skip the create account step and proceed to Login.

 

An account will be created and you will be redirected to the Login page.

 

Sign in using the following credentials:

Username: <your email address> (or the email address you configured in reporting.yml)

Password: admin (or the password you configured in reporting.yml)

Click Sign in.

You will see the Reports Dashboard after logging in to the Skedler account.

In this post, I demonstrated how to install and configure Skedler and how to create a simple schedule for our Kibana dashboard. My overall impression of Skedler is that it is a powerful application to use side-by-side with Kibana that allows you to deliver reports directly to your stakeholders.

These are the main benefits that Skedler offers:

  • It’s easy to install
  • Linux, Windows and Mac OS support (it runs on a Node.js server)
  • Reports are generated locally (your data isn’t sent to the cloud or to Skedler servers)
  • Competitive price plans
  • Supports Kibana and Grafana
  • Automatically discovers your existing Kibana Dashboards and Saved Searches (so you can easily use Skedler in any environment, with no new stack installation needed)
  • Lets you centrally schedule and manage who gets which reports and when they get them
  • Allows for hourly, weekly, monthly, and yearly schedules
  • Generates XLS and PNG reports in addition to PDF, unlike Elastic Reporting, which only supports PDF

I strongly recommend that you try Skedler because it can help you automatically deliver reports to your stakeholders, and it integrates with your ELK environment without any modification to your stack.

Click here for the free trial option.

You can find more resources about Skedler here:

Automated Kibana Reporting: A Marketer’s Guide

Skedler’s enhanced Kibana reporting solution can do more than just change the way you manage information: it effectively transforms your Elasticsearch-Logstash-Kibana (ELK) platform into a business intelligence platform. Much of this is helped by Kibana’s dashboard, which applications such as Skedler improve on even further by incorporating actionable reporting, scheduling, and more. Marketing departments are particularly seeing the advantages of automating Kibana tasks through Skedler. They’re able to utilize ELK to gather intelligence, which is then easily exported as convenient, customer-friendly reports.

Two of our current case studies have demonstrated Kibana’s automated processes to be highly useful to their marketers.

Kane LPI, a third-party administration service specializing in investment and compliance, sought out Skedler to help them produce clear reports from thousands of lines of log entries from multiple systems. Kane LPI’s internal marketers needed to receive daily log reports in order to view daily analytics, which could then be reviewed and sent to their clients. Skedler’s Kibana reporting solution delivered clear, scheduled reports that served as a key monitoring tool while satisfying auditing requirements in the process.

Cybersecurity company Dynetics’ marketing department also benefitted from using the automated processes within Skedler. Their analysts simply didn’t have enough time to monitor all the dashboards within the NetAlert ELK stack application. However, after discovering Skedler, they saw that its automated processes with Kibana allowed the department to quickly send security intelligence reports to customers. Without Skedler, this kind of reporting would have taken weeks to complete manually, and their customers would have had to wait for answers.

Ready to start utilizing Skedler’s reporting solution for Kibana and engage with your customers? Try Skedler for free.

Skedler Version 2.6 is Released!

We are pleased to announce the availability of Skedler Version 2.6. It includes bug fixes and the following new features:

  • Email reports using Amazon SES, which is now supported as an email provider in Skedler.
  • Skedler v2.6 now supports the following new versions of Elastic stack on both Linux and Windows:
    1. Elasticsearch version 1.7 to 5.1.1
    2. Kibana version from 4.1.x to 5.1.1
    3. Shield/Security supported from 1.0 to 2.4.1
    4. Kibana Shield plugin supported up to 2.2.1

Try the latest version of Skedler!

Skedler Plugin for Kibana is Coming Soon

By popular demand, we are reintroducing the Skedler Plugin for Kibana. It will include features of Skedler 2.6 and will work with Kibana 5.1. The plugin will be available in Standard, Advanced and Premier Editions and will be a separately licensed module.

Interested in being the first to use the Skedler Plugin? Email us at support@skedler.com and we will get in touch with you as soon as it becomes available!

Tip of the Month

How can I migrate my Skedler license to another server when I have already deployed Skedler on another server? This is a common question from our customers and is easy to do. Check out this how-to article on migrating your Skedler licenses.

Ready to start saving time by creating, scheduling and distributing Kibana reports automatically? Try Skedler free for 15 days.

The Top 3 ELK Stack Tools Every Business Intelligence Analyst Needs in 2017

A version of this post, updated for 2018, can be found here: The Top 5 ELK Stack+ Tools Every Business Intelligence Analyst Needs.

The world’s most popular log management platform, ELK Stack, has ultimately reflected its nifty, modernized capabilities with this recent statistic: each month, it is downloaded 500,000 times. So what makes ELK Stack and ELK Stack tools just so attractive? In many cases, it fulfills what’s really been needed in the log analytics space within SaaS: IT companies are favoring open source products more and more. Elasticsearch, based on the Lucene search engine, is a NoSQL database, while Logstash works as a log pipeline tool: accepting inputs from various sources, executing transformations, then exporting data to designated targets. The stack also carries enhanced customizability, which is a key preference nowadays, since program tweaking is more lucrative and stimulating for many engineers. This is coupled with ELK’s increased interoperability, which is now a practically indispensable feature, since most businesses don’t want to be limited by proprietary data formats.

ELK Stack tools that build on those impressive elements will elevate data analysis just that little bit further, depending on what you want to do with it, of course.

Logstash

Elite tool Logstash is well-known for its intake, processing and output capabilities. It’s mainly intended for organizing and searching for log files, but works effectively for cleaning and streaming big data from all sorts of sources into a comprehensive database, including metrics, web applications, data stores, and various AWS services. Logstash also carries impressive input plugins such as cloudwatch and graphite, allowing you to sculpt your intelligence to be as easy to work with as possible. And, as data travels from source to store, those filters identify named fields to accelerate your analysis; deciphering geo coordinates from IP addresses, and anonymizing PII data. It even derives structure from seemingly unstructured data.

Kibana 5

Analysis program Kibana 5.0 boasts a wealth of new refurbishments for pioneering intelligence surveying. Apart from amplified functionalities such as increased rendering, less CPU usage, and elevated data and index handling, Kibana 5.0 has enriched visualisations with interactive platforms, leveraging the aggregation capabilities of Elasticsearch. Space and time auditing are a crucial part of Kibana’s makeup: the map service empowers you to foresee geospatial data with custom location data on a schematic of your selection, whilst the time series allows you to perform advanced generation analysis by describing queries and transformations.

Skedler

ELK Stack reporting tool Skedler combines all the automated processes you’d never dream you could have within one unit. Fundamentally, it ups your speed-to-market auditing with cutting-edge scheduling, which Kibana alone does not offer, serving as a single system for both interactive analysis and reporting. Skedler methodically picks up your existing dashboards in the server for cataloging, whilst also enabling you to create filters, refine specific recipients, and choose file folders to use whilst scheduling. Additionally, Skedler automatically applies prerequisite filters when generating reports, preserving them as defined, and encompasses high-resolution PDF and PNG options to incorporate in reporting, which sequentially eliminates the need for redundant reporting systems.

There you have it, the top ELK stack tools no business intelligence analyst should ever be without!

Ready to start streamlining your analysis and start reporting with more stability? Right now, we’re offering a free trial.

Are You Wasting Time Manually Sending Kibana Reports?

Automated processes are, invariably, becoming more and more integral to our everyday lives, both in and out of the office. They’ve replaced much of the manual workforce and have improved systematic procedures, which otherwise would be at the mercy of various human error elements as well as higher risks of data breaches. This, as well as recognizing manual reporting as time-consuming labour, are some key issues we don’t need to worry about any more by virtue of processing automation; Kibana being one of those favorable products.

Focus on What Matters

As a result of businesses adopting bots as part of our everyday processes, we’re left with the far more creative aspects of information science (which automation hasn’t quite caught up with yet). Naturally, Elasticsearch’s aesthetically enhanced data delivery is one of its chief selling points: users are able to explore uncharted data with clear-cut digital graphics at their very disposal. This significant upgrade in data technology has allowed us to possess more varied and complex insights; it’s more exciting now than it has ever been before.

In contrast, however, tedious tasks such as email deliveries of reports to customers, compliance managers and other stakeholders remain arduous and time-consuming, deterring attention from more stimulating in-depth data analysis. What we know to be necessary is for analysts to have the time available to devote themselves to exploring the analytics, instead of undergoing mundane processes such as manual spreadsheet creation, report generation, email exporting, and distribution.

Automate Kibana Reports

Perhaps it’s possible that you’ve already started utilizing Kibana without realizing the perks of automated scheduling. Luckily, Skedler can completely undertake those prosaic tasks, at an affordable price. As an automated scheme which meets full compliance and operations requirements, Skedler allows your peers, customers and other stakeholders to be kept informed in a virtually effortless and secure way. Comprehensive exporting preferences such as PDF, XLS and PNG are also serviceable; allowing you the luxury of consigning instant or scheduled report generation in the format you desire.

Additionally, Skedler’s reporting motions are facilitated through its prestigious dashboard system, which automatically discovers your existing Kibana dashboards and saved searches to make them available for reporting – again, saving you time creating, scheduling and sending Kibana reports. All your filtered reporting and data charting is available on a single, versatile platform, meaning you won’t spend extensive amounts of time searching through your outgoing email reports for a specific item.

Skedler simply allows you to examine all of your criteria through one umbrella server with clear functionalities to separate the stunning data visualization deliveries, and the slightly less exciting archive of manual spreadsheet generation and handling for other departments, which it can totally manage by itself.

Ready to start saving time by creating, scheduling and distributing Kibana reports automatically? Try Skedler for free.

3 Apps to Get the Most Out of Kibana 5.0

A new financial quarter starts, full-scale data appraisals are once again at the forefront for every business’ sales agenda. Luckily, Elasticsearch’s open source tool Kibana 5.0 is the talk of the town – and for good reason.

Improvements since version 4.0 are unequivocally noticeable. Its new and far sleeker user interface not only wows in terms of visuals (note the subsidiary menu that minimizes when not in use), but demonstrates impressive UI capabilities that allow you to reach data far more effectively. The new CSV upload, for example, has the potential to catch a much wider data spread, transforming it to index mapping that’s effortlessly navigable. Its new management tab allows you to view the history of the files with associated data, as well as the Elasticsearch indexes where you actively send log files.

This version’s huge boost in code architecture grants the potential for more augmentations than ever, especially with split code self-contained plugins with open-end code tweaking, resulting in several lucrative alpha and beta versions. And it’s essentially allowed us the privilege to now ask: what kind of data insight does my company really need, and which app is best to harness it?

1. Logz.io

Logz.io has fundamentally enriched Kibana with two major touches: increased data security, and more serviceable enterprise sequences as a result. Take their user access tokens, for example, which enable sharing visualizations and dashboards with those who aren’t necessarily Logz.io users, rather than relying on the URL share function. You can pretty much be as selective with your data as you please; specific and cross-referenced filter searches are an added function of the tokens. This makes it easy to attach pre-saved filters when back in Kibana.

2. Skedler

Skedler has specifically focused on developing reporting capabilities with actionables to perform on data, effectively meaning you can do more with it all in a proactive way. Scheduling is an integral part of this program’s faculty, as it works with your existing database searches and dashboards, allowing you to organize dispatches daily, weekly, monthly and so on. Again, you’re able to apply specific filters as and when you’re scheduling, making your reports as customized as needed when sending for peer review.

3. Predix

Predix has established itself as a strong contender for effective data trend sweeps, such as HTTP responses, latencies and visitors – and you’re able to debug apps at the same time. Combining this with Kibana’s exhaustive data visualizations and pragmatic dashboard makes controlling and managing your log data not only highly secure, but also more prognostic when forecasting future data.

Ready to save hours generating, scheduling and distributing PDF and XLS reports from your Elasticsearch Kibana (ELK) application to your team, customers and other stakeholders? Try Skedler for free.
