Kibana dashboard tutorial for beginners

The Elasticsearch search and analytics engine is one of the best open source solutions for indexing and structuring large databases. However, when the raw data is analyzed later, valuable insights can often only be gained if the data is visualized in a clear, easily understandable form. The Kibana visualization tool was developed specifically for displaying Elasticsearch data, and it is the subject of this tutorial.

What is Kibana?

Kibana is an extensible web interface for the visual representation of collected data. Together with Elasticsearch and the data processing tool Logstash, it forms the so-called ELK stack (also called the Elastic Stack). This open source suite enables users to collect data from different server sources (and in any format) and to organize and process it for analytical purposes. In addition to visualizing the data processed by Logstash and Elasticsearch, Kibana also offers automatic real-time analysis, a very flexible search algorithm, and different types of views (histograms, graphs, pie charts, etc.) for the individual data. In the dashboard, the individual interactive visualizations can then be combined into a dynamic overall picture that can be filtered and searched.


As a web-based application written in JavaScript, Kibana can be used across platforms. Costs only arise if you use the Elastic Cloud hosting service offered by the developer. This paid service lets you implement and manage a secure Elasticsearch/Kibana cluster on Amazon or Google infrastructure without having to provide your own resources.

Kibana tutorial: getting started with the visualization tool

Kibana offers a huge range of functions for displaying processed databases. However, before you can filter and visually display the information in the dashboard in such a way that the desired key values can be easily monitored, analyzed, and evaluated over the long term, there is a fair amount of work ahead of you. With this Kibana tutorial, we want to help you get started with the powerful web interface. This article explains how to install Kibana correctly, how to create your first dashboard, and how to integrate existing data into Elastic's visualization tool.

Step 1: how to get Kibana working

Because Kibana was designed to represent data that has been indexed with Elasticsearch, the first thing you need to do is install the search and analytics engine. The corresponding packages for Windows, macOS and Linux can be found in the Elasticsearch Download Center. The prerequisite is that a current Java runtime environment (64-bit) is installed.
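For the archive variant on Linux or macOS, the installation can be as simple as unpacking and starting the server. A minimal sketch; "<version>" is a placeholder for whichever release you downloaded:

    # Unpack the Elasticsearch archive and start the server (Linux/macOS)
    tar -xzf elasticsearch-<version>.tar.gz
    cd elasticsearch-<version>
    ./bin/elasticsearch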

Kibana itself is also available as cross-platform software for Windows, macOS and Linux (RPM, DEB). Since the application is based on the JavaScript runtime environment Node.js, the various installation packages also contain the Node.js binary files that must be used to run the visualization tool; separately maintained versions are not supported. As with Elasticsearch, you can find the various packages (ZIP-compressed) on the Elastic homepage.

Linux users can also install Kibana from the Elastic repository using the package managers apt and yum. You can find detailed instructions for this in the online manuals.

Once you've extracted the Kibana package, run bin/kibana (macOS, Linux) or bin\kibana.bat (Windows) to get the Kibana server up and running.
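On Linux or macOS, the sequence might look like this (the archive name depends on the version you downloaded):

    # Unpack the Kibana archive and start the server (Linux/macOS)
    unzip kibana-<version>.zip
    cd kibana-<version>
    ./bin/kibana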

You can then access the Kibana interface in your browser at "localhost:5601" (Elasticsearch must already be running for this).
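If you want to verify beforehand that Elasticsearch is reachable, a quick request to its default HTTP port 9200 is enough:

    # Returns a short JSON status object if Elasticsearch is running
    curl http://localhost:9200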

Step 2: Feed Kibana with data

In order for us to be able to examine the Kibana dashboard and its functions in more detail in this tutorial, the application must first be supplied with data. Elastic provides three sample datasets that can be downloaded free of charge from its website, and we are using them here for testing purposes. These are "shakespeare.json" (the complete works of William Shakespeare), "accounts.zip" (a set of fictitious bank accounts) and "logs.jsonl.gz" (a set of randomly generated log files).

Download the three files, unzip the compressed account and log archives, and then save the files in a location of your choice.

Before you can feed in the data, you need to create mappings for the fields of the Shakespeare and server log databases. These mappings divide the documents in the index into logical groups and also specify the properties of the fields, such as their searchability. The right tool for configuring the mappings is the console, which you can find in the Kibana interface under the menu items "Dev Tools" → "Console".

Now insert the following mappings one after the other via PUT request:
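The requests below are a sketch based on Elastic's getting-started examples for these sample datasets. The field names come from the downloaded files, and the mapping type names ("doc", "log") follow Elasticsearch 6.x syntax; adjust them if you use a newer version, since mapping types were removed in 7.x:

    PUT /shakespeare
    {
      "mappings": {
        "doc": {
          "properties": {
            "speaker": {"type": "keyword"},
            "play_name": {"type": "keyword"},
            "line_id": {"type": "integer"},
            "speech_number": {"type": "integer"}
          }
        }
      }
    }

    PUT /logstash-2015.05.18
    {
      "mappings": {
        "log": {
          "properties": {
            "geo": {
              "properties": {
                "coordinates": {"type": "geo_point"}
              }
            }
          }
        }
      }
    }

Repeat the second request for the indices "logstash-2015.05.19" and "logstash-2015.05.20". The bank accounts dataset does not need a predefined mapping.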

Now use the Elasticsearch bulk API to load the datasets, for example with curl in the terminal. On Windows, use PowerShell with the Invoke-RestMethod cmdlet instead (code example below):
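A sketch of the bulk requests, again following Elastic's getting-started examples. The file names assume the unpacked sample files lie in the current directory; the log file already contains index metadata for each document, which is why it is posted to the generic _bulk endpoint:

    curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
    curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/shakespeare/doc/_bulk?pretty' --data-binary @shakespeare.json
    curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl

The PowerShell equivalent with Invoke-RestMethod:

    Invoke-RestMethod "http://localhost:9200/bank/account/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "accounts.json"
    Invoke-RestMethod "http://localhost:9200/shakespeare/doc/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare.json"
    Invoke-RestMethod "http://localhost:9200/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl"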

Depending on the computing power, feeding in the data records can take a few minutes.

Switch back to the Kibana console to verify the success of the loading process with the following GET request:
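The _cat API gives a human-readable overview of all indices and their document counts:

    GET /_cat/indices?v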

If the data was integrated as planned, the output will look something like this:
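Illustrative output based on Elastic's sample data; the exact columns, counts and sizes depend on your Elasticsearch version:

    health status index               pri rep docs.count docs.deleted store.size pri.store.size
    yellow open   bank                  5   1       1000            0    418.2kb        418.2kb
    yellow open   shakespeare           5   1     111396            0     17.6mb         17.6mb
    yellow open   logstash-2015.05.18   5   1       4631            0     15.6mb         15.6mb
    yellow open   logstash-2015.05.19   5   1       4624            0     15.7mb         15.7mb
    yellow open   logstash-2015.05.20   5   1       4750            0     16.4mb         16.4mb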

Step 3: Define a first index pattern

In order for Kibana to know which data it should process, you must create the appropriate patterns for the indices "shakespeare", "bank" and "logstash". You define the first one as follows:

  1. Open the "Management" menu and click on "Index Patterns". When you create the first index pattern, the "Create index pattern" page opens automatically. Alternatively, you can call it up using the button of the same name.
  2. Enter "shakes*" in the "Index pattern" field and then click on "Next step".
  3. Since no special configuration is required for this pattern, skip the next setup step and finish creating the pattern directly by clicking on "Create index pattern".

Repeat the steps for the pattern "ba*", which is automatically assigned to the "bank" index.

Finally, define an index pattern with the name "logstash*" for the three server log indices. With this pattern, however, do not skip the configuration menu; instead, select the entry "@timestamp" in the "Time Filter field name" drop-down menu, since these records contain time series data. Then click on "Create index pattern".

Step 4: explore the loaded records

Now that you've fed your Kibana server with records, you can start an Elasticsearch search query to search these records and filter the results. To do this, switch to the "Discover" menu in Kibana and select the index pattern for your search via the small triangle icon in the left menu bar. For this Kibana dashboard tutorial, we decided to use the account dataset (ba*):

As a test, you can now filter the bank account dataset so that only accounts that meet certain criteria are displayed. For example, to search specifically for accounts with a balance over 47,500 that belong to persons over 38 years old, enter the following query in the search box:
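In the Lucene query syntax that the Discover search bar accepts, the two conditions can be combined like this (the field names "balance" and "age" are taken from the accounts dataset):

    balance:>47500 AND age:>38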

Discover then returns the entries of the four accounts 97, 177, 878 and 916 that match the selected properties.

Via the "Save" button in the top menu bar, you can save filtered searches under a name of your choice.

Step 5: visualize the data

With the preparations made so far in this Kibana tutorial, you are now able to visualize the imported data and breathe life into your dashboard. As an example, we will generate a pie chart for the bank accounts database. This diagram should show, on the one hand, what proportion of the total of 1,000 accounts falls into certain account balance ranges and, on the other hand, how ages are distributed within these ranges.

In the first step, open the "Visualize" menu and click on "Create a visualization" to get a list of the available visualization types. Then select the "Pie" option.

At first you will only see a simple circle that summarizes all entries in the database, since no categories have been defined yet. In Kibana, these categories are called "buckets" and can be created under the menu item of the same name.

To first define the individual account balance categories, click on "Split Slices" and select the "Range" option in the "Aggregation" drop-down menu:

Under "Field", look for the entry "balance" and select it, then click four times on the "Add Range" button so that you can define the following six account balance ranges:

0 – 999
1000 – 2999
3000 – 6999
7000 – 14999
15000 – 30999
31000 – 50000

Then click on "Apply changes" (triangle symbol), whereupon the pie chart shows the distribution of the accounts across the defined account balance ranges.

In the second step, add another ring to the diagram to visualize the distribution of age groups within the individual account balance ranges. To do this, click on "Add sub-buckets", then on "Split Slices", and then select "Terms". Under "Field", search for the entry "age" and apply the changes via "Apply changes".

You can now save the visualization very easily using the "Save" button in the top menu bar.

Step 6: organize the dashboard

This tutorial will also take a brief look at the Kibana dashboard, which is why you will now use the search and visualization saved in steps 4 and 5 to create a first test dashboard. To do this, select "Dashboard" in the side navigation, click on "Create new dashboard" and then on "Add". Kibana now automatically lists all saved visualizations and Elasticsearch searches:

With a left click, you add the account balance visualization and the example search result to the dashboard, after which you can view both in separate panels:

You can now modify the panels, for example by adjusting their size or changing their position. It is also possible to display a visualization or a search result on the entire screen or to delete it from the dashboard again. For many visualizations, you can also use "Inspect" to display additional information about the underlying data and queries.

If you remove a panel from the Kibana dashboard, the saved visualization or search is retained.
