Open source: attacker intent search with the empow Logstash classification plugin & Elastic module

* Please note – the following guide outlines how to install and configure the empow classification Logstash plugin and ELK module. In order to do so, you will need to register with the classification center located on this page, and follow the instructions below.



Register with the empow classification center

  • Choose a password of at least 8 characters, containing lowercase and uppercase letters and special characters

Installation Guide

The Elastic stack allows you to ingest log data from many sources, parse and manipulate it, store it, analyze it, and visualize it. The stack consists of three components: Logstash, for data ingestion and manipulation; Elasticsearch, for storage and analysis of data; and Kibana, to visualize your data.

The empow classification plugin extends the functionality of Logstash by classifying your log data for attack intent and attack stage, using the empow classification center.

The empow module provides preconfigured settings for the entire Elastic stack and plugin, which you can use ‘out-of-the-box’ to ingest, store, and visualize log data from your network devices.


Supported platforms

The plugin and module will run on any platform on which the Elastic stack is supported.

We will use Ubuntu 18.04 as the reference platform for this guide.


What you will need

To get started, you will need to install these components:
Java 8
Logstash
the empow plugin

After this, you can add the other elements of the stack, and the empow module:
Elasticsearch
Kibana
empow module


Get started with logstash and the empow plugin

The procedures below use dpkg on Ubuntu to install the Elastic stack. Other methods to acquire and install the packages can also be used (refer to the Elastic documentation).

Java

Check if Java is installed

Java must be installed before installing logstash. Run this command to check whether Java is installed, and which version. If an error is returned, or the version is not Java 8, follow the steps below to install Java.

java -version
Install Java 8
sudo apt-get install openjdk-8-jdk
Confirm the installation
java -version

This should return something like this:
openjdk version "1.8.0_162"
OpenJDK Runtime Environment (build 1.8.0_162-8u162-b12-1-b12)
OpenJDK 64-Bit Server VM (build 25.162-b12, mixed mode)

Logstash

We will install the Debian package, version 6.6.0. If later versions are available, you can use them as well. Check this page for the link to this package (and other download options).

Install logstash

wget https://artifacts.elastic.co/downloads/logstash/logstash-6.6.0.deb
sudo dpkg -i logstash-6.6.0.deb

Check the service is running with this:
sudo service logstash status

empow classification plugin

The empow classification plugin enriches logs by adding information showing attack intent and attack stage, using empow classification technology.

Install the plugin
sudo /usr/share/logstash/bin/logstash-plugin install logstash-filter-empowclassifier
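
You can verify that the plugin was installed by listing the installed plugins (logstash-plugin list is a standard Logstash command):
sudo /usr/share/logstash/bin/logstash-plugin list | grep empowclassifier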

Configure logstash for the plugin

Add this line to logstash.yml to enable automatic reloading of configuration files (note: this setting conflicts with running the empow module, and you will comment it out later when installing the module):

config.reload.automatic: true

Restart Logstash:
sudo service logstash restart

Create a logstash pipeline for the plugin (example)

This example illustrates how to create a full logstash pipeline that uses the empow plugin. A pipeline is a logstash configuration that receives logs from a source, filters them using the empow plugin (and other plugins), and then sends them to an output destination.

A logstash pipeline is a config file that consists of three main sections:

input – this defines the source for logs, and the way they are read by logstash

filter – a set of processing and manipulation actions applied to the logs, used to change their structure, or to extract, add, remove, and process fields in the logs

output – defines the destination of the processed logs

Consider, for example, a snort IDS (Intrusion Detection System). As part of the configuration of the IDS, one should direct its logs to logstash. Let’s assume that these logs are sent over UDP to port 2055 (the destination address, the protocol, and the port number are part of the snort configuration). In this case, configure the input section of the logstash pipeline like this:

    input{
      udp{
        port => 2055
      }
    }

This configures logstash to receive logs on UDP port 2055.

When logs enter the pipeline they are processed using filters. A default snort log typically has the following structure:

<NUMBER>MONTH DAY HOUR:MINUTE:SECOND snort[NUMBER]: [SIGNATURE_ID] DESCRIPTION [Priority: 3] {PROTOCOL} SRC_IP:SRC_PORT_NUMBER -> DST_IP:DST_PORT_NUMBER

Some of the fields are optional. Here is an example:

<33>Jan 21 15:10:37 snort[11934]: [1:1234:1] (http_inspect) NO CONTENT-LENGTH OR TRANSFER-ENCODING IN HTTP RESPONSE [Classification: Unknown Traffic] [Priority: 3] {TCP} 192.168.21.120:80 -> 192.168.22.135:62508

Logstash has many methods and options for processing logs; in this case we will use the logstash grok plugin, which parses unstructured event data into named fields using regular-expression-like patterns.

This filter configuration extracts the following fields from the snort log using grok:

    filter{
      grok{
    match => {"message" => "%{NUMBER}\>%{SYSLOGTIMESTAMP:date} snort\[%{NUMBER}\]: \[(?<sig_id>%{NUMBER}:%{NUMBER}):%{NUMBER}\] .* %{IP:src_addr}(:%{NUMBER})? -> %{IP:dst_addr}(:%{NUMBER})?"}
      }
    }

where:

date – the log date

sig_id – the threat signature id (in the format NUMBER:NUMBER; the third number, the revision, is omitted)

src_addr – the source IP address

dst_addr – the destination IP address

In order to use the empow classification plugin, we add the following fields:

product_type (set to IDS in this example)

product_name (set to snort in this example)

threat – a JSON structure with the threat information (the signature id for an IDS, or the hash and malware name for an antivirus or anti-malware product; see the sketch below)
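
For an antivirus product, the threat structure would carry the hash and malware name instead of a signature id. A hypothetical sketch (the subfield names hash and malware_name, and the source fields file_hash and malware, are illustrative assumptions, not documented plugin field names):

    mutate{
      # hypothetical antivirus mapping; subfield names are assumptions
      add_field => {"[threat][hash]" => "%{file_hash}" "[threat][malware_name]" => "%{malware}"}
    }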

We will also use the logstash mutate and grok plugins, as follows:

    filter{
      grok{
    match => {"message" => "%{NUMBER}\>%{SYSLOGTIMESTAMP:date} snort\[%{NUMBER}\]: \[(?<sig_id>%{NUMBER}:%{NUMBER}):%{NUMBER}\] .* %{IP:src_addr}(:%{NUMBER})? -> %{IP:dst_addr}(:%{NUMBER})?"}
      }
      mutate{
        add_field => {"product_type" => "IDS" "product_name" => "snort"}
        add_field => {"[threat][signature]" => "%{sig_id}"}
      }
    }

Now, we will add the empowclassifier plugin to the pipeline, and set the classification center username and password (see the registration section above). Note that the plugin has more configuration options beyond those described here.

    filter{
      grok{
    match => {"message" => "%{NUMBER}\>%{SYSLOGTIMESTAMP:date} snort\[%{NUMBER}\]: \[(?<sig_id>%{NUMBER}:%{NUMBER}):%{NUMBER}\] .* %{IP:src_addr}(:%{NUMBER})? -> %{IP:dst_addr}(:%{NUMBER})?"}
      }

      mutate{
        add_field => {"product_type" => "IDS" "product_name" => "snort"}
        add_field => {"[threat][signature]" => "%{sig_id}"}
      }
      empowclassifier {
        username => "*******" # replace with your username
        password => "*******" # replace with your password
      }
    }

Using this pipeline, the empowclassifier plugin will generate new fields containing the classification results from the snort logs (or error fields if there are problems).

A valid response will contain a JSON block, empow_classification_response, that includes an intents field. This field includes, among other things, the tactic and the attack stage of the attack.
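
For example, based on the test output shown later in this guide, a pretty-printed response block looks like this:

    "empow_classification_response": {
      "intents": [
        {
          "tactic": "Full compromise – active patterns",
          "isSrcPerformer": true,
          "attackStage": "Infiltration"
        }
      ]
    }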

To complete the pipeline configuration, we will set up the output section. This determines where the filtered logs are sent. Logs can be sent to many destinations, including databases, Elasticsearch nodes, etc. In our example, we will send the logs by UDP to localhost port 1237. The output section is then:

    output{
      udp{
        host => "127.0.0.1"
        port => 1237
      }
    }

The entire pipeline configuration file now looks like this:

    input{
      udp{
        port => 2055
      }
    }

    filter{
      grok{
    match => {"message" => "%{NUMBER}\>%{SYSLOGTIMESTAMP:date} snort\[%{NUMBER}\]: \[(?<sig_id>%{NUMBER}:%{NUMBER}):%{NUMBER}\] .* %{IP:src_addr}(:%{NUMBER})? -> %{IP:dst_addr}(:%{NUMBER})?"}
      }

      mutate{
        add_field => {"product_type" => "IDS" "product_name" => "snort"}
        add_field => {"[threat][signature]" => "%{sig_id}"}
      }
      empowclassifier {
        username => "*******" # replace with your username
        password => "*******" # replace with your password
      }
    }

    output{
      udp{
        host => "127.0.0.1"
        port => 1237
      }
    }

This file has the three components of a logstash config file: the input and output sections, and a section defining the filtering actions. Here, the filter refers to the empowclassifier plugin, which in turn accesses the empow cloud-based classification system. The input listens on UDP port 2055, and the output is sent to port 1237 on localhost.

Copy the pipeline file (renamed to snort_empow_example.conf, say) to the logstash config folder (/etc/logstash/conf.d/), and wait a few seconds for logstash to load it.
sudo cp snort_empow_example.conf /etc/logstash/conf.d/
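
Optionally, you can check the file for syntax errors before copying it, using Logstash's standard config-test flag:
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f snort_empow_example.conf
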
To test how logstash processes log strings using this example config file, open two terminal sessions, one to listen for logstash output, and one to send a log string to logstash:
In the first, enter:
nc -luk 1237
In the second, enter:
nc -u 127.0.0.1 2055

Then (still in the second console) enter:

<33>Jan 21 15:10:37 snort[11934]: [1:1234:1] (http_inspect) NO CONTENT-LENGTH OR TRANSFER-ENCODING IN HTTP RESPONSE [Classification: Unknown Traffic] [Priority: 3] {TCP} 192.168.21.120:80 -> 192.168.22.135:62508

In the first (the listener), the following should appear (after a few seconds):

{"sig_id":"1:1234","tags":["dummy"],"empow_warnings":["src_internal_wrong_value","dst_internal_wrong_value"],"empow_classification_response":{"intents":[{"tactic":"Full compromise – active patterns","isSrcPerformer":true,"attackStage":"Infiltration"}]},"threat":{"signature":"1:1234"}}

This is the logstash output data block for the input string, filtered according to the config file, with the empow classification fields added by the plugin.



Use the empow module

The empow Elastic classification module is a preconfigured Elastic stack module that includes a simple configuration of Logstash, Elasticsearch, and Kibana. It can read security logs from many products and services, process them, enrich them using the empowclassifier plugin, store them in Elasticsearch, and analyze them using a set of preconfigured Kibana visualizations and dashboards.

In order to use the empow module, download and install the module (in the future it will be available from Elastic as an open source module), install the additional logstash plugins used by the preconfigured empow module pipeline, and install both Kibana and Elasticsearch.

In this guide, we assume that the entire Elastic stack (Logstash, Elasticsearch, and Kibana) is installed on the same node. If you install them on different nodes, or deploy a cluster of Elasticsearch nodes, further configuration is required that is not covered here.

Elasticsearch

Install elasticsearch

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.0.deb
sudo dpkg -i elasticsearch-6.6.0.deb

Start elasticsearch
sudo service elasticsearch start
Check elasticsearch is running
sudo service elasticsearch status
Run the following curl command
curl -X GET "localhost:9200/"
You should see something like this returned:

{
  "name" : "I1Vqv5S",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "I77C5K68RsSUFHknjXdxug",
  "version" : {
    "number" : "6.6.0",
    ...
  },
  "tagline" : "You Know, for Search"
}


Kibana

Install Kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.6.0-amd64.deb
sudo dpkg -i kibana-6.6.0-amd64.deb

Configure Kibana

Add the following line to the Kibana configuration file (/etc/kibana/kibana.yml) to enable remote access to Kibana (skip this step if you are running your browser on the same host as Kibana):

server.host: "0.0.0.0"

Note: if an entry for server.host already exists, change it to "0.0.0.0"

Start Kibana as a service
sudo service kibana start
Test that Kibana is running
sudo service kibana status
Open the following URL in a browser
http://localhost:5601
You should see the Kibana home page.
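
If the machine has no browser, you can query Kibana's status API instead (a standard Kibana endpoint):
curl http://localhost:5601/api/status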


empow classification module

Download the empow module

Download and install the empow classification module from GitHub:

wget https://github.com/empow/empow-elk-module/archive/master.zip
unzip master.zip
sudo mv empow-elk-module-master /usr/share/logstash/modules/empow
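
You can confirm that the module files are in place:
ls /usr/share/logstash/modules/empow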

Configure Logstash

In the file logstash.yml, comment out (by adding # at the beginning of the line) or remove the line that contains:

config.reload.automatic

Add the following lines:

modules:
  - name: empow
    var.elasticsearch.ssl.enabled: false
    var.kibana.ssl.enabled: false
    var.classification.username: "******"
    var.classification.pass: "******"
    var.input.udp.port: 2055


Install the additional plugins used by the module:

sudo /usr/share/logstash/bin/logstash-plugin install logstash-filter-translate logstash-filter-prune

Restart the logstash service:

sudo service logstash restart
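
To confirm that the module loaded cleanly, you can watch the Logstash log (the default log location for the Debian package):
sudo tail -f /var/log/logstash/logstash-plain.log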

Configure the empow Elastic stack module

As with any Elastic stack module, the empow module can be configured using variables, which can be added to the YML configuration file or passed on the command line.
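
For example, instead of editing logstash.yml, the module can be started from the command line using Logstash's standard --modules and -M flags (a sketch; substitute your own credentials):

    sudo /usr/share/logstash/bin/logstash --modules empow \
      -M "empow.var.input.udp.port=2055" \
      -M "empow.var.classification.username=******" \
      -M "empow.var.classification.pass=******"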

Common Elastic module configuration variables (partial list)

Name                            Type     Values      Default
var.elasticsearch.ssl.enabled   boolean  true/false  true
var.kibana.ssl.enabled          boolean  true/false  true
var.elasticsearch.username      string               ""
var.elasticsearch.password      string               ""
var.elasticsearch.index_suffix  string               "%{+YYYY.MM.dd}"
var.elasticsearch.hosts         list                 ["localhost:9200"]
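
For instance, to point the module at a remote Elasticsearch node, the modules block shown earlier in logstash.yml could be extended like this (a sketch; the host address is illustrative):

    modules:
      - name: empow
        var.elasticsearch.hosts: ["10.0.0.5:9200"]
        var.elasticsearch.ssl.enabled: false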

empow module configuration variables

Name                         Type     Values   Default  Description
var.input.udp.port           integer  1-65535  514      listening port for incoming logs
var.internal.networks        list              []       list of internal networks; example: '["10.0.1.0/24", "11.68.0.0/16"]'
var.classification.username  string                     classification center username
var.classification.pass      string                     classification center password

Supported Products

The following is a list of products currently supported by the empow open-source module. (We are constantly updating this list, so stay tuned for further updates.)

Product Name              Vendor Name  Service Type
FortiGate                 Fortinet     IDS
Threat Prevention         Palo Alto    IDS
Firepower Threat Defense  Cisco        IDS
Snort                     Snort        IDS

Getting Help

For questions about the plugin and the module, open a topic in the Discuss forums or send an email to intent-classification@empow.co. For bugs or feature requests, open an issue on GitHub.