Fundamentals of Elasticsearch

Elasticsearch is a search engine with a JSON REST API that uses Lucene and is written in Java. A description of all the advantages of this engine is available on the official website. In the rest of the text, we will refer to Elasticsearch as ES.

Engines like this are used for complex searches over a database of documents, for example, searches that take the morphology of a language into account, or searches by geo coordinates.

In this article, I will cover the basics of ES using the indexing of blog posts as an example. I will show how to filter, sort, and search documents.

To avoid depending on the operating system, I will make all requests to ES using cURL. There is also a Google Chrome plugin called Sense.

The text contains links to documentation and other sources. At the end there are links for quick access to the documentation. Definitions of unfamiliar terms can be found in glossaries.

Installing ES

First of all, we need Java. The developers recommend installing Java versions newer than Java 8 update 20 or Java 7 update 55.

The ES distribution is available on the developer's site. After unpacking the archive, you need to run bin/elasticsearch. Packages for apt and yum are also available, as is an official Docker image. More details can be found in the installation documentation.

After installing and launching ES, let's check that it works:

# for convenience, save the address in a variable
#export ES_URL=$(docker-machine ip dev):9200
export ES_URL=localhost:9200

curl -X GET $ES_URL

We will get a response like this:

{
  "name" : "Heimdall",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.2.1",
    "build_hash" : "d045fc29d1932bce18b2e65ab8b297fbf6cd41a1",
    "build_timestamp" : "2016-03-09T09:38:54Z",
    "build_snapshot" : false,
    "lucene_version" : "5.4.1"
  },
  "tagline" : "You Know, for Search"
}

Π˜Π½Π΄Π΅ΠΊΡΠ°Ρ†ΠΈΡ

Let's add a post to ES:

# Add a document with id 1 and type post to the blog index.
# ?pretty makes the output human-readable.

curl -XPUT "$ES_URL/blog/post/1?pretty" -d'
{
  "title": "ВСсСлыС котята",
  "content": "<p>БмСшная история ΠΏΡ€ΠΎ котят<p>",
  "tags": [
    "котята",
    "смСшная история"
  ],
  "published_at": "2014-09-12T20:44:42+00:00"
}'

Server response:

{
  "_index" : "blog",
  "_type" : "post",
  "_id" : "1",
  "_version" : 1,
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "created" : false
}

ES automatically created the blog index and the post type. We can draw a rough analogy: an index is a database, and a type is a table in that database. Each type has its own schema (mapping), just like a relational table. The mapping is generated automatically when the document is indexed:

# Get the mapping of all types in the blog index
curl -XGET "$ES_URL/blog/_mapping?pretty"

In the server response, I have added the values of the indexed document's fields in the comments:

{
  "blog" : {
    "mappings" : {
      "post" : {
        "properties" : {
          /* "content": "<p>БмСшная история ΠΏΡ€ΠΎ котят<p>", */ 
          "content" : {
            "type" : "string"
          },
          /* "published_at": "2014-09-12T20:44:42+00:00" */
          "published_at" : {
            "type" : "date",
            "format" : "strict_date_optional_time||epoch_millis"
          },
          /* "tags": ["котята", "смСшная история"] */
          "tags" : {
            "type" : "string"
          },
          /*  "title": "ВСсСлыС котята" */
          "title" : {
            "type" : "string"
          }
        }
      }
    }
  }
}

It is worth noting that ES makes no distinction between a single value and an array of values. For example, the title field contains just a title, while the tags field contains an array of strings, yet they are represented in the mapping in the same way.
We will talk more about mappings later.
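
As a quick illustration, here is a sketch that is not from the original article (document id 99 is hypothetical and removed right away): indexing a single string into tags works just like indexing an array, and the resulting mapping is the same.

# a single string in the tags field is indexed just like an array of strings
curl -XPUT "$ES_URL/blog/post/99?pretty" -d'
{
  "title": "ΠŸΡ€ΠΈΠΌΠ΅Ρ€",
  "tags": "котята"
}'

# remove the throwaway document so later result counts stay the same
curl -XDELETE "$ES_URL/blog/post/99?pretty"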

Queries

Retrieve a document by its id:

# retrieve the document with id 1 and type post from the blog index
curl -XGET "$ES_URL/blog/post/1?pretty"
{
  "_index" : "blog",
  "_type" : "post",
  "_id" : "1",
  "_version" : 1,
  "found" : true,
  "_source" : {
    "title" : "ВСсСлыС котята",
    "content" : "<p>БмСшная история ΠΏΡ€ΠΎ котят<p>",
    "tags" : [ "котята", "смСшная история" ],
    "published_at" : "2014-09-12T20:44:42+00:00"
  }
}

New keys appeared in the response: _version and _source. In general, all keys starting with _ are service fields.

The _version key shows the document version. It is needed for the optimistic locking mechanism to work. For example, we want to change a document that has version 1. We submit the modified document and indicate that this is an edit of the document with version 1. If someone else also edited the document with version 1 and submitted their changes before us, ES will not accept our changes, because it already stores the document with version 2.
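
As a sketch of how this looks in practice (assuming our document is still at version 1), we can pass the expected version with the request:

# re-submit the document, requiring that the stored version is still 1;
# if someone has already bumped it to version 2, ES rejects the write
# with a version conflict error (HTTP 409)
curl -XPUT "$ES_URL/blog/post/1?version=1&pretty" -d'
{
  "title": "ВСсСлыС котята",
  "content": "<p>БмСшная история ΠΏΡ€ΠΎ котят<p>",
  "tags": ["котята", "смСшная история"],
  "published_at": "2014-09-12T20:44:42+00:00"
}'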

The _source key contains the document that we indexed. ES does not use this value for search operations, because indexes are used for searching. To save space, ES stores the original document in compressed form. If we need only the id, and not the entire source document, we can disable _source storage.
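
A minimal sketch of how that is done (the index name blog_no_source is hypothetical; the _source mapping setting itself is standard):

# create an index whose post type does not store the original documents
curl -XPUT "$ES_URL/blog_no_source" -d'
{
  "mappings": {
    "post": {
      "_source": { "enabled": false }
    }
  }
}'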

If we do not need the additional information, we can get only the contents of _source:

curl -XGET "$ES_URL/blog/post/1/_source?pretty"
{
  "title" : "ВСсСлыС котята",
  "content" : "<p>БмСшная история ΠΏΡ€ΠΎ котят<p>",
  "tags" : [ "котята", "смСшная история" ],
  "published_at" : "2014-09-12T20:44:42+00:00"
}

You can also select only specific fields:

# retrieve only the title field
curl -XGET "$ES_URL/blog/post/1?_source=title&pretty"
{
  "_index" : "blog",
  "_type" : "post",
  "_id" : "1",
  "_version" : 1,
  "found" : true,
  "_source" : {
    "title" : "ВСсСлыС котята"
  }
}

Let's index a few more posts and run more complex queries.

curl -XPUT "$ES_URL/blog/post/2" -d'
{
  "title": "ВСсСлыС Ρ‰Π΅Π½ΠΊΠΈ",
  "content": "<p>БмСшная история ΠΏΡ€ΠΎ Ρ‰Π΅Π½ΠΊΠΎΠ²<p>",
  "tags": [
    "Ρ‰Π΅Π½ΠΊΠΈ",
    "смСшная история"
  ],
  "published_at": "2014-08-12T20:44:42+00:00"
}'
curl -XPUT "$ES_URL/blog/post/3" -d'
{
  "title": "Как Ρƒ мСня появился ΠΊΠΎΡ‚Π΅Π½ΠΎΠΊ",
  "content": "<p>Π”ΡƒΡˆΠ΅Ρ€Π°Π·Π΄ΠΈΡ€Π°ΡŽΡ‰Π°Ρ история ΠΏΡ€ΠΎ Π±Π΅Π΄Π½ΠΎΠ³ΠΎ ΠΊΠΎΡ‚Π΅Π½ΠΊΠ° с ΡƒΠ»ΠΈΡ†Ρ‹<p>",
  "tags": [
    "котята"
  ],
  "published_at": "2014-07-21T20:44:42+00:00"
}'

Sorting

# find the latest post by publication date and retrieve its title and published_at fields
curl -XGET "$ES_URL/blog/post/_search?pretty" -d'
{
  "size": 1,
  "_source": ["title", "published_at"],
  "sort": [{"published_at": "desc"}]
}'
{
  "took" : 8,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 3,
    "max_score" : null,
    "hits" : [ {
      "_index" : "blog",
      "_type" : "post",
      "_id" : "1",
      "_score" : null,
      "_source" : {
        "title" : "ВСсСлыС котята",
        "published_at" : "2014-09-12T20:44:42+00:00"
      },
      "sort" : [ 1410554682000 ]
    } ]
  }
}

We selected the latest post. size limits the number of documents returned. total shows the total number of documents matching the query. sort in the output contains an array of integers by which the sorting is performed; that is, the date has been converted to an integer. You can read more about sorting in the documentation.
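
For example, flipping the sort order to ascending should return the oldest post instead (a minor variation on the query above):

# the same query with ascending sort order
curl -XGET "$ES_URL/blog/post/_search?pretty" -d'
{
  "size": 1,
  "_source": ["title", "published_at"],
  "sort": [{ "published_at": { "order": "asc" } }]
}'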

Filters and Queries

Since version 2, ES no longer distinguishes between filters and queries; instead, the concept of contexts was introduced.
The query context differs from the filter context in that a query generates a _score and is not cached. I will show what _score is later.
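
A sketch of both contexts in one request (using the standard bool query of ES 2.x with values from our example data): the must clause runs in the query context, the filter clause in the filter context.

# "must" contributes to _score; "filter" can be cached and does not affect _score
curl -XGET "$ES_URL/blog/post/_search?pretty" -d'
{
  "query": {
    "bool": {
      "must":   { "match": { "content": "история" } },
      "filter": { "term": { "tags": "котята" } }
    }
  }
}'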

Date filtering

Here is the range query used in the filter context:

# get posts published on September 1st or later
curl -XGET "$ES_URL/blog/post/_search?pretty" -d'
{
  "filter": {
    "range": {
      "published_at": { "gte": "2014-09-01" }
    }
  }
}'
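
The range query also supports gt, lt, and lte. For example, a sketch in the same request shape that narrows the selection to August only (it should return just the post about puppies):

# posts published in August 2014
curl -XGET "$ES_URL/blog/post/_search?pretty" -d'
{
  "filter": {
    "range": {
      "published_at": {
        "gte": "2014-08-01",
        "lt":  "2014-09-01"
      }
    }
  }
}'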

Filtering by tags

We use a term query to search for the ids of documents containing the given exact value:

# find all documents whose tags field contains the element 'котята'
curl -XGET "$ES_URL/blog/post/_search?pretty" -d'
{
  "_source": [
    "title",
    "tags"
  ],
  "filter": {
    "term": {
      "tags": "котята"
    }
  }
}'
{
  "took" : 9,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "blog",
      "_type" : "post",
      "_id" : "1",
      "_score" : 1.0,
      "_source" : {
        "title" : "ВСсСлыС котята",
        "tags" : [ "котята", "смСшная история" ]
      }
    }, {
      "_index" : "blog",
      "_type" : "post",
      "_id" : "3",
      "_score" : 1.0,
      "_source" : {
        "title" : "Как Ρƒ мСня появился ΠΊΠΎΡ‚Π΅Π½ΠΎΠΊ",
        "tags" : [ "котята" ]
      }
    } ]
  }
}
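
If we need documents matching any of several exact values, the terms query (plural) accepts an array. A sketch in the same request shape; with our data it should match all three posts:

# find documents whose tags field contains 'котята' or 'Ρ‰Π΅Π½ΠΊΠΈ'
curl -XGET "$ES_URL/blog/post/_search?pretty" -d'
{
  "_source": ["title", "tags"],
  "filter": {
    "terms": {
      "tags": ["котята", "Ρ‰Π΅Π½ΠΊΠΈ"]
    }
  }
}'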

Full-text search

Our three documents contain the following in the content field:

  • <p>БмСшная история ΠΏΡ€ΠΎ котят<p>
  • <p>БмСшная история ΠΏΡ€ΠΎ Ρ‰Π΅Π½ΠΊΠΎΠ²<p>
  • <p>Π”ΡƒΡˆΠ΅Ρ€Π°Π·Π΄ΠΈΡ€Π°ΡŽΡ‰Π°Ρ история ΠΏΡ€ΠΎ Π±Π΅Π΄Π½ΠΎΠ³ΠΎ ΠΊΠΎΡ‚Π΅Π½ΠΊΠ° с ΡƒΠ»ΠΈΡ†Ρ‹<p>

We use a match query to search for the ids of documents containing the given word:

# "_source": false means there is no need to retrieve the _source of the found documents
curl -XGET "$ES_URL/blog/post/_search?pretty" -d'
{
  "_source": false,
  "query": {
    "match": {
      "content": "история"
    }
  }
}'
{
  "took" : 13,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 3,
    "max_score" : 0.11506981,
    "hits" : [ {
      "_index" : "blog",
      "_type" : "post",
      "_id" : "2",
      "_score" : 0.11506981
    }, {
      "_index" : "blog",
      "_type" : "post",
      "_id" : "1",
      "_score" : 0.11506981
    }, {
      "_index" : "blog",
      "_type" : "post",
      "_id" : "3",
      "_score" : 0.095891505
    } ]
  }
}

However, if we search for "истории" (a different form of the word) in the content field, we will find nothing, because the index contains only the original word forms, not their stems. To build a high-quality search, you need to configure an analyzer.
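
You can check this yourself; a sketch of the failing query:

# searching for a different form of the word returns no hits yet
curl -XGET "$ES_URL/blog/post/_search?pretty" -d'
{
  "_source": false,
  "query": {
    "match": { "content": "истории" }
  }
}'
# expect "hits" : { "total" : 0, ... }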

The _score field shows relevance. If a query is executed in the filter context, the _score value will always be 1, which means a full match with the filter.

Analyzers

Analyzers are needed to convert the source text into a set of tokens.
An analyzer consists of one Tokenizer and zero or more optional TokenFilters, and the Tokenizer may be preceded by several CharFilters. Tokenizers break the source string into tokens, splitting on spaces and punctuation, for example. A TokenFilter can change tokens, remove them, or add new ones: for example, keep only the stem of a word, remove prepositions, or add synonyms. A CharFilter changes the entire source string, for example, cuts out html tags.

ES ships with several standard analyzers, for example, the russian analyzer.

Let's use the _analyze API and see how the standard and russian analyzers transform the string "ВСсСлыС истории ΠΏΡ€ΠΎ котят" ("Funny stories about kittens"):

# ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅ΠΌ Π°Π½Π°Π»ΠΈΠ·Π°Ρ‚ΠΎΡ€ standard       
# ΠΎΠ±ΡΠ·Π°Ρ‚Π΅Π»ΡŒΠ½ΠΎ Π½ΡƒΠΆΠ½ΠΎ ΠΏΠ΅Ρ€Π΅ΠΊΠΎΠ΄ΠΈΡ€ΠΎΠ²Π°Ρ‚ΡŒ Π½Π΅ ASCII символы
curl -XGET "$ES_URL/_analyze?pretty&analyzer=standard&text=%D0%92%D0%B5%D1%81%D0%B5%D0%BB%D1%8B%D0%B5%20%D0%B8%D1%81%D1%82%D0%BE%D1%80%D0%B8%D0%B8%20%D0%BF%D1%80%D0%BE%20%D0%BA%D0%BE%D1%82%D1%8F%D1%82"
{
  "tokens" : [ {
    "token" : "вСсСлыС",
    "start_offset" : 0,
    "end_offset" : 7,
    "type" : "<ALPHANUM>",
    "position" : 0
  }, {
    "token" : "истории",
    "start_offset" : 8,
    "end_offset" : 15,
    "type" : "<ALPHANUM>",
    "position" : 1
  }, {
    "token" : "ΠΏΡ€ΠΎ",
    "start_offset" : 16,
    "end_offset" : 19,
    "type" : "<ALPHANUM>",
    "position" : 2
  }, {
    "token" : "котят",
    "start_offset" : 20,
    "end_offset" : 25,
    "type" : "<ALPHANUM>",
    "position" : 3
  } ]
}
# ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅ΠΌ Π°Π½Π°Π»ΠΈΠ·Π°Ρ‚ΠΎΡ€ russian
curl -XGET "$ES_URL/_analyze?pretty&analyzer=russian&text=%D0%92%D0%B5%D1%81%D0%B5%D0%BB%D1%8B%D0%B5%20%D0%B8%D1%81%D1%82%D0%BE%D1%80%D0%B8%D0%B8%20%D0%BF%D1%80%D0%BE%20%D0%BA%D0%BE%D1%82%D1%8F%D1%82"
{
  "tokens" : [ {
    "token" : "вСсСл",
    "start_offset" : 0,
    "end_offset" : 7,
    "type" : "<ALPHANUM>",
    "position" : 0
  }, {
    "token" : "истор",
    "start_offset" : 8,
    "end_offset" : 15,
    "type" : "<ALPHANUM>",
    "position" : 1
  }, {
    "token" : "ΠΊΠΎΡ‚",
    "start_offset" : 20,
    "end_offset" : 25,
    "type" : "<ALPHANUM>",
    "position" : 3
  } ]
}

The standard analyzer split the string on spaces and lowercased everything; the russian analyzer additionally removed insignificant (stop) words and reduced the remaining words to their stems.

Let's see which Tokenizer, TokenFilters, and CharFilters the russian analyzer uses:

{
  "filter": {
    "russian_stop": {
      "type":       "stop",
      "stopwords":  "_russian_"
    },
    "russian_keywords": {
      "type":       "keyword_marker",
      "keywords":   []
    },
    "russian_stemmer": {
      "type":       "stemmer",
      "language":   "russian"
    }
  },
  "analyzer": {
    "russian": {
      "tokenizer":  "standard",
      /* TokenFilters */
      "filter": [
        "lowercase",
        "russian_stop",
        "russian_keywords",
        "russian_stemmer"
      ]
      /* no CharFilters */
    }
  }
}

Now let's define our own analyzer based on russian that will also strip html tags. Let's call it default, because an analyzer with this name will be used by default.

{
  "filter": {
    "ru_stop": {
      "type":       "stop",
      "stopwords":  "_russian_"
    },
    "ru_stemmer": {
      "type":       "stemmer",
      "language":   "russian"
    }
  },
  "analyzer": {
    "default": {
      /* add stripping of html tags */
      "char_filter": ["html_strip"],
      "tokenizer":  "standard",
      "filter": [
        "lowercase",
        "ru_stop",
        "ru_stemmer"
      ]
    }
  }
}

First, all html tags are removed from the source string; then the standard tokenizer splits it into tokens; the tokens are lowercased; insignificant words are removed; and the remaining tokens are reduced to the stems of the words.

Create an index

We described the default analyzer above. It will apply to all string fields. Our post contains an array of tags, so the tags would be processed by the analyzer as well. Since we search for posts by exact tag match, we need to disable analysis for the tags field.

Let's create a blog2 index with this analyzer and a mapping in which analysis of the tags field is disabled:

curl -XPOST "$ES_URL/blog2" -d'
{
  "settings": {
    "analysis": {
      "filter": {
        "ru_stop": {
          "type": "stop",
          "stopwords": "_russian_"
        },
        "ru_stemmer": {
          "type": "stemmer",
          "language": "russian"
        }
      },
      "analyzer": {
        "default": {
          "char_filter": [
            "html_strip"
          ],
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "ru_stop",
            "ru_stemmer"
          ]
        }
      }
    }
  },
  "mappings": {
    "post": {
      "properties": {
        "content": {
          "type": "string"
        },
        "published_at": {
          "type": "date"
        },
        "tags": {
          "type": "string",
          "index": "not_analyzed"
        },
        "title": {
          "type": "string"
        }
      }
    }
  }
}'

Let's add the same three posts to this index (blog2). I will omit this step, since it is identical to adding documents to the blog index.
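
To make sure the new analyzer is picked up, we can call the _analyze API scoped to the blog2 index (same API as before); a sketch:

# analyze '<p>БмСшная история</p>' with the default analyzer of blog2;
# the html tags should be stripped and the words stemmed (e.g. 'истор')
curl -XGET "$ES_URL/blog2/_analyze?pretty&analyzer=default&text=%3Cp%3E%D0%A1%D0%BC%D0%B5%D1%88%D0%BD%D0%B0%D1%8F%20%D0%B8%D1%81%D1%82%D0%BE%D1%80%D0%B8%D1%8F%3C%2Fp%3E"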

Full-text search with expression support

Let's get acquainted with one more type of query:

# find documents containing the word 'истории'
# query -> simple_query_string -> query contains the search query
# the title field has priority 3
# the tags field has priority 2
# the content field has priority 1
# priority is used when ranking results
curl -XPOST "$ES_URL/blog2/post/_search?pretty" -d'
{
  "query": {
    "simple_query_string": {
      "query": "истории",
      "fields": [
        "title^3",
        "tags^2",
        "content"
      ]
    }
  }
}'

Since we are using an analyzer with Russian stemming, this query will return all the documents, even though they contain only the form 'история' and not 'истории'.

The query can contain special characters, for example:

""fried eggs" +(eggplant | potato) -frittata"

Query syntax:

  β€’ + signifies AND operation
  β€’ | signifies OR operation
  β€’ - negates a single token
  β€’ " wraps a number of tokens to signify a phrase for searching
  β€’ * at the end of a term signifies a prefix query
  β€’ ( and ) signify precedence
  β€’ ~N after a word signifies edit distance (fuzziness)
  β€’ ~N after a phrase signifies slop amount

# find documents without the word 'Ρ‰Π΅Π½ΠΊΠΈ'
curl -XPOST "$ES_URL/blog2/post/_search?pretty" -d'
{
  "query": {
    "simple_query_string": {
      "query": "-Ρ‰Π΅Π½ΠΊΠΈ",
      "fields": [
        "title^3",
        "tags^2",
        "content"
      ]
    }
  }
}'

# we get the 2 posts about kittens
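
And one more sketch, this time with the prefix operator from the syntax list above; it should match the kitten posts, whose stemmed tokens start with 'ΠΊΠΎΡ‚':

# find documents with a word starting with 'ΠΊΠΎΡ‚'
curl -XPOST "$ES_URL/blog2/post/_search?pretty" -d'
{
  "query": {
    "simple_query_string": {
      "query": "ΠΊΠΎΡ‚*",
      "fields": ["title^3", "tags^2", "content"]
    }
  }
}'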

References

PS

If you are interested in articles and lessons like this one, have ideas for new articles, or have proposals for cooperation, I would be glad to get a private message or an email at [email protected].

Source: habr.com