Deploying Applications Easily and Naturally on Tarantool Cartridge (Part 1)

We have already talked about Tarantool Cartridge, which allows you to develop distributed applications and package them. Now only one thing is left: to learn how to deploy and manage these applications. Don't worry, we've thought of everything! We have put together all the best practices for working with Tarantool Cartridge and written an Ansible role that will deploy the package to your servers, start the instances, join them into a cluster, configure authorization, bootstrap vshard, enable automatic failover, and patch the cluster configuration.

Interested? Then read on: we will tell and show everything.

Let's start with an example

We will cover only part of the role's functionality here. You can always find a complete description of all its features and input parameters in the documentation. But trying it once is better than reading about it a hundred times, so let's deploy a small application.

Tarantool Cartridge has a tutorial for creating a small Cartridge application that stores information about bank customers and their accounts and also provides an API for managing this data over HTTP. For this purpose, the application describes two possible roles, api and storage, that can be assigned to instances.

Cartridge itself says nothing about how to start processes; it only provides the ability to configure instances that are already running. The user must do the rest: distribute the configuration files, start the services, and set up the topology. But we won't do all of this by hand, Ansible will do it for us.

From words to deeds

So, let's deploy our application to two virtual machines and set up a simple topology:

  • The replica set app-1 will implement the api role, which includes the vshard-router role. It will have a single instance.
  • The replica set storage-1 will implement the storage role (and vshard-storage along with it); here we add two instances from different machines.

[Diagram: the target cluster topology with replica sets app-1 and storage-1 spread across two virtual machines]

To run the example, we need Vagrant and Ansible (version 2.8 or later).

The role itself is published on Ansible Galaxy, a repository that allows you to share your work and use ready-made roles.

Clone the repository with an example:

$ git clone https://github.com/dokshina/deploy-tarantool-cartridge-app.git
$ cd deploy-tarantool-cartridge-app && git checkout 1.0.0

Bring up the virtual machines:

$ vagrant up

Install the Tarantool Cartridge ansible role:

$ ansible-galaxy install tarantool.cartridge,1.0.1

Run the installed role:

$ ansible-playbook -i hosts.yml playbook.yml

Wait for the playbook to finish, then go to http://localhost:8181/admin/cluster/dashboard and enjoy the result:

[Screenshot: the Cartridge Web UI cluster dashboard with the configured replica sets]

You can already load data into the cluster. Cool, right?
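
For example, you could try writing a record through the application's HTTP API served by the api role on app-1 (port 8182). The endpoint and payload below are hypothetical and only illustrate the idea; check the Getting Started tutorial for the exact routes, and note that this assumes the Vagrant setup forwards port 8182 the same way it forwards 8181:

$ curl -X POST -H "Content-Type: application/json" \
       -d '{"customer_id": 1, "name": "Elizaveta"}' \
       http://localhost:8182/storage/customers/create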

Now let's figure out how to work with this, and at the same time add another replica set to the topology.

Digging into the details

So, what happened?

We brought up two VMs and ran an Ansible playbook that configured our cluster. Let's look at the contents of playbook.yml:

---
- name: Deploy my Tarantool Cartridge app
  hosts: all
  become: true
  become_user: root
  tasks:
  - name: Import Tarantool Cartridge role
    import_role:
      name: tarantool.cartridge

Nothing interesting happens here: we simply import the Ansible role called tarantool.cartridge.

All the most important things (namely, the cluster configuration) are located in the inventory file hosts.yml:

---
all:
  vars:
    # common cluster variables
    cartridge_app_name: getting-started-app
    cartridge_package_path: ./getting-started-app-1.0.0-0.rpm  # path to package

    cartridge_cluster_cookie: app-default-cookie  # cluster cookie

    # common ssh options
    ansible_ssh_private_key_file: ~/.vagrant.d/insecure_private_key
    ansible_ssh_common_args: '-o IdentitiesOnly=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'

  # INSTANCES
  hosts:
    storage-1:
      config:
        advertise_uri: '172.19.0.2:3301'
        http_port: 8181

    app-1:
      config:
        advertise_uri: '172.19.0.3:3301'
        http_port: 8182

    storage-1-replica:
      config:
        advertise_uri: '172.19.0.3:3302'
        http_port: 8183

  children:
    # GROUP INSTANCES BY MACHINES
    host1:
      vars:
        # first machine connection options
        ansible_host: 172.19.0.2
        ansible_user: vagrant

      hosts:  # instances to be started on the first machine
        storage-1:

    host2:
      vars:
        # second machine connection options
        ansible_host: 172.19.0.3
        ansible_user: vagrant

      hosts:  # instances to be started on the second machine
        app-1:
        storage-1-replica:

    # GROUP INSTANCES BY REPLICA SETS
    replicaset_app_1:
      vars:  # replica set configuration
        replicaset_alias: app-1
        failover_priority:
          - app-1  # leader
        roles:
          - 'api'

      hosts:  # replica set instances
        app-1:

    replicaset_storage_1:
      vars:  # replica set configuration
        replicaset_alias: storage-1
        weight: 3
        failover_priority:
          - storage-1  # leader
          - storage-1-replica
        roles:
          - 'storage'

      hosts:   # replica set instances
        storage-1:
        storage-1-replica:

All we need now is to learn how to manage instances and replica sets by changing the contents of this file. Below we will add new sections to it. To avoid getting confused about where to add them, you can peek at the final version of this file, hosts.updated.yml, which is in the example repository.

Instance management

In Ansible terms, each instance is a host (not to be confused with a physical server), i.e. an infrastructure node that Ansible will manage. For each host, we can specify connection parameters (such as ansible_host and ansible_user) as well as the instance configuration. Instances are described in the hosts section.

Consider the configuration of the storage-1 instance:

all:
  vars:
    ...

  # INSTANCES
  hosts:
    storage-1:
      config:
        advertise_uri: '172.19.0.2:3301'
        http_port: 8181

  ...

In the config variable we specified the instance parameters: the advertise URI and the HTTP port.
Below it are the parameters of the app-1 and storage-1-replica instances.

We need to tell Ansible the connection parameters for each instance. It seems logical to group the instances by virtual machine. To do this, the instances are combined into the groups host1 and host2; in each group, the vars section contains the ansible_host and ansible_user values for one virtual machine, and the hosts section lists the hosts (that is, the instances) included in this group:

all:
  vars:
    ...
  hosts:
    ...
  children:
    # GROUP INSTANCES BY MACHINES
    host1:
      vars:
        # first machine connection options
        ansible_host: 172.19.0.2
        ansible_user: vagrant

      hosts:  # instances to be started on the first machine
        storage-1:

    host2:
      vars:
        # second machine connection options
        ansible_host: 172.19.0.3
        ansible_user: vagrant

      hosts:  # instances to be started on the second machine
        app-1:
        storage-1-replica:

Now let's start changing hosts.yml. We'll add two more instances: storage-2-replica on the first virtual machine and storage-2 on the second:

all:
  vars:
    ...

  # INSTANCES
  hosts:
    ...
    storage-2:  # <==
      config:
        advertise_uri: '172.19.0.3:3303'
        http_port: 8184

    storage-2-replica:  # <==
      config:
        advertise_uri: '172.19.0.2:3302'
        http_port: 8185

  children:
    # GROUP INSTANCES BY MACHINES
    host1:
      vars:
        ...
      hosts:  # instances to be started on the first machine
        storage-1:
        storage-2-replica:  # <==

    host2:
      vars:
        ...
      hosts:  # instances to be started on the second machine
        app-1:
        storage-1-replica:
        storage-2:  # <==
  ...

Run the Ansible playbook:

$ ansible-playbook -i hosts.yml \
                   --limit storage-2,storage-2-replica \
                   playbook.yml

Note the --limit option. Since each cluster instance is a host in Ansible terms, we can explicitly specify which instances should be configured when running the playbook.

Go back to the Web UI at http://localhost:8181/admin/cluster/dashboard and observe our new instances:

[Screenshot: the Web UI showing the new instances storage-2 and storage-2-replica]

We won't rest on our laurels; next, let's master topology management.

Topology management

Let's combine our new instances into the replica set storage-2. Add a new group, replicaset_storage_2, and describe the replica set parameters in its variables, by analogy with replicaset_storage_1. In the hosts section, specify which instances will be included in this group (that is, in our replica set):

---
all:
  vars:
    ...
  hosts:
    ...
  children:
    ...
    # GROUP INSTANCES BY REPLICA SETS
    ...
    replicaset_storage_2:  # <==
      vars:  # replicaset configuration
        replicaset_alias: storage-2
        weight: 2
        failover_priority:
          - storage-2
          - storage-2-replica
        roles:
          - 'storage'

      hosts:   # replicaset instances
        storage-2:
        storage-2-replica:

Let's start the playbook again:

$ ansible-playbook -i hosts.yml \
                   --limit replicaset_storage_2 \
                   --tags cartridge-replicasets \
                   playbook.yml

This time, in the --limit parameter we passed the name of the group that corresponds to our replica set.

Now let's look at the --tags option.

Our role sequentially performs various tasks, which are marked with the following tags:

  • cartridge-instances: instance management (configuring instances and connecting them to membership);
  • cartridge-replicasets: topology management (replicaset management and permanent removal (expel) of instances from the cluster);
  • cartridge-config: manage other cluster parameters (vshard bootstrapping, automatic failover mode, authorization parameters and application configuration).

We can explicitly specify which part of the work we want to do, and the role will skip the remaining tasks. In our case, we want to work only with the topology, so we specified cartridge-replicasets.
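
As an aside, if later on you only want to re-apply the cluster-wide settings, you could limit a run to the cartridge-config tag in the same way:

$ ansible-playbook -i hosts.yml \
                   --tags cartridge-config \
                   playbook.yml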

Let's evaluate the result of our efforts. Find the new replica set at http://localhost:8181/admin/cluster/dashboard.

[Screenshot: the Web UI showing the new storage-2 replica set]

Hooray!

Experiment with reconfiguring instances and replica sets and see how the cluster topology changes. You can try different operational scenarios, for example a rolling update or increasing memtx_memory. The role will try to do this without restarting the instance in order to reduce potential downtime of your application.
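
For example, increasing memtx_memory could look roughly like this: add the parameter to the instance's config section and re-run the playbook for that instance (a sketch with an arbitrary value; see the role documentation for how this parameter is applied):

all:
  hosts:
    storage-1:
      config:
        advertise_uri: '172.19.0.2:3301'
        http_port: 8181
        memtx_memory: 268435456  # 256 MiB, arbitrary example value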

Don't forget to run vagrant halt to stop the VMs when you're done with them.

And what's under the hood?

Here we'll talk in more detail about what happened under the hood of the Ansible role during our experiments.

Let's take a look at deploying a Cartridge application step by step.

Installing the package and starting instances

First, you need to deliver the package to the server and install it. At the moment, the role works with RPM and DEB packages.
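
The RPM from our example (getting-started-app-1.0.0-0.rpm) is the kind of package produced by cartridge-cli; assuming you build it yourself, the command would look roughly like this:

$ cartridge pack rpm ./getting-started-app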

Next, we start the instances. Everything is very simple here: each instance is a separate systemd service. Let me show with an example:

$ systemctl start myapp@storage-1

This command starts the storage-1 instance of the myapp application. The started instance will look for its configuration in /etc/tarantool/conf.d/. Instance logs can be viewed with journald.
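
The file the role places into /etc/tarantool/conf.d/ is a small YAML document keyed by application and instance name, roughly like this (a sketch; the exact file name and layout depend on the role version):

myapp.storage-1:
  advertise_uri: '172.19.0.2:3301'
  http_port: 8181

And the logs can be followed with journalctl, using the same unit name as above:

$ journalctl -u myapp@storage-1 -f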

The unit file /etc/systemd/system/myapp@.service for the systemd template service is delivered with the package.

Ansible has built-in modules for installing packages and managing systemd services; we haven't invented anything new here.
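
To illustrate, this part of the role boils down to something like the following plain Ansible tasks (a simplified sketch, not the role's actual code):

---
- hosts: all
  become: true
  tasks:
    - name: Install the application package
      yum:
        name: /tmp/getting-started-app-1.0.0-0.rpm  # path on the target machine
        state: present

    - name: Start an instance as a systemd service
      systemd:
        name: myapp@storage-1
        state: started
        enabled: true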

Configuring the cluster topology

And this is where the most interesting part begins. You'll agree that it would be strange to bother with a special Ansible role just to install packages and start systemd services.

You can set up the cluster manually:

  • The first option: open the Web UI and click the buttons. This is quite suitable for a one-time setup of a few instances.
  • The second option: use the GraphQL API. Here you can already automate things, for example with a Python script.
  • The third option (for the strong of spirit): go to the server, connect to one of the instances with tarantoolctl connect, and perform all the necessary manipulations with the cartridge Lua module, as in the sketch below.
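
For the third option, a session might look roughly like this. The address is the advertise URI of storage-1 from our inventory; connecting as the admin user with the cluster cookie as the password is an assumption that depends on your cluster settings, and admin_get_servers() is one of the read-only calls from the cartridge Lua API:

$ tarantoolctl connect admin:app-default-cookie@172.19.0.2:3301
172.19.0.2:3301> require('cartridge').admin_get_servers()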

The main task of our role is to do this most difficult part of the work for you.

Ansible allows you to write your own modules and use them in a role. Our role uses such modules to manage the various components of the cluster.

How does it work? You describe the desired state of the cluster in a declarative config, and the role gives each module its own section of the configuration as input. The module receives the current state of the cluster and compares it with the input. Then code that brings the cluster to the desired state is run through a socket on one of the instances.

Results

Today we showed how to deploy your application on Tarantool Cartridge and set up a simple topology. To do this we used Ansible, a powerful and easy-to-use tool that lets you configure many infrastructure nodes at once (in our case, the cluster instances).

Above, we dealt with one of the many ways to describe the cluster configuration with Ansible. Once you feel ready to move on, learn the best practices for writing playbooks. You may find it more convenient to manage the topology with group_vars and host_vars.
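
For example, the variables of the replicaset_storage_1 group could live in their own file next to the inventory, where Ansible picks them up automatically (a possible layout, not the one used in the example repository):

# group_vars/replicaset_storage_1.yml
replicaset_alias: storage-1
weight: 3
failover_priority:
  - storage-1  # leader
  - storage-1-replica
roles:
  - 'storage'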

Very soon we will tell you how to permanently remove (expel) instances from the topology, bootstrap vshard, manage the automatic failover mode, configure authorization, and patch the cluster config. In the meantime, you can study the documentation on your own and experiment with changing cluster parameters.

If something doesn't work, be sure to let us know about the problem. We'll sort it out quickly!

Source: habr.com
