How to Manage Cloud Infrastructure with Terraform


In this article, we will look at what Terraform consists of and then build our own infrastructure step by step in a VMware-based cloud, preparing three VMs for different purposes: a proxy, file storage and a CMS.

We will cover everything in detail, in three stages:

1. Terraform - description, benefits and components

Terraform is an IaC (Infrastructure-as-Code) tool for building and managing virtual infrastructure using code.

While working with the tool, we noted several advantages:

  • Fast deployment of new tenants (custom virtual environments). Usually, the more new clients there are, the more "clicks" technical support staff have to make to publish new resources. With Terraform, users can change virtual machine parameters (for example, automatically shut down the OS and resize a virtual disk partition) without involving technical support or shutting down the machine itself.

  • Instant verification of the activation plan for a new tenant. Using the infrastructure code description, we can immediately check what will be added and in what order, as well as the final state of a given virtual machine or virtual network with its connections to virtual machines.

  • Ability to describe the most popular cloud platforms. The tool can be used with everything from Amazon and Google Cloud to private platforms based on VMware vCloud Director that offer IaaS, SaaS and PaaS solutions.

  • Managing multiple cloud providers and distributing infrastructure between them to improve fault tolerance, using a single configuration to create, diagnose and manage cloud resources.

  • Convenient for creating demo stands for software testing and debugging. You can create and hand over test benches to the testing department, test software in parallel in different environments, and instantly change or delete resources by creating just one resource-building plan.

"Terrarium" Terraform

We have briefly covered the advantages of the tool; now let's break it down into its components.

Providers.

In Terraform, almost any type of infrastructure can be represented as a resource. The connection between resources and the API platform is provided by provider modules, which allow you to create resources within a specific platform, such as Azure or VMware vCloud Director.

Within a project, you can interact with different providers on different platforms.

Resources (resource descriptions).

Resource descriptions allow you to manage platform components, such as virtual machines or networks.

You can create a resource description for the VMware vCloud Director provider yourself and use it to create resources with any hosting provider that uses vCloud Director. You only need to change the authentication parameters and the network connection parameters for the required hosting provider.

Provisioners.

This component makes it possible to perform operations for the initial installation and maintenance of the operating system after the virtual machines are created. Once you have created a virtual machine resource, you can use provisioners to configure it, connect via SSH, update the operating system, and download and execute a script.

Input and Output variables.

Input variables are input values for any type of block.

Output variables make it possible to save values after resources are created, and they can be used as input variables in other modules, for example in Provisioners blocks.
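As a minimal illustrative sketch (the variable name is hypothetical and not part of this project), an input variable and an output variable that exposes it look like this:

variable "vm_memory" {
  description = "RAM for the virtual machine, in MB"
  default     = 8192
}

output "vm_memory" {
  value = var.vm_memory
}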

States.

State files store information about the configuration of the provider platform's resources. When the platform is first created, there is no resource information yet, and before any operation Terraform refreshes the state against the real infrastructure of the resources already described.

The main purpose of state is to store the bindings between the objects of already created resources and the resource configuration, so that the configuration of added resources can be compared with what actually exists and unnecessary re-creation of and changes to the platform can be avoided.

State information is stored locally in the terraform.tfstate file by default, but you can use remote storage for team work if needed.
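As a hedged illustration (the backend type, bucket and key here are assumptions, not part of this project), remote state for team work could be configured like this:

terraform {
  backend "s3" {
    # Hypothetical bucket and key; replace with your own remote storage
    bucket = "example-terraform-state"
    key    = "project01/terraform.tfstate"
    region = "eu-west-1"
  }
}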

You can also import existing platform resources that were created without Terraform into the state, so that you can then interact with them from other resources.

2. Creation of infrastructure

We have covered the components; now, with the help of Terraform, we will gradually create an infrastructure with three virtual machines: the first with the nginx proxy server installed, the second with Nextcloud-based file storage, and the third with the Bitrix CMS.

We will write and execute the code using our own VMware vCloud Director clouds as an example. Our users get an account with Organization Administrator rights; if you use an account with the same rights in another VMware cloud, you will be able to reproduce the code from our examples. Go!

First, let's create a directory for our new project, which will contain files describing the infrastructure.

mkdir project01

Then we describe the infrastructure components. Terraform creates the relationships between resources and processes them based on the descriptions in these files. The files themselves can be named after the purpose of the blocks they describe, for example, network.tf describes the network parameters for the infrastructure.

To describe the components of our infrastructure, we have created the following files:

List of files.

main.tf - description of the virtual environment parameters: virtual machines and vApp containers;

network.tf - description of virtual network parameters and NAT, Firewall rules;

variables.tf - a list of variables that we use;

vcd.tfvars - project variable values for the VMware vCloud Director module.

The configuration language in Terraform is declarative, so the order of blocks does not matter, except for provisioner blocks: there we describe the commands to be executed when preparing the infrastructure, and they are executed in order.

Block structure.

<BLOCK TYPE> "<BLOCK LABEL>" "<BLOCK LABEL>" {

# Block body

<IDENTIFIER> = <EXPRESSION> # Argument

}

Blocks are described in Terraform's own configuration language, HCL (HashiCorp Configuration Language); it is also possible to describe the infrastructure in JSON. For more information on the syntax, see the developer's website.
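For example, the same minimal variable block can be written in HCL or, equivalently, in Terraform's JSON syntax (a small illustrative sketch, not one of this project's files):

variable "vcd_org" {
  description = "vCD Tenant Org"
}

The equivalent block in a *.tf.json file:

{
  "variable": {
    "vcd_org": {
      "description": "vCD Tenant Org"
    }
  }
}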

Environment variable configuration, variables.tf and vcd.tfvars

Let's create two files that describe the list of all variables used and their values for the VMware vCloud Director module. First, we create the variables.tf file.

The contents of the variables.tf file.

variable "vcd_org_user" {

  description = "vCD Tenant User"

}

variable "vcd_org_password" {

  description = "vCD Tenant Password"

}

variable "vcd_org" {

  description = "vCD Tenant Org"

}

variable "vcd_org_vdc" {

  description = "vCD Tenant VDC"

}

variable "vcd_org_url" {

  description = "vCD Tenant URL"

}

variable "vcd_org_max_retry_timeout" {

  default = "60"

}

variable "vcd_org_allow_unverified_ssl" {

  default = "true"

}

variable "vcd_org_edge_name" {

  description = "vCD edge name"

}

variable "vcd_org_catalog" {

  description = "vCD public catalog"

}

variable "vcd_template_os_centos7" {

  description = "OS CentOS 7"

  default = "CentOS7"

}

variable "vcd_org_ssd_sp" {

  description = "Storage Policies"

  default = "Gold Storage Policy"

}

variable "vcd_org_hdd_sp" {

  description = "Storage Policies"

  default = "Bronze Storage Policy"

}

variable "vcd_edge_local_subnet" {

  description = "Organization Network Subnet"

}

variable "vcd_edge_external_ip" {

  description = "External public IP"

}

variable "vcd_edge_local_ip_nginx" {}

variable "vcd_edge_local_ip_bitrix" {}

variable "vcd_edge_local_ip_nextcloud" {}

variable "vcd_edge_external_network" {}

Variable values that we receive from the provider:

  • vcd_org_user - username with Organization Administrator rights,

  • vcd_org_password - user password,

  • vcd_org - organization name,

  • vcd_org_vdc - the name of the virtual data center,

  • vcd_org_url - API URL,

  • vcd_org_edge_name - virtual router name,

  • vcd_org_catalog - the name of the directory with virtual machine templates,

  • vcd_edge_external_ip - public IP address,

  • vcd_edge_external_network - the name of the external network,

  • vcd_org_hdd_sp - HDD storage policy name,

  • vcd_org_ssd_sp - SSD storage policy name.

And we set our own variables:

  • vcd_edge_local_ip_nginx - IP address of the virtual machine with NGINX,

  • vcd_edge_local_ip_bitrix - IP address of the virtual machine with 1C-Bitrix,

  • vcd_edge_local_ip_nextcloud - IP address of the virtual machine with Nextcloud.

In the second file, vcd.tfvars, we create and specify the variable values for the VMware vCloud Director module. Recall that in our example we use our own mClouds cloud; if you work with another provider, check the values with them.

The contents of the vcd.tfvars file.

vcd_org_url = "https://vcloud.mclouds.ru/api"

vcd_org_user = "orgadmin"

vcd_org_password = "*"

vcd_org = "org"

vcd_org_vdc = "orgvdc"

vcd_org_max_retry_timeout = 60

vcd_org_allow_unverified_ssl = true

vcd_org_catalog = "Templates"

vcd_template_os_centos7 = "CentOS7"

vcd_org_ssd_sp = "Gold Storage Policy"

vcd_org_hdd_sp = "Bronze Storage Policy"

vcd_org_edge_name = "MCLOUDS-EDGE"

vcd_edge_external_ip = "185.17.66.1"

vcd_edge_local_subnet = "192.168.110.0/24"

vcd_edge_local_ip_nginx = "192.168.110.1"

vcd_edge_local_ip_bitrix = "192.168.110.10"

vcd_edge_local_ip_nextcloud = "192.168.110.11"

vcd_edge_external_network = "NET-185-17-66-0"
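The article does not show the provider configuration itself; as a hedged sketch, these variables would typically be passed to the vcd provider block (for example, at the top of main.tf) roughly like this, with the argument names checked against your provider version:

provider "vcd" {
  user                 = var.vcd_org_user
  password             = var.vcd_org_password
  org                  = var.vcd_org
  vdc                  = var.vcd_org_vdc
  url                  = var.vcd_org_url
  max_retry_timeout    = var.vcd_org_max_retry_timeout
  allow_unverified_ssl = var.vcd_org_allow_unverified_ssl
}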

Network configuration, network.tf.

The environment variables are set; now let's configure the virtual machine connection scheme: assign a private IP address to each virtual machine and use Destination NAT to "forward" ports from the external network. To restrict access to the management ports, we will allow access only from our own IP address.

Network diagram for the created Terraform platform

We create a virtual organization network named net_lan01 with the default gateway 192.168.110.254 and the address space 192.168.110.0/24.

Describe the virtual network.

resource "vcd_network_routed" "net" {

  name = "net_lan01"

  edge_gateway = var.vcd_org_edge_name

  gateway = "192.168.110.254"

  dns1 = "1.1.1.1"

  dns2 = "8.8.8.8"

  static_ip_pool {

    start_address = "192.168.110.1"

    end_address = "192.168.110.253"

  }

}

Let's create firewall rules that give the virtual machines access to the Internet. Within this block, all virtual resources in the cloud will have access to the Internet:

We describe the rules for VM access to the Internet.

resource "vcd_nsxv_firewall_rule" "fw_internet_access" {

  edge_gateway   = var.vcd_org_edge_name

  name = "Internet Access"

  source {

gateway_interfaces = ["internal"]

  }

  destination {

gateway_interfaces = ["external"]

  }

  service {

protocol = "any"

  }

  depends_on = [vcd_network_routed.net]

}

Here we establish the dependency explicitly through depends_on: only after the vcd_network_routed.net block has been processed do we proceed to configuring the vcd_nsxv_firewall_rule block. We use this option because such a dependency cannot always be recognized implicitly from the configuration.
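As a short illustration (these lines are not from the configuration above), referencing another resource's attribute already creates an implicit dependency, whereas depends_on is needed when no such reference exists:

# Implicit dependency: Terraform sees the reference to the network resource
name = vcd_network_routed.net.name

# Explicit dependency: nothing from the network resource is referenced,
# so the ordering has to be declared manually
depends_on = [vcd_network_routed.net]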

Next, we create rules that allow access to ports from the external network and specify our IP address for SSH connections to the servers. Any Internet user has access to ports 80 and 443 on the web server, and the user with IP address 90.1.15.1 has access to the SSH ports of the virtual servers.

We allow access to ports from the external network.

resource "vcd_nsxv_firewall_rule" "fwnatports" {

  edge_gateway   = var.vcd_org_edge_name

  name = "HTTPs Access"

  source {

gateway_interfaces = ["external"]

  }

  destination {

  gateway_interfaces = ["internal"]

  }

  service {

protocol = "tcp"

port = "80"

  }

  service {

protocol = "tcp"

port = "443"

  }

  depends_on = [vcd_network_routed.net]

}

resource "vcd_nsxv_firewall_rule" "fw_nat_admin_ports" {

  edge_gateway   = var.vcd_org_edge_name

  name = "Admin Access"

  source {

  ip_addresses = [ "90.1.15.1" ]

  }

  destination {

  gateway_interfaces = ["internal"]

  }

  service {

protocol = "tcp"

port = "58301"

  }

  service {

protocol = "tcp"

port = "58302"

  }

  service {

protocol = "tcp"

port = "58303"

  }

  depends_on = [vcd_network_routed.net]

}

Create Source NAT rules for accessing the Internet from the cloud local area network:

Describe the Source NAT rules.

resource "vcd_nsxv_snat" "snat_local" {

edge_gateway = var.vcd_org_edge_name

  network_type = "ext"

  network_name = var.vcdedgeexternalnetwork

  original_address   = var.vcd_edge_local_subnet

translated_address = var.vcd_edge_external_ip

  depends_on = [vcd_network_routed.net]

}

And at the end of the network block configuration, we add Destination NAT rules for accessing services from an external network:

Adding Destination NAT rules.

resource "vcd_nsxv_dnat" "dnat_tcp_nginx_https" {
edge_gateway = var.vcd_org_edge_name
network_name = var.vcd_edge_external_network
network_type = "ext"

  description = "NGINX HTTPs"

original_address = var.vcd_edge_external_ip
original_port = 443

translated_address = var.vcd_edge_local_ip_nginx
translated_port = 443
protocol = "tcp"

depends_on = [vcd_network_routed.net]
}
resource "vcd_nsxv_dnat" "dnat_tcp_nginx_http" {
edge_gateway = var.vcd_org_edge_name
network_name = var.vcd_edge_external_network
network_type = "ext"

description = "NGINX HTTP"

original_address = var.vcd_edge_external_ip
original_port = 80

translated_address = var.vcd_edge_local_ip_nginx
translated_port = 80
protocol = "tcp"

depends_on = [vcd_network_routed.net]

}

Add a NAT rule for port translation to the SSH server on the Nginx machine.

resource "vcd_nsxv_dnat" "dnat_tcp-nginx_ssh" {
edge_gateway = var.vcd_org_edge_name
network_name = var.vcd_edge_external_network
network_type = "ext"

description = "SSH NGINX"

original_address = var.vcd_edge_external_ip
original_port = 58301

translated_address = var.vcd_edge_local_ip_nginx
translated_port = 22
protocol = "tcp"

depends_on = [vcd_network_routed.net]

}

We add a NAT rule for port translation to the SSH server with 1C-Bitrix.

resource "vcd_nsxv_dnat" "dnat_tcp_bitrix_ssh" {
edge_gateway = var.vcd_org_edge_name
network_name = var.vcd_edge_external_network
network_type = "ext"

description = "SSH Bitrix"

original_address = var.vcd_edge_external_ip
original_port = 58302

translated_address = var.vcd_edge_local_ip_bitrix
translated_port = 22
protocol = "tcp"

depends_on = [vcd_network_routed.net]

}

Add a NAT rule to translate ports to the SSH server with Nextcloud.

resource "vcd_nsxv_dnat" "dnat_tcp_nextcloud_ssh" {
edge_gateway = var.vcd_org_edge_name
network_name = var.vcd_edge_external_network
network_type = "ext"

description = "SSH Nextcloud"

original_address = var.vcd_edge_external_ip
original_port = 58303

translated_address = var.vcd_edge_local_ip_nextcloud
translated_port = 22
protocol = "tcp"

depends_on = [vcd_network_routed.net]

}

Virtual environment configuration, main.tf.

As we planned at the beginning of the article, we will create three virtual machines. They will be prepared using Guest Customization: the network parameters will be set according to the settings we specified, and the user password will be generated automatically.

Let's describe the vApp in which the virtual machines and their configuration will be located.

Virtual machine configuration

Let's create a vApp container. So that the vApp and the VMs can be connected to the virtual network right away, we also add the depends_on parameter:

Create a container

resource "vcd_vapp" "vapp" {
name = "web"
power_on = "true" depends_on = [vcd_network_routed.net]

}

Create a virtual machine with a description

resource "vcd_vapp_vm" "nginx" {

vapp_name = vcd_vapp.vapp.name

name = "nginx"

catalog_name = var.vcd_org_catalog

template_name = var.vcd_template_os_centos7

storage_profile = var.vcd_org_ssd_sp

memory = 8192

cpus = 1

cpu_cores = 1

network {

type = "org"

name = vcd_network_routed.net.name

is_primary = true

adapter_type = "VMXNET3"

ip_allocation_mode = "MANUAL"

ip = var.vcd_edge_local_ip_nginx

}

override_template_disk {

bus_type = "paravirtual"

size_in_mb = "32768"

bus_number = 0

unit_number = 0

storage_profile = var.vcd_org_ssd_sp

}

}

The main parameters in the description of the VM:

  • name is the name of the virtual machine,

  • vapp_name - the name of the vApp to which the new VM is added,

  • catalog_name / template_name - catalog name and virtual machine template name,

  • storage_profile - default storage policy.

Network block parameters:

  • type — type of connected network,

  • name - which virtual network to connect the VM to,

  • is_primary - primary network adapter,

  • ip_allocation_mode - MANUAL / DHCP / POOL address allocation mode,

  • ip - IP address for the virtual machine, we will specify it manually.

override_template_disk block:

  • size_in_mb - boot disk size for the virtual machine,

  • storage_profile - storage policy for the disk
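If you want to control Guest Customization explicitly instead of relying on the template defaults, a customization block can also be added inside the vcd_vapp_vm resource. A hedged sketch (the argument names match the customization block that appears in the terraform show output later in this article):

customization {
  # Enable guest customization and let vCloud generate the local admin password
  enabled                    = true
  allow_local_admin_password = true
  auto_generate_password     = true
}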

Let's create a second VM with a description of the Nextcloud file storage

resource "vcd_vapp_vm" "nextcloud" {

vapp_name = vcd_vapp.vapp.name

name = "nextcloud"

catalog_name = var.vcd_org_catalog

template_name = var.vcd_template_os_centos7

storage_profile = var.vcd_org_ssd_sp

memory = 8192

cpus = 1

cpu_cores = 1

network {

type = "org"

name = vcd_network_routed.net.name

is_primary = true

adapter_type = "VMXNET3"

ip_allocation_mode = "MANUAL"

ip = var.vcd_edge_local_ip_nextcloud

}

override_template_disk {

bus_type = "paravirtual"

size_in_mb = "32768"

bus_number = 0

unit_number = 0

storage_profile = var.vcd_org_ssd_sp

}

}

resource "vcd_vm_internal_disk" "disk1" {

vapp_name = vcd_vapp.vapp.name

vm_name = "nextcloud"

bus_type = "paravirtual"

size_in_mb = "102400"

bus_number = 0

unit_number = 1

storage_profile = var.vcd_org_hdd_sp

allow_vm_reboot = true

depends_on = [ vcd_vapp_vm.nextcloud ]

}

In the vcd_vm_internal_disk section, we describe a new virtual disk that is connected to the virtual machine.

Explanations of the vcd_vm_internal_disk block:

  • bus_type - disk controller type,

  • size_in_mb - disk size,

  • bus_number / unit_number - connection point on the adapter,

  • storage_profile - storage policy for the disk

Let's describe the last VM, with Bitrix.

resource "vcd_vapp_vm" "bitrix" {

vapp_name = vcd_vapp.vapp.name

name = "bitrix"

catalog_name = var.vcd_org_catalog

template_name = var.vcd_template_os_centos7

storage_profile = var.vcd_org_ssd_sp

memory = 8192

cpus = 1

cpu_cores = 1

network {

type = "org"

name = vcd_network_routed.net.name

is_primary = true

adapter_type = "VMXNET3"

ip_allocation_mode = "MANUAL"

ip = var.vcd_edge_local_ip_bitrix

}

override_template_disk {

bus_type = "paravirtual"

size_in_mb = "81920"

bus_number = 0

unit_number = 0

storage_profile = var.vcd_org_ssd_sp

}

}

OS update and installation of additional scripts

The network is prepared and the virtual machines are described. Before applying our infrastructure, we can perform the initial provisioning using provisioner blocks, without resorting to Ansible.

Let's consider how to update the OS and run the Bitrix CMS installation script using the provisioner block.

First, let's install the CentOS updates.

resource "null_resource" "nginx_update_install" {

provisioner "remote-exec" {

connection {

type = "ssh"

user = "root"

password = vcd_vapp_vm.nginx.customization[0].admin_password

host = var.vcd_edge_external_ip

port = "58301"

timeout = "30s"

}

inline = [

"yum -y update && yum -y upgrade",

"yum -y install wget nano epel-release net-tools unzip zip" ]

}

}


Designation of components:

  • provisioner "remote-exec" - connects the remote provisioning block,

  • In the connection block, we describe the type and parameters for the connection:

  • type - protocol, in our case SSH;

  • user - username;

  • password - the user's password. In our case, we point to the vcd_vapp_vm.nginx.customization[0].admin_password attribute, which stores the password generated for the system user;

  • host — external IP address for connection;

  • port - port for connection, which was previously specified in the DNAT settings;

  • inline - the list of commands to be executed. The commands are executed in the order they appear in this section.

As an example, let's additionally execute the 1C-Bitrix installation script. The output of the script will be available while the plan is being applied. To install the script, we first describe the block:

Let's describe the installation of 1C-Bitrix.

provisioner "file" {

source = "prepare.sh"

destination = "/tmp/prepare.sh"

connection {

type = "ssh"

user = "root"

password = vcd_vapp_vm.nginx.customization[0].admin_password

host = var.vcd_edge_external_ip

port = "58301"

timeout = "30s"

}

}

provisioner "remote-exec" {

inline = [

"chmod +x /tmp/prepare.sh", "./tmp/prepare.sh"

]

}
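The file and remote-exec blocks above are shown as fragments. A hedged sketch of one way to wrap them in their own null_resource, with the connection declared once at the resource level so both provisioners inherit it (the resource name here is hypothetical):

resource "null_resource" "nginx_prepare" {
  # Shared SSH connection used by both provisioners below
  connection {
    type     = "ssh"
    user     = "root"
    password = vcd_vapp_vm.nginx.customization[0].admin_password
    host     = var.vcd_edge_external_ip
    port     = "58301"
    timeout  = "30s"
  }

  # Upload the installation script to the VM
  provisioner "file" {
    source      = "prepare.sh"
    destination = "/tmp/prepare.sh"
  }

  # Make the script executable and run it
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/prepare.sh",
      "/tmp/prepare.sh"
    ]
  }
}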

And we will immediately describe the Bitrix update.

An example of 1C-Bitrix provisioning.

resource "null_resource" "install_update_bitrix" {

provisioner "remote-exec" {

connection {

type = "ssh"

user = "root"

password = vcd_vapp_vm.bitrix.customization[0].admin_password

host = var.vcd_edge_external_ip

port = "58302"

timeout = "60s"

}

inline = [

"yum -y update && yum -y upgrade",

"yum -y install wget nano epel-release net-tools unzip zip",

"wget http://repos.1c-bitrix.ru/yum/bitrix-env.sh -O /tmp/bitrix-env.sh",

"chmod +x /tmp/bitrix-env.sh",

"/tmp/bitrix-env.sh"

]

}

}

Important! The script may not work if SELinux is not disabled beforehand! If you need a detailed article on installing and configuring the 1C-Bitrix CMS using bitrix-env.sh, you can refer to the article on our blog.
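As a hedged example (these are standard CentOS commands rather than part of the original configuration), SELinux could be switched off from the same inline list before bitrix-env.sh runs: setenforce 0 puts it into permissive mode for the current session, and the sed command disables it permanently after a reboot.

"setenforce 0",
"sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config",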

3. Infrastructure initialization

Initialization of modules and plugins

For this work, we use a simple "gentleman's set": a laptop with Windows 10 and a distribution kit from the official website terraform.io. Unpack it and initialize it with the command: terraform.exe init

After describing the computing and network infrastructure, we run planning to check our configuration, where we can see what will be created and how the resources are connected to each other.

  1. Execute the command - terraform plan -var-file=vcd.tfvars.

  2. We get the result - Plan: 16 to add, 0 to change, 0 to destroy. That is, according to this plan, 16 resources will be created.

  3. Apply the plan with the command - terraform.exe apply -var-file=vcd.tfvars.

The virtual machines will be created, and then the packages we listed will be installed within the provisioner section: the OS will be updated and the Bitrix CMS will be installed.

Getting connection data

After the plan has been applied, we want to receive the connection data for the servers in text form. To do this, we add an output section as follows:

output "nginxpassword" {

 value = vcdvappvm.nginx.customization[0].adminpassword

}

And the following output gives us the password of the created virtual machine:

Outputs: nginx_password = F#4u8!!N
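In the same way, hedged output blocks for the passwords of the other two machines could be added (these are not in the original article, but they reference the resources described above):

output "bitrix_password" {
  value = vcd_vapp_vm.bitrix.customization[0].admin_password
}

output "nextcloud_password" {
  value = vcd_vapp_vm.nextcloud.customization[0].admin_password
}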

As a result, we get access to virtual machines with an updated operating system and pre-installed packages for our further work. All is ready!

But what if you already have an existing infrastructure?

3.1. Terraform working with existing infrastructure

It's simple: you can import existing virtual machines and their vApp containers using the import command.

Let's describe the vApp resource and the virtual machine.

resource "vcd_vapp" "Monitoring" {

name = "Monitoring"

org = "mClouds"

vdc = "mClouds"

}

resource "vcd_vapp_vm" "Zabbix" {

name = "Zabbix"

org = "mClouds"

vdc = "mClouds"

vapp = "Monitoring"

}

The next step is to import the vApp resource properties in the format vcd_vapp.<vApp> <org>.<org_vdc>.<vApp>, where:

  • vApp is the name of the vApp;

  • org is the name of the organization;

  • org_vdc is the name of the virtual data center.

Import vApp resource properties
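Following this format, the import command for the vApp from the example below would look roughly like this (a hedged reconstruction of what the screenshot showed):

terraform import vcd_vapp.Monitoring mClouds.mClouds.Monitoring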

Let's import the properties of the VM resources in the format vcd_vapp_vm.<VM> <org>.<org_vdc>.<vApp>.<VM>, where:

  • VM - VM name;

  • vApp is the name of the vApp;

  • org is the name of the organization;

  • org_vdc is the name of the virtual data center.

Import was successful

C:\Users\Mikhail\Desktop\terraform>terraform import vcd_vapp_vm.Zabbix mClouds.mClouds.Monitoring.Zabbix

vcd_vapp_vm.Zabbix: Importing from ID "mClouds.mClouds.Monitoring.Zabbix"...

vcd_vapp_vm.Zabbix: Import prepared!

Prepared vcd_vapp_vm for import

vcd_vapp_vm.Zabbix: Refreshing state... [id=urn:vcloud:vm:778f4a89-1c8d-45b9-9d94-0472a71c4d1f]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Now we can look at the newly imported resource:

Imported resource

> terraform show

...

# vcd_vapp.Monitoring:

resource "vcd_vapp" "Monitoring" {

guest_properties = {}

href = "https://vcloud.mclouds.ru/api/vApp/vapp-fe5db285-a4af-47c4-93e8-55df92f006ec"

id = "urn:vcloud:vapp:fe5db285-a4af-47c4-93e8-55df92f006ec"

ip = "allocated"

metadata = {}

name = "Monitoring"

org = "mClouds"

status = 4

status_text = "POWERED_ON"

vdc = "mClouds"

}

# vcd_vapp_vm.Zabbix:

resource "vcd_vapp_vm" "Zabbix" {

computer_name = "Zabbix"

cpu_cores = 1

cpus = 2

expose_hardware_virtualization = false

guest_properties = {}

hardware_version = "vmx-14"

href = "https://vcloud.mclouds.ru/api/vApp/vm-778f4a89-1c8d-45b9-9d94-0472a71c4d1f"

id = "urn:vcloud:vm:778f4a89-1c8d-45b9-9d94-0472a71c4d1f"

internal_disk = [

{

bus_number = 0

bus_type = "paravirtual"

disk_id = "2000"

iops = 0

size_in_mb = 122880

storage_profile = "Gold Storage Policy"

thin_provisioned = true

unit_number = 0

},

]

memory = 8192

metadata = {}

name = "Zabbix"

org = "mClouds"

os_type = "centos8_64Guest"

storage_profile = "Gold Storage Policy"

vapp_name = "Monitoring"

vdc = "mClouds"

customization {

allow_local_admin_password = true

auto_generate_password = true

change_sid = false

enabled = false

force = false

join_domain = false

join_org_domain = false

must_change_password_on_first_login = false

number_of_auto_logons = 0

}

network {

adapter_type = "VMXNET3"

ip_allocation_mode = "DHCP"

is_primary = true

mac = "00:50:56:07:01:b1"

name = "MCLOUDS-LAN01"

type = "org"

}

}

Now we are definitely ready: we have dealt with the last point (importing an existing infrastructure) and have covered all the main aspects of working with Terraform.

The tool turned out to be very convenient and allows you to describe your infrastructure as code, from the virtual machines of a single cloud provider to the resources of network components.

At the same time, independence from the environment makes it possible to work with both local and cloud resources, all the way up to platform management. And if a platform is not supported and you want to add a new one, you can write your own provider and use it.

Source: habr.com
