Configuring DRBD to replicate storage to two CentOS 7 servers

This translation was prepared ahead of the launch of the Linux Administrator: Virtualization and Clustering course.

DRBD (Distributed Replicated Block Device) is a distributed, flexible, and versatile replicated storage solution for Linux. It mirrors the contents of block devices such as hard drives, partitions, and logical volumes between servers. It keeps copies of the data on two storage devices, so that if one fails, the data on the other can still be used.

You could think of it as something like a network RAID-1 configuration with the disks mirrored across servers. However, it operates quite differently from RAID, even network RAID.

Initially, DRBD was used mainly in high availability (HA) computer clusters, but, starting with version 9, it can also be used to deploy cloud storage solutions.

In this article, we will explain how to install DRBD on CentOS and briefly demonstrate how to use it to replicate storage (a partition) across two servers. This is the perfect article to get started with DRBD on Linux.

Test environment

We will be using a two-node cluster for this setup.

  • Node 1: 192.168.56.101 - tecmint.tecmint.lan
  • Node 2: 192.168.56.102 - server1.tecmint.lan

Step 1: Installing the DRBD Packages

DRBD is implemented as a Linux kernel module. It is a driver for a virtual block device, so it sits at the very bottom of the system's I/O stack.

DRBD can be installed from ELRepo or EPEL. Let's start by importing the ELRepo package signing key and enabling the repository on both nodes as shown below.

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

Then you need to install the DRBD kernel module and utilities on both nodes with:

# yum install -y kmod-drbd84 drbd84-utils

If you have SELinux enabled, you need to set policies to exempt DRBD processes from SELinux control:

# semanage permissive -a drbd_t

Also, if your system is running a firewall (firewalld), you need to open the DRBD port 7789 to allow data synchronization between the two nodes.

Run these commands for the first node:

# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.102" port port="7789" protocol="tcp" accept'
# firewall-cmd --reload

Then run these commands for the second node:

# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.101" port port="7789" protocol="tcp" accept'
# firewall-cmd --reload

Step 2: Prepare Low-Level Storage

Now that we have DRBD installed on both cluster nodes, we need to provision storage areas of approximately the same size on them. This could be a hard drive partition (or an entire physical hard drive), a software RAID device, an LVM logical volume, or any other block device type found on your system.

For this article, we will zero out roughly 2 GB of a test block device with the dd command (careful: this destroys any existing data on the target):

# dd if=/dev/zero of=/dev/sdb1 bs=2024k count=1024

We will assume this is an unused partition (/dev/sdb1) on a second block device (/dev/sdb) attached to both nodes.
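Before configuring DRBD, it can save debugging time to confirm on both nodes that the backing device actually exists and is a block device. A minimal sketch (require_blockdev is our own hypothetical helper, not a DRBD tool):

```shell
#!/bin/sh
# require_blockdev: fail unless the given path exists and is a block device.
# Hypothetical helper - run it on both nodes before configuring DRBD.
require_blockdev() {
    if [ -b "$1" ]; then
        echo "OK: $1 is a block device"
    else
        echo "ERROR: $1 is missing or not a block device" >&2
        return 1
    fi
}
```

On each node you would then run, for example, require_blockdev /dev/sdb1 before proceeding.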

Step 3: Configure DRBD

The main DRBD configuration file is /etc/drbd.conf, and additional configuration files can be found in the /etc/drbd.d directory.

To replicate storage, we need to add the necessary configuration to /etc/drbd.d/global_common.conf, which contains the global and common sections of the DRBD configuration, and define resources in .res files.

Let's make a backup copy of the original file on both nodes, and then open a new file for editing (use a text editor of your choice).

# mv /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.orig
# vim /etc/drbd.d/global_common.conf 

Add the following lines to the file on both nodes:

global {
 usage-count  yes;
}
common {
 net {
  protocol C;
 }
}

Save the file and then close the editor.

Let's look briefly at the protocol C line. DRBD supports three different replication modes (that is, three degrees of replication synchronicity), namely:

  • protocol A: asynchronous replication protocol; most commonly used in long distance replication scenarios.
  • protocol B: semi-synchronous replication protocol, also known as memory-synchronous protocol.
  • protocol C: typically used for nodes in networks with short distances; this is by far the most commonly used replication protocol in DRBD setups.

Important: the choice of replication protocol affects two deployment factors: protection and latency. Throughput, by contrast, is largely independent of the chosen replication protocol.
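Since protocol is a net-section option, it can also be set per resource instead of globally. For illustration only, a hypothetical long-distance resource could override the common setting like this (the resource name and comment are our own, not part of this setup):

```
resource longdist {
        net {
                protocol A;   # asynchronous - suited to long-distance links
        }
        # the usual device/disk/meta-disk/address options would follow here
}
```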

Step 4: Adding a Resource

Resource is the collective term that refers to all aspects of a particular replicated dataset. We will define our resource in the file /etc/drbd.d/test.res.

Add the following to the file on both nodes (don't forget to replace the variables with the actual values for your environment).

Pay attention to the hostnames; we need to specify the network hostname, which can be obtained with the uname -n command.

resource test {
        on tecmint.tecmint.lan {
                device /dev/drbd0;
                disk /dev/sdb1;
                meta-disk internal;
                address 192.168.56.101:7789;
        }
        on server1.tecmint.lan {
                device /dev/drbd0;
                disk /dev/sdb1;
                meta-disk internal;
                address 192.168.56.102:7789;
        }
}

where:

  • on hostname: The on section states the node that the nested configuration statements apply to.
  • test: This is the name of the new resource.
  • device /dev/drbd0: Specifies the new virtual block device managed by DRBD.
  • disk /dev/sdb1: The block device partition that backs the DRBD device.
  • meta-disk internal: Defines where DRBD stores its metadata; internal means it is stored on the same physical low-level device as the actual production data.
  • address: Specifies the IP address and port number of the corresponding host.

Also note that if parameters have the same values on both hosts, you can specify them directly in the resource section.

For example, the configuration above could be restructured to:

resource test {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
        on tecmint.tecmint.lan {
                address 192.168.56.101:7789;
        }
        on server1.tecmint.lan {
                address 192.168.56.102:7789;
        }
}
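One common pitfall: drbdadm refuses to bring a resource up when the local hostname does not match any on section in the .res file. A small sketch that checks for this early (check_res_host is our own hypothetical helper, not a DRBD tool):

```shell
#!/bin/sh
# check_res_host: succeed if the given hostname appears in an
# "on <hostname>" section of the given resource file.
check_res_host() {
    file=$1
    host=$2
    grep -qE "^[[:space:]]*on[[:space:]]+${host}([[:space:]]|\{|$)" "$file"
}
```

For example, check_res_host /etc/drbd.d/test.res "$(uname -n)" should succeed on each node before you run drbdadm up.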

Step 5: Initialize and Start the Resource

To interact with DRBD, we will use the following administration tools (which interact with the kernel module to configure and administer DRBD resources):

  • drbdadm: the high-level DRBD administration tool.
  • drbdsetup: a lower-level administration tool for attaching DRBD devices to their backing devices, setting up DRBD device pairs to mirror their backing devices, and inspecting the configuration of running DRBD devices.
  • drbdmeta: the metadata management tool.

After adding all the initial resource configuration, we need to create the device metadata on both nodes:

# drbdadm create-md test

Initializing the metadata store

Next, we need to bring the resource up, which will attach it to its backing device, set the replication parameters, and connect it to its peer:

# drbdadm up test

Now, if you run the lsblk command, you will notice that the DRBD device/volume drbd0 is associated with its backing device /dev/sdb1:

# lsblk

List of block devices

To disable a resource, run:

# drbdadm down test

To check the status of the resource, run the following command (note that at this stage the disk status is expected to be Inconsistent/Inconsistent):

# drbdadm status test
OR
# drbdsetup status test --verbose --statistics 	# for a more detailed status

Checking the status of the resource on the nodes

Step 6: Set the Primary Node as the Source of the Initial Device Sync

At this stage, DRBD is ready to go. We now need to specify which node should be used as the source of the initial device synchronization.

Run the following command on only one node to start the initial full sync:

# drbdadm primary --force test
# drbdadm status test

Setting the primary node as the source of the initial sync
After synchronization is complete, the status of both disks should be UpToDate.
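If you want to script the wait for the initial sync, the disk states in the drbdadm status output can be checked mechanically. A rough sketch (sync_complete is our own helper; it relies only on the UpToDate/Inconsistent state keywords):

```shell
#!/bin/sh
# sync_complete: read `drbdadm status` output on stdin and succeed only
# when an UpToDate state is reported and no Inconsistent state remains.
# Hypothetical helper - pipe real output into it:
#   drbdadm status test | sync_complete
sync_complete() {
    awk '/Inconsistent/ { bad = 1 }
         /UpToDate/    { ok  = 1 }
         END { exit !(ok && !bad) }'
}
```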

Step 7: Testing the DRBD Setup

Finally, we need to check that the DRBD device works as it should for storing replicated data. Remember that we used an empty disk volume, so we must create a file system on the device and mount it to check whether we can use it to store replicated data.

Create a file system on the device with the following command, on the host where we started the initial full sync (where the resource has the primary role):

# mkfs -t ext4 /dev/drbd0

Creating a file system on the DRBD volume

Then mount it as shown (you can give the mount point an appropriate name):

# mkdir -p /mnt/DRDB_PRI/
# mount /dev/drbd0 /mnt/DRDB_PRI/

Now copy or create some files at the mount point above and list them with ls -l:

# cd /mnt/DRDB_PRI/
# ls -l 

Listing the contents of the primary DRBD volume

Next, unmount the device (make sure the mount is not in use, and change out of the directory after unmounting to avoid errors), then change the host's role from primary to secondary:

# umount /mnt/DRDB_PRI/
# cd
# drbdadm secondary test

Make the other node (where the resource has the secondary role) primary, then mount the device on it and list the contents of the mount point. If the setup works correctly, all the files stored on the volume should be there:

# drbdadm primary test
# mkdir -p /mnt/DRDB_SEC/
# mount /dev/drbd0 /mnt/DRDB_SEC/
# cd /mnt/DRDB_SEC/
# ls -l

Verifying the DRBD setup on the secondary node

For more information, see the help pages for the administration tools:

# man drbdadm
# man drbdsetup
# man drbdmeta

Further reading: the DRBD User's Guide.

Summary

DRBD is extremely flexible and versatile, making it a storage replication solution suitable for adding HA to almost any application. In this article, we showed you how to install DRBD on CentOS 7 and briefly demonstrated how to use it for storage replication. Feel free to share your thoughts with us using the feedback form below.


Source: habr.com
