Create a 3 Node DRBD 9 Cluster Using DRBD Manage
There are many new features that you can make use of when building High Availability Clusters with DRBD 9, and DRBD Manage is one of the great new tools at your disposal. In this blog we show how to create a 3 Node DRBD 9 cluster using DRBD Manage. We will also make use of auto-promote, whereby a node is automatically promoted to Primary when it accesses a resource. I will be using DRBD 9 from the LINBIT repositories, as many distributions still ship with DRBD 8, which supports neither auto-promote nor DRBD Manage.
- Ensure all nodes have accurate time with NTP
- Ensure all nodes have resolvable host names
- The LINBIT Repositories are created on each node
- SSH Public Key Authentication between one node and all others
- For the purposes of the lab demo we have disabled the host-based firewall and set SELinux to permissive
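On a CentOS/RHEL 7 lab system, the prerequisites above can be satisfied with commands along these lines (the node name alice is one of the nodes we enroll later; adjust the node names and the time service to suit your distribution):
# yum install -y chrony && systemctl enable --now chronyd
# systemctl disable --now firewalld
# setenforce 0
# ssh-copy-id root@alice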
On each system we need to install DRBD:
# yum install -y drbd drbdmanage kmod-drbd
The package drbdmanage is the cool new tool that really simplifies management of your DRBD Cluster.
We also need to create an LVM Volume Group on each host. DRBD Manage can manage both the cluster and the underlying block devices. This saves a lot of work, but we do need to create the initial Volume Group on each node ourselves. We use the default volume group name that is set in the file /etc/drbdmanaged.conf. Note that vgcreate needs at least one physical volume to build the group from; substitute your own backing block device for /dev/sdb:
# vgcreate drbdpool /dev/sdb
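You can verify that the Volume Group exists on each node before continuing:
# vgs drbdpool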
Initialize the Cluster
This needs to be performed on just one node and will create the control logical volumes that hold the configuration of the cluster. It will also enroll the node that it is run upon into the cluster.
# drbdmanage init <ip address of the interface that the cluster should be created on>
In the video we issue the command:
# drbdmanage init 192.168.56.11
We can view the nodes in the cluster with:
# drbdmanage list-nodes
Enroll Nodes to the Cluster
We now need to add additional nodes to the cluster. We run the commands on the host that has SSH Public Key Authentication access to the other nodes.
The format of the command is as follows:
drbdmanage add-node <node name> <node ip address>
We issue the following commands in the demo:
# drbdmanage add-node alice 192.168.56.12
# drbdmanage add-node dimitry 192.168.56.13
We should now be able to see all 3 nodes in the output of:
# drbdmanage list-nodes
Add Cluster Resource
We have now shown you how to create a 3 Node DRBD 9 Cluster using DRBD Manage, but we also need to add some resources. This becomes very simple with DRBD 9 and DRBD Manage, as we need to be less concerned about the underlying Logical Volumes; DRBD Manage will create them for us. We will create the Cluster Resource web with a corresponding LV and deploy the resource to the 3 nodes.
# drbdmanage add-resource web
# drbdmanage add-volume web 200MB
# drbdmanage deploy-resource web 3
It is as simple as that. We write to the control volumes, which are replicated to the nodes, and each node then creates the logical volume. However, we can simplify this further by condensing it into a single command:
# drbdmanage add-volume web 200MB --deploy 3
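Once the resource is deployed, you can check its replication state on any node with drbdadm, which ships with drbd-utils:
# drbdadm status web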
Format and use the Volume
If we use the command:
# drbdmanage list-volumes
The output will show the details of the volumes that we have. We can use the Minor Number of a volume to identify the device that we need to format. For example, a Minor Number of 100 corresponds to the DRBD device /dev/drbd100:
# mkfs.xfs /dev/drbd100
We only need to format this on one node and it will be replicated to the other nodes. We also do not need to promote the node we work on to Primary, as auto-promote handles this for us.
Now that it is formatted we can mount it on a node and add data. The data is replicated immediately to the other nodes, so we have the high availability of the data that we require, with no shared storage to go wrong or be very costly.
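As a sketch, on one node (the mount point /mnt/web is just an example):
# mkdir -p /mnt/web
# mount /dev/drbd100 /mnt/web
# echo 'hello from DRBD' > /mnt/web/index.html
# umount /mnt/web
With auto-promote, only one node can be Primary for the resource at a time, so unmount on the first node before mounting the same device on another node; the file written above will be there.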
The video steps you through the process, where you will be able to see clearly how to create a 3 node DRBD 9 Cluster using DRBD Manage.