Set up clustered LVM as shared VM storage for KVM
This HowTo describes how to set up a shared storage space for VM virtual disks using clustered LVM (CLVMD). CLVMD (Clustered LVM Daemon) is a service which runs on top of regular LVM and uses the "CMAN" HA framework for the cluster communication. This howto explains the configuration of 2 (or more) additional bare-metal systems used as KVM virtualization hosts which are configured with a shared clustered LVM volume group. For this howto we are using the CentOS Linux distribution for the additional systems.
Requirements
- Three physical servers. Alternatively, openQRM itself can be installed within a virtual machine
- at least 1 GB of memory
- at least 100 GB of disk space
- VT (Virtualization Technology) enabled in the systems' BIOS for the KVM host systems so they can run HVM Virtual Machines later
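Whether the CPU provides the required virtualization extensions can be checked from a running Linux system. As a quick sketch (the flag is "vmx" on Intel and "svm" on AMD CPUs), a non-zero count means the extensions are present:
egrep -c '(vmx|svm)' /proc/cpuinfo
Please notice that this only reflects the CPU capability. If VT is disabled in the BIOS, loading the KVM module will typically log "kvm: disabled by bios" in dmesg.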
Install openQRM on Debian
Install a minimal Debian on a physical server. Then install and initialize openQRM 5.1.
A detailed howto about this initial starting point is available at Install openQRM on Debian. For this howto we have used the same openQRM server as in the Virtualization with KVM and openQRM on Debian howto as the starting point.
Install CentOS on the KVM Hosts
As the first step, please install the latest CentOS on two (or more) physical systems dedicated as KVM virtualization hosts. Partition the disk as you like and use a minimal package installation. openQRM will later automatically fetch all further needed package dependencies. Please configure both systems with a static IP address from the openQRM management network. After the installation, reboot and ssh into the 2 systems.
For this howto we are going to use "clvmkvm1" as hostname with the IP address 192.168.178.135 for the first system and "clvmkvm2" as hostname with the IP address 192.168.178.136 for the second system.
- Please notice!
After the CentOS installation please make sure to enable the additional "rpmforge" and "epel" package repositories. Please see http://wiki.centos.org/AdditionalResources/Repositories/RPMForge and http://www.rackspace.com/knowledge_center/article/installing-rhel-epel-repo-on-centos-5x-or-6x for how to enable those 2 package repositories for CentOS.
Post-configuration on the CentOS KVM Hosts
As the first step after the OS installation please configure one (or more) network bridges. On both systems please run:
yum install bridge-utils
to install the bridge-utils.
In /etc/sysconfig/network-scripts/ please adapt ifcfg-eth0 as follows:
# Interface for bridge br0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=br0
Then create an ifcfg-br0 as follows (here for example on the first system):
DEVICE=br0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.178.135
NETMASK=255.255.255.0
GATEWAY=192.168.178.1
DELAY=0
HELLO=2
MAXAGE=12
STP=no
TYPE=Bridge
- Please notice!
For the second system please use IPADDR=192.168.178.136
After that please reboot both systems to activate the new network configuration. It may be sufficient to restart the network service instead of rebooting.
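To verify that the bridge came up as expected you can use brctl; the bridge id shown below is only an example value and will differ on your systems:
[root@clvmkvm1 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.001122334455       no              eth0
Also recheck with "ifconfig br0" that br0 now carries the static IP address.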
- Please notice!
If you plan to attach further external networks to your VMs, please set up bridges to the external network(s) in the same way as for the openQRM management network described above.
Now please configure /etc/hosts on both systems and add the following lines:
192.168.178.135 clvmkvm1
192.168.178.136 clvmkvm2
This is to make sure the clustered systems know each other by hostname.
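As a quick cross-check of the name resolution, run on the first system (and the equivalent with "clvmkvm1" on the second one):
[root@clvmkvm1 ~]# ping -c 1 clvmkvm2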
- Please notice!
Please disable SELinux by adapting the /etc/selinux/config to:
SELINUX=disabled
Further make sure to also disable the iptables and ip6tables firewalls by running:
chkconfig --del iptables
chkconfig --del ip6tables
Now please reboot the 2 systems to deactivate SELinux and the firewall.
To verify that SELinux is really disabled please run:
selinuxenabled
echo $?
It should return "1" now (disabled).
To verify that the iptables firewall is really disabled please run:
iptables -L
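With the firewall disabled, the output should show three empty chains with an ACCEPT policy, similar to:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination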
Connect the SAN/iSCSI target to both systems
The configuration details for connecting a SAN and/or iSCSI device can be different in your setup! For this howto we are using a simple iSCSI target created with "ietd" without any CHAP authentication on the openQRM server system. The easiest way is to use the "iscsi-storage" plugin in openQRM to create such an iSCSI target.
On both systems please install the "iscsi-initiator-utils":
yum install iscsi-initiator-utils
Now discover the iSCSI target:
[root@clvmkvm1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.178.5:3260
192.168.178.5:3260,1 clvmsan 192.168.178.5
The iSCSI discovery action provides the exact target name and record to use for the iSCSI target login. In our example the target name is "clvmsan":
iscsiadm -m node -T clvmsan -p 192.168.178.5:3260 --login
Now the iSCSI target is connected and appears in /proc/partitions as the new device "sdb":
[root@clvmkvm1 ~]# cat /proc/partitions
major minor  #blocks  name

   8        0  312571224 sda
   8        1     512000 sda1
   8        2  307200000 sda2
   8        3     204800 sda3
   8       16   51200000 sdb
 253        0    2048000 dm-0
[root@clvmkvm1 ~]#
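To make sure the target is logged in again automatically after a reboot, you can additionally set the startup mode of the recorded node to automatic and enable the iscsi service (adapt the portal and target name to your setup):
iscsiadm -m node -T clvmsan -p 192.168.178.5:3260 --op update -n node.startup -v automatic
chkconfig iscsi on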
Please create a partition on the iSCSI target device using fdisk. Set the type of the partition to "8e" (LVM). It should look similar to the following fdisk output:
[root@clvmkvm1 ~]# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 52.4 GB, 52428800000 bytes
64 heads, 32 sectors/track, 50000 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa60f79fa
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       50000    51199984   8e  Linux LVM
Command (m for help):
Now please set up the new partition as a physical volume for LVM and create an LVM volume group. For this howto we have used "clvmsan" as the LVM volume group name:
pvcreate /dev/sdb1
vgcreate clvmsan /dev/sdb1
The newly created LVM volume group will now appear on both systems, but it is not yet clustered.
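In case the new volume group does not show up on the second system right away, re-reading the partition table and rescanning LVM there may help (partprobe is part of the "parted" package):
[root@clvmkvm2 ~]# partprobe /dev/sdb
[root@clvmkvm2 ~]# pvscan
[root@clvmkvm2 ~]# vgs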
Adapt the LVM configuration for clustering
On both systems please edit /etc/lvm/lvm.conf and set the following parameters:
locking_type = 3
fallback_to_clustered_locking = 0
fallback_to_local_locking = 0
After that, all LVM commands will fail because the underlying LVM cluster is not yet started.
Setup the LVM cluster
On both systems please install the "lvm2-cluster" and "cman" packages:
yum install lvm2-cluster cman
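As an alternative to editing /etc/lvm/lvm.conf by hand, the lvm2-cluster package also ships the "lvmconf" helper which switches LVM to the clustered locking type. If you use it, please still verify that the three parameters from the previous section are set as shown:
lvmconf --enable-cluster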
To configure the cluster topology please create a /etc/cluster/cluster.conf configuration file, identical on both nodes, as follows:
<cluster name="clvmkvm" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="clvmkvm1" nodeid="1">
    </clusternode>
    <clusternode name="clvmkvm2" nodeid="2">
    </clusternode>
  </clusternodes>
</cluster>
- Please notice!
Since a 2-node cluster cannot form a quorum with only 2 votes, please make sure to have the second line (two_node="1" expected_votes="1") configured as in the above example. Clusters with 3 or more nodes do not need this special configuration option.
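Before starting the cluster it can be useful to validate the configuration file. On CentOS 6 the cluster tools provide a validation command for this; it should report that the configuration validates:
[root@clvmkvm1 ~]# ccs_config_validate
Configuration validates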
Now please restart the "cman" service on both systems to activate the cluster:
/etc/init.d/cman restart
To check the cluster status please use the "cman_tool":
[root@clvmkvm1 ~]# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M    312   2014-02-20 11:14:16  clvmkvm1
   2   M    324   2014-02-20 11:15:45  clvmkvm2
[root@clvmkvm1 ~]#
Now please restart the "clvmd" service on both systems to activate the clustered volume group:
/etc/init.d/clvmd restart
Running the regular "vgs" command shows the clustered volume group. Please notice the 'c' attribute, which stands for "clustered":
[root@clvmkvm1 ~]# vgs
  VG      #PV #LV #SN Attr   VSize  VFree
  clvmsan   1   1   0 wz--nc 48.82g 46.87g
[root@clvmkvm1 ~]#
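To make sure the cluster and the clustered locking also come up after a reboot, please enable both services on both systems:
chkconfig cman on
chkconfig clvmd on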
The regular LVM commands can be used to manage the now clustered LVM volume group. The CLVMD service will make sure to use a clustered locking mechanism and to distribute the LVM metadata across the cluster nodes.
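For example, creating a test volume on the first node (the volume name "testvol" and its size are just examples) makes it immediately visible on the second node:
[root@clvmkvm1 ~]# lvcreate -L 1G -n testvol clvmsan
[root@clvmkvm2 ~]# lvs clvmsan
The test volume can be removed again with "lvremove /dev/clvmsan/testvol".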
Integrate the VM Hosts into openQRM
Copy (scp) the openqrm-local-server integration tool from the openQRM server to the 2 systems dedicated for the VM Hosts:
scp /usr/share/openqrm/plugins/local-server/bin/openqrm-local-server 192.168.178.135:
scp /usr/share/openqrm/plugins/local-server/bin/openqrm-local-server 192.168.178.136:
Then log in (ssh) to each system and run the openqrm-local-server tool with the 'integrate' parameter. On the first system:
./openqrm-local-server integrate -u openqrm -p openqrm -q 192.168.178.5 -n clvmkvm1 -i br0 -s http
and on the second system:
./openqrm-local-server integrate -u openqrm -p openqrm -q 192.168.178.5 -n clvmkvm2 -i br0 -s http
Here is how it looks on the first system's terminal console:
root@clvmkvm1:~# chmod +x openqrm-local-server
root@clvmkvm1:~# ./openqrm-local-server
Usage : ./openqrm-local-server integrate -u <openqrm-username> -p <openqrm-password> -q <openqrm-server-ip> [ -n <hostname> ] [ -i <bridge-interface> ] [ -s <http|https> ]
./openqrm-local-server remove -u <openqrm-username> -p <openqrm-password> -q <openqrm-server-ip> [ -n <hostname> ] [ -s <http|https> ]
root@clvmkvm1:~# ./openqrm-local-server integrate -u openqrm -p openqrm -q 192.168.178.5 -n clvmkvm1 -i br0 -s http
Integrating system to openQRM-server at 192.168.178.5
-> could not find dropbear. Trying to automatically install it ...
Reading package lists... Done
Building dependency tree
........... (more output and automatic package installation)
root@clvmkvm1:~#
The "local-server" integration automatically adds the 2 systems to openQRM and creates for a each system a "server" object. Please edit the "server" object of both systems and set the "Virtualization" type to "KVM Host". This will automatically create 2 new "storage" objects in openQRM which can be used to manage the clustered LVM VM disk space.
- Please notice!
Please notice that LVM snapshots are NOT supported for clustered LVM! In case you are using the openQRM IaaS cloud, please adapt the following cloud-deployment hook for KVM. Edit:
/usr/share/openqrm/plugins/kvm/web/openqrm-kvm-lvm-deployment-cloud-hook.php
and change the line:
$image_clone_cmd="$OPENQRM_SERVER_BASE_DIR/openqrm/plugins/kvm/bin/openqrm-kvm snap -n ".$image_location_name." -v ".$volume_group." -s ".$image_clone_name." -m ".$disk_size." -t ".$deployment->type." -u ".$openqrm_admin_user->name." -p ".$openqrm_admin_user->password;
to
$image_clone_cmd="$OPENQRM_SERVER_BASE_DIR/openqrm/plugins/kvm/bin/openqrm-kvm clone -n ".$image_location_name." -v ".$volume_group." -s ".$image_clone_name." -m ".$disk_size." -t ".$deployment->type." -u ".$openqrm_admin_user->name." -p ".$openqrm_admin_user->password;
This configures the openQRM IaaS Cloud to use the "clone" command instead of "snap".
- Please notice!
In the upcoming openQRM Enterprise release the "clone" action for KVM has been enhanced to allow automatic resizing from small master images to bigger volumes, e.g. for Cloud deployment. A new "Image cache" feature in 5.1.2 provides the capability to locally cache master images and efficiently clone them from the local cache.
We hope you have enjoyed this openQRM howto sponsored by openQRM Enterprise!