Setup clustered LVM as shared VM storage for KVM

This How-To explains how to set up a shared storage space for VM virtual disks using clustered LVM (CLVMD). CLVMD (the Clustered LVM Daemon) is a service that runs on top of regular LVM and uses the "CMAN" HA framework for the cluster communication. This How-To covers the configuration of two (or more) additional bare-metal systems used as KVM virtualization hosts, configured with a shared clustered LVM volume group. For this How-To we are using the CentOS Linux distribution for the additional systems.

Requirements

  • Three physical servers. Alternatively, openQRM itself can be installed within a virtual machine
  • At least 1 GB of memory
  • At least 100 GB of disk space
  • VT (Virtualization Technology) enabled in the system BIOS of the KVM host systems so they can run HVM virtual machines later
  • Minimal Debian installation on a physical server
  • openQRM 5.1 or later, already initialised

NOTE: For this How-To, we will be using the same openQRM server used in the Virtualisation with KVM and openQRM on Debian How-To, so it is recommended to complete that How-To before following this one. If you do not have openQRM installed yet, follow this How-To: Install openQRM on Debian.

Install CentOS on the KVM Hosts

Install the latest CentOS on two (or more) physical systems dedicated to the KVM virtualization hosts. Partition the disks as you like and use a minimal package installation; openQRM will later automatically fetch all further needed package dependencies. Configure both systems with a static IP address from the openQRM management network. After the installation, reboot and SSH into the two systems.

For this How-To we are going to use "clvmkvm1" as hostname with the IP address 192.168.178.135 for the first system and "clvmkvm2" as hostname with the IP address 192.168.178.136 for the second system.
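
If the hostnames were not already set during installation, they can be set as in the following sketch (assuming CentOS 6 style configuration; adjust the hostname and file locations for your release):

# on the first system
hostname clvmkvm1
# make the hostname persistent across reboots
sed -i 's/^HOSTNAME=.*/HOSTNAME=clvmkvm1/' /etc/sysconfig/network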

NOTE:

After the CentOS installation, make sure to enable the additional "rpmforge" and "epel" package repositories. Please check http://wiki.centos.org/AdditionalResources/Repositories/RPMForge and http://www.rackspace.com/knowledge_center/article/installing-rhel-epel-repo-on-centos-5x-or-6x for how to enable those two package repositories for CentOS.
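
As a rough sketch (assuming a CentOS 6.x installation; the exact release packages and URLs may have changed since this How-To was written, so follow the links above for current instructions), enabling and verifying the repositories could look like this:

# EPEL is normally available as a release package from the CentOS "extras" repository
yum install epel-release
# RPMForge is enabled by installing its release RPM downloaded from the page linked above
rpm -Uvh rpmforge-release-*.rpm
# verify that both repositories show up
yum repolist | grep -Ei 'epel|rpmforge'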

Post configuration on the CentOS KVM Hosts

As the first step after the OS installation, configure one (or more) network bridges. On both systems run:

yum install bridge-utils

to install the bridge-utils.

In /etc/sysconfig/network-scripts/ please adapt ifcfg-eth0 as follows:

# Interface for bridge br0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=br0

Then create an ifcfg-br0 as follows (here, for example, on the first system):

DEVICE=br0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.178.135
NETMASK=255.255.255.0
GATEWAY=192.168.178.1
DELAY=0
HELLO=2
MAXAGE=12
STP=no
TYPE=Bridge

NOTE:

For the second system please use IPADDR=192.168.178.136

After that, reboot both systems to activate the new network configuration. It may be sufficient to restart the network service instead of rebooting.
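
If you restart the network service instead of rebooting, a quick check that the bridge came up correctly could look like this (standard CentOS 6 init scripts and tools assumed):

service network restart
# br0 should exist and eth0 should be attached to it
brctl show
# br0 should carry the host IP address
ip addr show br0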

If you plan to attach further external networks to your VMs, please set up bridges to the external network(s) in the same way as for the openQRM management network described above.

Now please configure /etc/hosts on both systems and add the following lines:

192.168.178.135    clvmkvm1
192.168.178.136    clvmkvm2

This is to make sure the clustered systems know each other by hostname.

NOTE:

Please disable SELinux by adapting the /etc/selinux/config to:

SELINUX=disabled
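
Optionally, to put SELinux into permissive mode right away (the config change above only takes full effect after a reboot), you can additionally run:

setenforce 0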

Also make sure to disable the iptables and ip6tables firewalls by running:

chkconfig --del iptables

chkconfig --del ip6tables

Now reboot the two systems to deactivate SELinux and the firewall.

To verify that SELinux is really disabled, run:

 selinuxenabled 
 echo $? 

It should return "1" now (disabled).

To verify that the iptables firewall is really disabled, run:

 iptables -L 

Connect the SAN/iSCSI-target to both systems

The configuration details for connecting a SAN and/or iSCSI device can be different in your setup! For this How-To we are using a simple iSCSI-target created with "ietd" without any CHAP authentication on the openQRM server system. The easiest way is to use the "iscsi-storage" plugin in openQRM to create such an iSCSI target.

On both systems please install the "iscsi-initiator-utils".

yum install iscsi-initiator-utils
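
It can also make sense to enable the iSCSI initiator services at boot time so that the target is reconnected automatically after a reboot (service names as used on CentOS 6; adjust if they differ on your release):

chkconfig iscsid on
chkconfig iscsi on
service iscsid start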

Now discover the iSCSI-target.

[root@clvmkvm1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.178.5:3260

192.168.178.5:3260,1 clvmsan 192.168.178.5

The iSCSI discovery action provides the exact target name and record to use for the iSCSI target login. In our example the target name is "clvmsan":

iscsiadm -m node -T clvmsan -p 192.168.178.5:3260 --login

Now the iSCSI target is connected and appears in /proc/partitions as the new device "sdb":

[root@clvmkvm1 ~]# cat /proc/partitions
major minor #blocks name
8 0 312571224 sda
8 1 512000 sda1
8 2 307200000 sda2
8 3 204800 sda3
8 16 51200000 sdb
253 0 2048000 dm-0
[root@clvmkvm1 ~]#

Please create a partition on the iSCSI target device using fdisk. Set the type of the partition to "8e" (LVM). It should look similar to the following fdisk output:

[root@clvmkvm1 ~]# fdisk /dev/sdb

Command (m for help): p

Disk /dev/sdb: 52.4 GB, 52428800000 bytes
64 heads, 32 sectors/track, 50000 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa60f79fa

Device Boot Start End Blocks Id System
/dev/sdb1 1 50000 51199984 8e Linux LVM

Command (m for help):

Now set up the new partition as a physical volume for LVM and create an LVM volume group. For this How-To we have used "clvmsan" as the LVM volume group name:

pvcreate /dev/sdb1

vgcreate clvmsan /dev/sdb1

The newly created LVM volume group will now appear on both systems, but it is not yet clustered.
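
To confirm this on the second node (assuming the iSCSI target has been discovered and logged in there in the same way), a quick check could be:

[root@clvmkvm2 ~]# pvscan
[root@clvmkvm2 ~]# vgs clvmsan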

Adapt the LVM configuration for clustering

On both systems please edit /etc/lvm/lvm.conf and set the following parameters:

locking_type = 3

fallback_to_clustered_locking = 0

fallback_to_local_locking = 0

After that, all LVM commands will fail because the underlying LVM cluster is not yet started.

Setup the LVM cluster

On both systems please install the "lvm2-cluster" and "cman" packages.

yum install lvm2-cluster cman

To configure the cluster topology, please create an /etc/cluster/cluster.conf configuration file, identical on both nodes, like the following:

<cluster name="clvmkvm" config_version="1">

<cman two_node="1" expected_votes="1"/>

<clusternodes>

<clusternode name="clvmkvm1" nodeid="1">

</clusternode>

<clusternode name="clvmkvm2" nodeid="2">

</clusternode>

</clusternodes>

</cluster>

NOTE:

Since a two-node cluster cannot form a quorum with only two votes, please make sure the second line (the two_node/expected_votes setting) is configured as in the example above. Clusters with 3 or more nodes do not need this special configuration option.

Now please restart the "cman" service on both systems to activate the cluster:

/etc/init.d/cman restart

To check the cluster status please use the "cman_tool":

[root@clvmkvm1 ~]# cman_tool nodes
Node Sts Inc Joined Name
1 M 312 2014-02-20 11:14:16 clvmkvm1
2 M 324 2014-02-20 11:15:45 clvmkvm2
[root@clvmkvm1 ~]#

Now please restart the "clvmd" service on both systems to activate the clustered volume group:

/etc/init.d/clvmd restart
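
You will usually also want both cluster services to start automatically at boot time; on CentOS 6 this would typically be done with:

chkconfig cman on
chkconfig clvmd on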

Running the regular "vgs" command shows the clustered volume group. Please notice the 'c' attributes which stands for "clusterd"

[root@clvmkvm1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
clvmsan 1 1 0 wz--nc 48.82g 46.87g
[root@clvmkvm1 ~]#

The regular LVM commands can be used to manage the now clustered LVM volume group. The CLVMD service makes sure a clustered locking mechanism is used and distributes the LVM metadata across the cluster nodes.
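
As a quick test of the clustered locking (the logical volume name "testvm" is just an example), you can create a volume on one node, check that it is immediately visible on the other node, and remove it again:

[root@clvmkvm1 ~]# lvcreate -L 1G -n testvm clvmsan
[root@clvmkvm2 ~]# lvs clvmsan
[root@clvmkvm1 ~]# lvremove /dev/clvmsan/testvm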

Integrate the VM Hosts into openQRM

Copy (scp) the openqrm-local-server integration tool from the openQRM server to the 2 systems dedicated for the VM Hosts:

scp /usr/share/openqrm/plugins/local-server/bin/openqrm-local-server 192.168.178.135:

scp /usr/share/openqrm/plugins/local-server/bin/openqrm-local-server 192.168.178.136:

Then login (ssh) to each system and run the openqrm-local-server tool with the 'integrate' parameter. On the first system:

./openqrm-local-server integrate -u openqrm -p openqrm -q 192.168.178.5 -n clvmkvm1 -i br0 -s http

and on the second system:

./openqrm-local-server integrate -u openqrm -p openqrm -q 192.168.178.5 -n clvmkvm2 -i br0 -s http

Here is how it looks on the first system's terminal console:

root@clvmkvm1:~# chmod +x openqrm-local-server
root@clvmkvm1:~# ./openqrm-local-server
Usage : ./openqrm-local-server integrate -u -p -q [ -n ] [-i ] [-s ]
        ./openqrm-local-server remove -u -p -q [ -n ] [-s ]
root@clvmkvm1:~# ./openqrm-local-server integrate -u openqrm -p openqrm -q 192.168.178.5 -n clvmkvm1 -i br0 -s http
Integrating system to openQRM-server at 192.168.178.5
-> could not find dropbear. Trying to automatically install it ...
Reading package lists... Done
Building dependency tree
........... (more output and automatic package installation)
root@clvmkvm1:~#

The "local-server" integration automatically adds the 2 systems to openQRM and creates for a each system a "server" object. Please edit the "server" object of both systems and set the "Virtualization" type to "KVM Host". This will automatically create 2 new "storage" objects in openQRM which can be used to manage the clustered LVM VM disk space.

NOTE:

Please notice that LVM snapshots are NOT supported for clustered LVM! In case you are using the openQRM IaaS cloud, please adapt the following cloud-deployment hook for KVM. Edit:

/usr/share/openqrm/plugins/kvm/web/openqrm-kvm-lvm-deployment-cloud-hook.php

and change the line:

$image_clone_cmd="$OPENQRM_SERVER_BASE_DIR/openqrm/plugins/kvm/bin/openqrm-kvm snap -n ".$image_location_name." -v ".$volume_group." -s ".$image_clone_name." -m ".$disk_size." -t ".$deployment->type." -u ".$openqrm_admin_user->name." -p ".$openqrm_admin_user->password;

to

$image_clone_cmd="$OPENQRM_SERVER_BASE_DIR/openqrm/plugins/kvm/bin/openqrm-kvm clone -n ".$image_location_name." -v ".$volume_group." -s ".$image_clone_name." -m ".$disk_size." -t ".$deployment->type." -u ".$openqrm_admin_user->name." -p ".$openqrm_admin_user->password;

This configures the openQRM IaaS Cloud to use the "clone" command instead of "snap".

NOTE:

In the upcoming openQRM Enterprise release, the "clone" action for KVM has been enhanced to allow automatic resizing from small master images to bigger volumes, e.g. for Cloud deployment. A new "Image cache" feature in 5.1.2 provides the capability to cache master images locally and clone them efficiently from the local cache.

Congratulations!!

You have successfully completed this How-To!