Virtualisation with VMware ESXi and openQRM on Debian
This How-To describes how to create and manage VMware ESX Virtual Machines on Debian with openQRM. It requires an additional system dedicated to the VMware ESX Host and shows how to integrate that ESX Host into an existing openQRM environment.
Requirements
- Two physical servers. Alternatively, openQRM itself can be installed within a Virtual Machine
- at least 1 GB of memory
- at least 100 GB of disk space
- VT (Virtualization Technology) enabled in the BIOS of the VMware ESX Host system so it can run HVM Virtual Machines later
- A minimal Debian installation on a physical Server
- openQRM installed
NOTE A detailed How-To about the above initial starting point is available at 'Install openQRM on Debian'.
For this How-To we use the same openQRM server as in the How-Tos 'Install openQRM on Debian', 'Virtualisation with KVM and openQRM on Debian', 'Virtualisation with Xen and openQRM on Debian' and 'Automated Amazon EC2 Cloud deployments with openQRM on Debian'. That means this How-To adds functionality to an existing openQRM setup, showing that openQRM manages all the different virtualization and deployment types seamlessly.
Set a custom Domain name
NOTE Only needed if you haven't started with one of the previous How-Tos!
As the first step after the openQRM installation and initialization it is recommended to configure a custom domain name for the openQRM management network. In this use case the openQRM Server has the private Class C IP address 192.168.178.5/255.255.255.0, based on the previous How-To 'Install openQRM 5.1 on Debian Wheezy' (URL). Since the openQRM management network is a private one, any syntactically correct domain name can be used, e.g. 'my123cloud.net'. The default domain name pre-configured in the DNS plugin is 'oqnet.org'.
Best practice is to use the 'openqrm' command line utility to set up the domain name for the DNS plugin. Log in to the openQRM Server system and run the following command as 'root' in a terminal:
/usr/share/openqrm/bin/openqrm boot-service configure -n dns -a default -k OPENQRM_SERVER_DOMAIN -v my123cloud.net
The output of the above command will look like this:
root@debian:~# /usr/share/openqrm/bin/openqrm boot-service configure -n dns -a default -k OPENQRM_SERVER_DOMAIN -v my123cloud.net
Setting up default Boot-Service Konfiguration of plugin dns
root@debian:~#
To (re)view the current configuration of the DNS plugin run:
/usr/share/openqrm/bin/openqrm boot-service view -n dns -a default
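Once the dns plugin is enabled (see the next section) you can optionally verify the new domain with a simple lookup against the openQRM server. This is only a sanity check and an assumption about your setup: it supposes the dns plugin runs a name server on 192.168.178.5 and has registered the openQRM host (here named 'debian') in the new domain, and it requires the 'dnsutils' package for the 'host' command. Adjust the hostname to your environment:
host debian.my123cloud.net 192.168.178.5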
Enabling Plugins
In the openQRM Plugin Manager enable and start the following plugins in the sequence below:
- dns plugin - type Networking
- dhcpd plugin - type Networking
- tftpd plugin - type Networking
- network-manager plugin - type Networking
- local-server plugin - type Misc
- device-manager plugin - type Management
- novnc plugin - type Management
- sshterm plugin - type Management
- linuxcoe plugin - type Deployment
- nfs-storage plugin - type Storage
- vmware-esx plugin - type Virtualisation
Hint: You can use the filter in the plugin list to find plugins by their type easily!
Install VMware ESX on the second system
Install
Network configuration
Install the VMware vSphere Perl SDK on the openQRM server
Log in to http://www.vmware.com and download the latest VMware vSphere Perl SDK.
Copy the downloaded file to the openQRM server and run the following commands to unpack and install it:
dpkg --add-architecture i386
apt-get update
apt-get install libarchive-zip-perl libcrypt-ssleay-perl libclass-methodmaker-perl libdata-dump-perl libsoap-lite-perl perl-doc libssl-dev libuuid-perl liburi-perl libxml-libxml-perl
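Optionally, confirm that the i386 architecture was really added before installing the SDK (a quick sanity check, not part of the original steps):
dpkg --print-foreign-architectures
The command should list 'i386'.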
NOTE
During the VMware SDK installation, the VMware installer only checks for Ubuntu in the release file.
To make the installation succeed on Debian, append the required parameter to /etc/os-release by running the following command as root:
echo "UBUNTU=ubuntu" >> /etc/os-release
Then unpack the SDK tarball and run the installer:
tar -xzvf VMware-vSphere-Perl-SDK-5.1.0-780721.x86_64.tar.gz
cd vmware-vsphere-cli-distrib/
export http_proxy=
export ftp_proxy=
./vmware-install.pl
During the SDK installation accept the licence and the default parameters.
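To quickly check that the SDK's Perl modules are installed and loadable, the following one-liner can be used. It is only a sanity check that tries to load the SDK runtime module; if the module cannot be found, Perl prints an error instead:
perl -MVMware::VIRuntime -e 'print "vSphere Perl SDK loads OK\n"'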
NOTE
Newer libwww-perl versions do not allow connections to services with self-signed SSL certificates. While older versions include a switch to allow this via an environment variable, newer versions do not include this switch any more. Therefore the VMware Perl API, which uses libwww-perl, may bail out with the error "server version unavailable".
To fix this, run the following commands as root on the openQRM server system to install a supported libwww-perl package directly from the CPAN repository:
apt-get remove libwww-perl
rm -rf .cpan
perl -MCPAN -e shell
cpan> install GAAS/libwww-perl-5.837.tar.gz
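Afterwards you can confirm which libwww-perl version is now active (the CPAN install above should result in 5.837):
perl -MLWP -e 'print "LWP version: $LWP::VERSION\n"'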
Autodiscover the VMware ESX Host
Go to Plugins -> Virtualization -> ESX -> Discovery and click on 'Autodiscover ESX Hosts'
The autodiscovery now searches the openQRM management network for existing ESX Hosts.
During the autodiscovery the event list shows an active event to inform about the action running in the background.
Here is a screenshot of the discovered ESX Host.
Click on 'Add' to add the ESX Host to the openQRM datacentre repository. In the 'Add' form provide the admin user credentials to manage the ESX Host. When finished, click on 'Submit'.
NOTE
Make sure to provide the exact hostname PLUS domain name in this form! The VMware API strictly requires the correct full FQDN for the Host configuration.
To re-check the exact domain name configuration of the ESX Host, log in to the ESX console.
You can find the configured domain name in the ESX console under Configure Management Network - Custom DNS Suffixes.
Here is the successfully integrated ESX Host.
The integration automatically created a server object of the type ESX Host.
An ESX Storage object was also created automatically.
Create a NFS Datastore
To store Virtual Machine images and configurations, VMware ESX uses 'Datastores'. Several different storage types can be attached to an ESX Host as a Datastore, e.g. NAS, iSCSI or FC SAN. For this How-To a simple NAS/NFS share is used for the initial Datastore on the ESX Host. To create an NFS share for the Datastore we use the 'nfs-storage' plugin. The first step is to create a new storage object of the type 'nfs-deployment'. Go to Datacentre -> Components -> Storage -> Add new Storage.
Provide a name for the storage object, select 'nfs-deployment' as Deployment Type and choose the openQRM server as resource.
Creating the new storage forwards to the storage overview. Click on 'Manage' for the newly created NFS Storage object.
In the Volume overview click on 'Add new Volume'.
Provide a name for the new volume; we choose 'esx_datastore' for this How-To.
Here again is the NFS volume overview with the newly created volume listed. Now click on 'Auth' to authenticate the newly created NFS share against the ESX Host's IP.
Fill in the ESX Host's IP and click on 'Submit'.
Now the NFS volume is correctly authenticated against the ESX Host's IP.
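Optionally you can double-check the export on the openQRM server from a root terminal. This is only a sanity check, assuming the nfs-storage plugin exports the volume under /exports/esx_datastore (the path used later when attaching the Datastore) and restricts it to the ESX Host's IP:
exportfs -v
showmount -e localhost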
We are going to attach this NFS share as a Datastore to the ESX Host later in this How-To. First we continue with creating an automatic-installation profile with LinuxCOE.
Create a LinuxCOE automatic-installation template
NOTE
Only needed if you haven't started with the 'Virtualization with KVM and openQRM 5.1 on Debian Wheezy', 'Virtualization with Xen and openQRM 5.1 on Debian Wheezy' or 'Automated Amazon EC2 Cloud deployments with openQRM 5.1 on Debian Wheezy' How-To!
The LinuxCOE Project provides a useful UI to create automatic-installation ISO images for various Linux distributions, e.g. via preseed, kickstart and autoyast. Those ISO images can then be used to install a Linux distribution fully automatically without any manual interaction. The integration of LinuxCOE in openQRM makes those automatic-installation ISO images automatically available on all Virtualization Hosts (mounted via NFS at /linuxcoe from the openQRM server). This makes it easy to configure a Virtual Machine's installation boot image from the central ISO pool mount point.
NOTE
The LinuxCOE plugin in openQRM comes with a fully automatic setup and pre-configuration of LinuxCOE. Since LinuxCOE is an installation framework, it is recommended to add further custom configuration such as local package mirrors, new distribution data, config files etc. Read more about how to further enhance your LinuxCOE installation at http://linuxcoe.sourceforge.net/#documentation.
The first step is to create a new automatic-installation profile and ISO image.
Go to Plugins -> Deployment -> LinuxCOE -> Create Templates and select a Linux distribution and version for the automatic installation. For this How-To we will use 'Debian Squeeze 64bit'. Leave the hostname input empty since openQRM will take care of this via its dhcpd plugin.
Leave the default settings on the next page of the LinuxCOE wizard.
Next select a Mirror from the list.
Here provide your custom package setup for the automatic-installation.
On the following page leave the default settings.
The summary page of the LinuxCOE wizard allows you to preconfigure a root and a user account. Click on 'Go for it' to create the automatic-installation template.
The ISO image is created. No need to download it since it will be used directly by the VMware ESX Host for a VMware ESX Virtual Machine installation from a central '/linuxcoe-iso' NFS share on the openQRM server.
Go to Plugins -> Deployment -> LinuxCOE -> Template Manager and click on 'Edit'.
Now provide a description for the just-created automatic-installation template.
Here is the list view of the updated automatic-installation template.
Attach Datastore(s) to the ESX Host
Go to Plugins -> Virtualization -> ESX -> Datastore and select the ESX Host.
In the Datastore overview click on 'Add new NAS Datastore'.
Here add the previously created NFS volume as a new Datastore. The path for the NFS share is '/exports/esx_datastore'.
Now click 'Add new NAS Datastore' again. We are going to add the automatically shared LinuxCOE ISO pool to the ESX Host as another Datastore. This Datastore is used to provide ISO images for operating system installations.
To add the LinuxCOE ISO pool, provide a name for the Datastore, the IP address of the openQRM server and '/linuxcoe' as the path.
Here again is the Datastore overview with the two newly created Datastores.
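If you have shell access to the ESX Host enabled, you can cross-check the attached NFS Datastores directly from the ESXi shell (the esxcli namespace below is available on ESXi 5.x):
esxcli storage nfs list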
Configure Networking and VSwitches on the ESX Host
Go to Plugins -> Virtualization -> ESX -> Network and select the ESX Host to add and manage the Virtual Network Switches on the ESX Host.
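Again, if shell access to the ESX Host is enabled, the resulting virtual switch and port group configuration can be verified from the ESXi shell (ESXi 5.x syntax):
esxcli network vswitch standard list
esxcli network vswitch standard portgroup list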
Create a new VMware ESX Virtual Machine
Use openQRM's Server Wizard to add a new VMware ESX Virtual Machine. This Wizard works in the same way for physical systems, KVM VMs, Xen VMs, Citrix XenServer VMs, VMware VMs, LXC VMs and openVZ VMs.
Go to Datacentre -> Server -> Add a new Server
Give the new server a name. The easiest way is to use the 'generate name' button. Also provide a useful description.
In the Resource-Selection click on 'new resource'. A resource in openQRM is a logical generic object which is mapped to a physical system or Virtual Machine of any type.
On the next page you will find a selection of different resource types to create. Choose 'VMware ESX (localboot) Virtual Machine'.
This forwards to the VMware ESX Host selection. Select the VMware ESX Host as the Virtualization Host of the VM.
On the VMware ESX Virtual Machine overview click on 'Add local VM'.
In the VM add form provide a name for the new VM. Again, the easiest way is to use the 'generate name' button. There are lots of different parameters which can be configured, but you can go with most of the defaults. Just make sure the first network card of the VM is connected to the 'Default VM Network'.
Further down in the VM add form, configure the boot sequence of the VM. Select 'iso' and open the file picker by clicking on the 'Browse' button. This opens a small new window listing all ISO images found on the ESX Virtualization Host. Navigate to '/linuxcoe' and select the previously created LinuxCOE automatic-installation ISO image. Notice that the name of the ISO image may be different in your setup.
Click on 'Submit' to create the new VM.
Creating the new VM automatically forwards back into the server wizard with the newly created resource available. Select the new resource and click 'Submit'.
An Image for the new ESX VM was automatically created during the VM creation. Select the VM Image and click on 'Submit'.
Click on 'Submit' to edit the Image description.
Add your Image description.
The last step in the server wizard presents the full configuration and allows you to further set up network, management, monitoring and deployment configuration. Click on 'Submit' to save the server configuration.
The server overview lists the new server, not yet activated. Select the newly created server and click on 'Start'.
Confirm starting the server.
Starting the logical server object actually starts the resource (the VMware ESX VM) with the configured Image (the VM's disk on the Datastore) and triggers additional automatic configuration tasks via plugin hooks. These server start and stop hooks "ask" each activated plugin whether there is "some work to do". For a few examples of how hooks are used in openQRM check the list below; a simplified sketch of the hook idea follows after the list:
- The DNS plugin uses these hooks to automatically add (or remove) the server name in the managed BIND server
- The dhcpd plugin adds the 'hostname option' for the server to its configuration
- The Nagios plugin adds/removes service checks for automatic monitoring
- The Puppet plugin activates configured application recipes to automatically setup and pre-configure services on the VM
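To illustrate the idea, here is a purely hypothetical shell sketch of such a hook; the actual hook names, paths and parameters in openQRM are defined by each plugin and differ from this simplified example:
# illustrative pseudo-hook: openQRM calls each plugin with an event and the server name
EVENT="$1"        # e.g. 'start' or 'stop'
SERVER_NAME="$2"  # the logical server being activated or deactivated
case "$EVENT" in
    start)
        # a plugin like dns would add its configuration for $SERVER_NAME here
        ;;
    stop)
        # ... and remove it again here
        ;;
esac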
Go to Plugins -> Virtualization -> VMware ESX -> VMware ESX VMs and select the VMware ESX Host system.
In the VMware ESX VM overview click on the 'console' button of the VM. This opens a VNC console within your web browser
NOTE You need to deactivate the browser's pop-up blocker for the openQRM website!
To start the automatic installation type 'install' in the VNC console and press ENTER.
The VMware ESX VM is now automatically installing a Debian Linux distribution. Good time for you to grab a coffee!
NOTE After the automatic installation via the attached LinuxCOE ISO image the VM reboots to the install screen again. We now have to re-configure the VM's boot sequence to 'local-boot'. To do this, follow the steps below:
- Stop the VM in the ESX VM overview - Plugins -> Virtualization -> ESX -> VMs -> select ESX Host + Stop VM
- In Plugins -> Virtualization -> VMware ESX -> VMware ESX VMs select the VMware ESX Host and update the VM to boot 'local'
- Now start the VM again - Plugins -> Virtualization -> ESX -> VMs -> select ESX Host + Start VM
Here is a screenshot of the completed Debian installation after setting the boot sequence of the VM to 'local'.
Install the 'openqrm-local-vm-client'
Now it is recommended to install the 'openqrm-local-vm-client' on the freshly installed system. For locally installed Virtual Machines (e.g. kvm(local VM), xen(local VM), lxc(local VM), openvz(local VM)) which have access to the openQRM network, the 'openqrm-local-vm-client' activates the plugin client boot-services to allow further management functionality (e.g. application deployment with Puppet, system statistics with Collectd, etc.). Monitoring and openQRM actions still run on behalf of the VM Host.
To install the 'openqrm-local-vm-client' on the VM, follow the steps below:
1. Copy the 'openqrm-local-vm-client' utility to the running VM
cd /usr/share/openqrm/plugins/local-server/web/local-vm/
scp openqrm-local-vm-client [ip-address-of-the-VM]:/tmp/
2. Then login to the VM
ssh [ip-address-of-the-VM]
3. This prompts for the password which was configured in the LinuxCOE automatic-installation template. Enter the password and execute the 'openqrm-local-vm-client' utility:
/tmp/openqrm-local-vm-client
This will automatically set up the 'openqrm-local-vm-client' in the system init and start it.
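To verify that it is running inside the VM you can check the process list; this is just a generic check and assumes the client keeps a process with this name running:
ps ax | grep [o]penqrm-local-vm-client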
Full control for KVM, Xen, VMware and Amazon EC2 systems within a single management console
Here is the Datacentre Dashboard after we have created the VMware ESX Virtual Machine.
Congratulations!
You have successfully installed VMware ESX Virtualisation on openQRM!