How to build Proxmox tmpfs image

From openQRM
Latest revision as of 08:58, 16 January 2022

Follow the steps below to convert Proxmox VE into a PXE-booted, memory-resident (tmpfs) operating system.

Once you have a running openQRM server you can follow these steps. The process is supported in both the Community and Enterprise versions of openQRM.

You will need the following plugins enabled: dhcpd, tftp, nfs-storage, tmpfs-storage, and atu (optional, available in the Enterprise package).

Pre-built Proxmox VE templates are available for download in the customer portal.


Why is this solution so exciting?

When data centre operators deploy compute nodes, they no longer need network or attached storage to run the node. This solution lets a compute node PXE-boot an operating system into a RAM disk, which then acts as the server's local storage. Being memory resident it is exceptionally fast: system RAM is an order of magnitude faster than NVMe. And if the node loses network connectivity it keeps functioning, because it has already booted and runs just as if it had locally attached storage.


Hold on, this is too good to be true. What are the downsides?

The system is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster retains the PVE configuration, and if the ATU plugin is used the configuration is synchronised to and retained on the openQRM server.


Requirements:

  • openQRM Community or Enterprise (running it in a KVM virtual machine is the suggested option)
  • optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management
  • 64-bit CPU (Intel EM64T or AMD64)
  • PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support
  • Debian 11 Bullseye

Suggested minimum specifications:

  • openQRM Server: 1 GB RAM & 1 CPU
  • Virtual or hardware node (booted via tmpfs): 6-8 GB; 4 GB for tmpfs and 2-4 GB for the OS and services
  • Clustering requires co-ordinated initialisation and configuration backup; the ATU Plugin orchestrates these steps

What is the ATU plugin?

The ATU plugin is available in openQRM Enterprise. It keeps the server's configuration synchronised across reboots and power-loss events. The plugin is open source, written in bash, and controls the start-up sequence, including the configuration and service start ordering that is especially important for Proxmox VE.


Let's Start:

1. Adding a Proxmox Kernel to openQRM:

  1. Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb
  2. Install Kernel locally
  3. Then add the kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values):
    1. openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs
    2. openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor
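To keep the placeholder substitution consistent, the registration command can be assembled from variables and reviewed before running. This is only a sketch; the values mirror the 5.11.22 example above and the default openqrm/openqrm credentials should be changed.

```shell
# Assemble the openqrm kernel add command from your own values first,
# then review it before executing. All values below are the example
# 5.11.22 kernel from this howto -- adjust to the package you installed.
KERNEL_NAME="pve-5.11.22-6"
KERNEL_VER="5.11.22-3-pve"
OPENQRM_UI_USER="openqrm"
OPENQRM_UI_PASS="openqrm"
CMD="openqrm kernel add -n ${KERNEL_NAME} -v ${KERNEL_VER} -u ${OPENQRM_UI_USER} -p ${OPENQRM_UI_PASS} -l / -i initramfs"
echo "${CMD}"   # review the command, then run it: eval "${CMD}"
```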


2. Creating an Image Suitable for TMPFS Boot:

  1. Create the image. To create an image for Proxmox VE (named "proxmox_image") that can be used as a tmpfs image, follow these steps:
    1. apt-get install debootstrap
    2. mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus
    3. debootstrap --arch amd64 bullseye /exports/proxmox_image/ https://deb.debian.org/debian/
    4. mount --bind /dev/pts /exports/proxmox_image/dev/pts
    5. mount --bind /dev/ /exports/proxmox_image/dev/
    6. mount --bind /proc /exports/proxmox_image/proc
    7. mount --make-rprivate /exports/proxmox_image/
    8. mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus
    9. chroot /exports/proxmox_image
    10. apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server
    11. dpkg-reconfigure locales
    12. Follow steps (Start at "Install Proxmox VE") @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye
      1. We do not need to install grub
    13. Set the root password: passwd
    14. (optional) implement noclear for getty/inittab;
      1. mkdir -p /etc/systemd/system/getty@tty1.service.d/
      2. Edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents:

[Service]

TTYVTDisallocate=no

  1. Remember: /etc/hosts needs a valid hostname entry for your IP address (e.g. 192.168.1.50 pve-node1)
    1. This is managed by the ATU plugin
  2. exit chroot, type exit
  3. umount binds;
    1. umount /exports/proxmox_image/dev/pts
    2. umount /exports/proxmox_image/dev
    3. umount /exports/proxmox_image/proc
    4. umount /exports/proxmox_image/var/run/dbus
  4. (optional) If using the ATU Plugin follow these steps;
    1. (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, service management must be done outside the chroot. To list enabled services:
      1. systemctl list-unit-files --root /exports/proxmox_image/  | grep enabled
    2. (if using the ATU plugin) These services are managed by the ATU plugin. Since the plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. systemd does not operate inside a chroot, so point systemctl at the image root as follows:
      1. /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/
      2. /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/
      3. /bin/systemctl disable lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/
      4. /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/
      5. /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service --root /exports/proxmox_image/
      6. /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/
      7. /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/
      8. /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/
      9. /bin/systemctl disable pveproxy.service pvestatd.service --root /exports/proxmox_image/
      10. /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service --root /exports/proxmox_image/
      11. /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/
      12. /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/
      13. /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/
      14. /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/
      15. /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/
      16. /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/
      17. If you have Ceph installed, disable:
        1. /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/
      18. If you have Ganesha installed for NFS, disable:
        1. /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/
      19. /bin/systemctl disable puppet  --root /exports/proxmox_image/
      20. /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server  --root /exports/proxmox_image/
    3. (if using the ATU plugin) Disable these services: pvedaemon, pve-proxy, pve-manager, pve-cluster, cman, corosync, ceph, pvestatd, qemu-server, rrdcached, spiceproxy
  5. Tar the Image;
    1. mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/
    2. cd /exports/proxmox_image
    3. tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .
  6. When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image at 55-60% of the allocated tmpfs volume size (4 GB as below).
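If you prefer a single pass, the per-service disable commands in step 4 above can be consolidated into a loop. This is a sketch, not part of the original procedure: IMAGE_ROOT and the (abbreviated) SERVICES list are assumptions, and the list should be trimmed to what is actually installed in your image.

```shell
# Consolidated disable loop for the ATU deployment case. SERVICES is a
# subset of the units listed in the steps above -- extend or trim it to
# match your image before running on the openQRM server.
IMAGE_ROOT="${IMAGE_ROOT:-/exports/proxmox_image/}"
SERVICES="pve-cluster.service corosync.service pve-guests.service
lvm2-lvmpolld.socket lvm2-monitor.service
lxc.service lxc-net.service lxcfs.service lxc-monitord.service
pveproxy.service pvestatd.service pvedaemon.service
zfs.target zfs-mount.service zfs-share.service"
for s in ${SERVICES}; do
  /bin/systemctl disable "${s}" --root "${IMAGE_ROOT}" 2>/dev/null \
    || echo "note: could not disable ${s} (unit missing or systemctl unavailable)"
done
```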
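The 55-60% sizing guidance above can be checked before tarring with a short sketch. IMAGE_DIR and TMPFS_MB are assumptions matching the paths and the 4 GB volume used in this howto; the script only reports, it changes nothing.

```shell
# Compare the uncompressed image size against 60% of the tmpfs volume.
IMAGE_DIR="${IMAGE_DIR:-/exports/proxmox_image}"
TMPFS_MB="${TMPFS_MB:-4096}"            # the 4 GB tmpfs volume created below
limit_mb=$((TMPFS_MB * 60 / 100))
used_mb=$(du -sm "${IMAGE_DIR}" 2>/dev/null | cut -f1)
used_mb="${used_mb:-0}"                 # 0 if the directory does not exist yet
if [ "${used_mb}" -le "${limit_mb}" ]; then
  echo "OK: image uses ${used_mb} MB, within the ${limit_mb} MB (60%) limit"
else
  echo "WARNING: image uses ${used_mb} MB, over ${limit_mb} MB; exclude more paths in tar"
fi
```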


3. Configuring openQRM to Support the Above Template:

  1. Activate dhcpd plugin then the tftp plugin
  2. Activate NFS Storage (if not already done so)
    1. Under Plugins -> Storage -> NFS-Storage
    2. Add NFS Storage;
    3. name "openqrm-nfs"
    4. Deployment Type: "nfs-deployment"
  3. Add NFS Volume (this triggers tmpfs storage)
    1. Under Plugins -> Storage -> NFS-Storage -> Volume Admin -> Edit -> proxmox_image "ADD IMAGE"
  4. Restart the openQRM server/VM in case duplicate services were started during chroot image initialisation
  5. Now create a TmpFs-Storage: Plugins -> Storage -> Tmpfs-storage -> Volume Admin -> New Storage
    1. Name: openqrm-tmpfs
    2. Deployment Type: tmpfs-storage
  6. Now Create an Image: Components -> Image -> Add new Image -> Tmpfs-root deployment -> click edit on the "openqrm-tmpfs" -> Click "ADD NEW VOLUME"
    1. Name: pve7
    2. Size: 4 GB
    3. Description: proxmox ve 7
  7. Now network-boot a new node (either a KVM or a physical machine); you will need to link this resource to a server. A resource is a blank system, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an "idle" state and will be selectable as "idle" in this next step.
    1. Click "ADD A NEW SERVER"
    2. Select the resource
    3. Then select the image for the server: choose the pve7 tmpfs-deployment set up previously (leave "edit image details after selection" ticked)
    4. Then click "Install from NAS/NFS", select the "proxmox_image" created above, then click submit
    5. Then select the kernel pve-5.11.22-6, then click submit
    6. Done
  8. You will then need to start the server: click "start", and the idle resource will reboot and boot the image created above
  9. Once booted you may need to restart sshd and pve-cluster
    1. systemctl restart ssh pve-cluster
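Once the node is up, a quick sanity check can confirm the memory-resident deployment. This sketch is not part of the original procedure and is run on the booted node, not on the openQRM server; the service names follow the restart step above.

```shell
# Verify the node booted memory resident and key services are active.
df -h /                                  # root filesystem usage at a glance
if mount | awk '$3 == "/" {print $5}' | grep -q tmpfs; then
  echo "root is tmpfs"
else
  echo "root is NOT tmpfs - check the tmpfs deployment"
fi
systemctl is-active ssh pve-cluster 2>/dev/null || true
```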


Notes/Customisations:

  1. Postfix may emit a warning on boot; edit /etc/mailname to fix it
  2. Nodes booted without the ATU plugin will lose configuration upon reboot!
  3. When changing kernel versions, a stop and start of the server is required

This technology preview demonstrates the tmpfs capabilities that support Proxmox VE as a memory-resident operating system.

About the ATU Plugin:

The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems.



About openQRM:

openQRM is available in both community and enterprise versions. Both versions are open source, with the Enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.