How to build Proxmox tmpfs image

Follow the steps below to convert Proxmox VE to a PXE-booted, tmpfs-based, memory-resident operating system.

Once you have a running openQRM Server, you can follow these steps.

This process is supported in both the community and enterprise versions of openQRM.

You will need the following plugins enabled: dhcpd, tftp, nfs-storage, tmpfs-storage, and atu (optional, available in the enterprise package).

Pre-built Proxmox VE templates are available for download in the customer portal.


About openQRM:

openQRM Enterprise is a turnkey deployment and management platform with over 55 plugins allowing a variety of deployment options. This article describes how to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory-resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.


Why is this solution so exciting?

When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network-boot an operating system into a RAM disk. This RAM disk is essentially the local storage for the server. Being memory resident, the system's storage runs at RAM speed, roughly an order of magnitude faster than NVMe. So if the node lost network connectivity it would still be able to function, as the node would already have booted and be running just as if it had locally attached storage.


Hold on, this is too good to be true. What are the downsides?

Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster then the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.


Requirements:

  • openQRM Community or Enterprise (a KVM is the suggested option)
  • optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management
  • CPU: 64-bit Intel EM64T or AMD64
  • PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support (see the quick check after this list)
  • Debian 11 Bullseye
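
A quick way to sanity-check these CPU requirements on a candidate node (a minimal sketch; the exact dmesg output varies by platform):

# Confirm hardware virtualization support (vmx = Intel VT-x, svm = AMD-V); a non-zero count is good
grep -cE 'vmx|svm' /proc/cpuinfo
# Confirm the IOMMU is active if PCI(e) passthrough is required (Intel VT-d / AMD-Vi)
dmesg | grep -i -e DMAR -e IOMMU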

Suggested minimum specifications:

  • openQRM Server: 1 GB RAM & 1 CPU
  • Virtual or hardware node (booted via tmpfs): 6-8 GB RAM; 4 GB for tmpfs and 2-4 GB for the OS and services.
  • Clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.

What is the ATU plugin?

The ATU plugin is available in openQRM Enterprise. It allows the configuration of the server to be maintained and synchronised across reboots and power-loss events. The ATU plugin is open source and written in bash; it controls the start-up sequence, including configuration restoration and service start ordering, which is especially important for Proxmox VE.

Ensure apparmor is removed:

apt remove --assume-yes --purge apparmor

Let's Start:

1. Adding a Proxmox Kernel to openQRM:

  1. Download the PVE kernel (check to see if there is a newer kernel):
    1. Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb
    2. Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb
    3. Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb
  2. Install the kernel locally (see the sketch after this list)
  3. Then add the kernel to openQRM. (Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS and SERVER_NAME with the appropriate values.)
    1. /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor
    2. /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor
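
A minimal sketch of the download, install and register steps for the Proxmox 7 kernel, assuming the package URL listed above and the example openqrm/openqrm UI credentials shown in step 3.2:

# Download and install the Proxmox 7 PVE kernel package on the openQRM server
wget http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb
dpkg -i pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb
# Register the installed kernel with openQRM (same command as step 3.2 above)
/usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor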


2. Creating an Image suitable for TMPFS Boot:

  1. Create Image - To create an image for Proxmox VE (the image will be named "proxmox_image") which can be used as a tmpfs image, follow these steps:
    1. apt-get -y install debootstrap
    2. mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus
    3. Bootstrap with either Debian 11 or 12
      1. Debian 11:
        1. debootstrap --arch amd64 bullseye /exports/proxmox_image/ https://deb.debian.org/debian/
      2. Debian 12:
        1. debootstrap --arch amd64 bookworm /exports/proxmox_image/ https://deb.debian.org/debian/
    4. mount --bind /dev/ /exports/proxmox_image/dev/
    5. mount --bind /dev/pts /exports/proxmox_image/dev/pts
    6. mount --bind /proc /exports/proxmox_image/proc
    7. #mount --make-rprivate /exports/proxmox_image/
    8. mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus
    9. #mount --bind /exports/proxmox_image/ /exports/proxmox_image/
    10. #mount --make-rprivate /exports/proxmox_image/
    11. chroot /exports/proxmox_image
    12. apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common python3 snmpd
    13. apt-get install python-is-python3
    14. dpkg-reconfigure locales
    15. dpkg-reconfigure tzdata
    16. Follow the steps (start at "Install Proxmox VE") at:
      1. Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye
      2. Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
        1. We do not need to install grub or any other boot loaders
    17. To install Ceph support, add the relevant repository and install the packages:
      1. apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli
    18. To add FRRouting, add the relevant repository and install the packages:
      1. apt-get -y install frr frr-pythontools
    19. Set the root password: passwd
    20. (optional) implement noclear for getty/inittab;
      1. mkdir -p /etc/systemd/system/getty@tty1.service.d/
      2. Edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents shown below (or use the non-interactive sketch that follows):

[Service]

TTYVTDisallocate=no
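
Equivalently, inside the chroot the drop-in file can be created non-interactively; a minimal sketch of the two noclear steps above:

# Create the drop-in directory and write the noclear configuration in one go
mkdir -p /etc/systemd/system/getty@tty1.service.d/
cat > /etc/systemd/system/getty@tty1.service.d/noclear.conf <<'EOF'
[Service]
TTYVTDisallocate=no
EOF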

  1. Remember: /etc/hosts needs a valid hostname with your IP address (see the example after this list)
    1. This is managed with the ATU plugin
  2. Symlink ssh.service to sshd.service (required for pve-cluster):
    1. ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service
  3. Exit the chroot by typing exit
  4. Unmount the binds:
    1. umount /exports/proxmox_image/dev/pts
    2. umount /exports/proxmox_image/dev
    3. umount /exports/proxmox_image/proc
    4. umount /exports/proxmox_image/var/run/dbus
  5. (optional) If using the ATU Plugin follow these steps;
    1. (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, management of services needs to be done outside of the chroot. To find the enabled services:
      1. systemctl list-unit-files --root /exports/proxmox_image/ | grep -v disabled | grep enabled
    2. (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. systemd does not behave well inside a chroot, so we point systemctl at the image root directory as follows:
      1. /bin/systemctl disable rc-local --root /exports/proxmox_image/
      2. /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service proxmox-boot-cleanup.service --root /exports/proxmox_image/
      3. #/bin/systemctl disable ksm.service ksmtuned.service --root /exports/proxmox_image/
      4. /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/
      5. /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/
      6. /bin/systemctl disable lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/
      7. /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/
      8. /bin/systemctl disable iscsid.service open-iscsi.service --root /exports/proxmox_image/
      9. #/bin/systemctl disable iscsi.service --root /exports/proxmox_image/
      10. /bin/systemctl disable pve-firewall.service pvefw-logger.service pve-daily-update.timer --root /exports/proxmox_image/
      11. #/bin/systemctl disable pvesr.timer --root /exports/proxmox_image/
      12. /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/
      13. /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/
      14. /bin/systemctl disable pveproxy.service pvestatd.service --root /exports/proxmox_image/
      15. /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service --root /exports/proxmox_image/
      16. /bin/systemctl disable rsyslog.service --root /exports/proxmox_image/
      17. /bin/systemctl disable dm-event.socket rbdmap.service --root /exports/proxmox_image/
      18. #/bin/systemctl disable smartd.service --root /exports/proxmox_image/
      19. /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/
      20. /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service --root /exports/proxmox_image/
      21. /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/
      22. /bin/systemctl disable rsync.service --root /exports/proxmox_image/
      23. #/bin/systemctl disable netdiag.service console-setup.service --root /exports/proxmox_image/
      24. /bin/systemctl disable dropbear nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/
      25. #/bin/systemctl disable nfs-ganesha-lock --root /exports/proxmox_image/
      26. /bin/systemctl disable nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/
      27. /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/
      28. If you have ceph installed disable;
        1. /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/
      29. If you have Ganesha installed for nfs;
        1. /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service --root /exports/proxmox_image/
      30. /bin/systemctl disable nfs-common.service --root /exports/proxmox_image/
      31. /bin/systemctl disable puppet --root /exports/proxmox_image/
      32. /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/
      33. /bin/systemctl disable pve-firewall pvestatd pveproxy pvedaemon spiceproxy qmeventd rrdcached lxc pve-ha-crm pve-ha-lrm pve-lxc-syscalld lxcfs lxc-net lxc-monitord --root /exports/proxmox_image/
    3. (if using the ATU plugin) disable services (some services may not exist):
      1. /bin/systemctl disable pvedaemon pveproxy pve-cluster corosync pvestatd rrdcached spiceproxy --root /exports/proxmox_image/
  6. Tar the Image;
    1. mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/
    2. cd /exports/proxmox_image
    3. tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .
  7. When tarring the image above, there are other directories that are not required and can be excluded. We suggest keeping the uncompressed image size to 55-60% of the tmpfs volume size allocated (4 GB as below); see the size check after this list.
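
A quick way to check the uncompressed image size against that 55-60% guideline before tarring (a minimal sketch):

# Report the uncompressed size of the image tree; for a 4 GB tmpfs volume, aim for roughly 2.2-2.4 GB
du -sh /exports/proxmox_image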
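
For the /etc/hosts reminder above, a minimal example; the hostname and address are hypothetical, so use the name and IP your DHCP server assigns to the node (the node's own name must resolve to its real IP):

# /etc/hosts inside the image
127.0.0.1       localhost
192.168.1.50    pve-node1.example.com pve-node1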


3. Configuring openQRM to support the above template:

  1. Activate the dhcpd plugin, then the tftp plugin
  2. Activate NFS Storage (if not already done so)
    1. Under Plugins -> Storage -> NFS-Storage
    2. Add NFS Storage:
    3. Name: "openqrm-nfs"
    4. Deployment Type: "nfs-deployment"
  3. Add NFS Volume (this triggers tmpfs storage)
    1. Under Plugins -> Storage -> NFS-Storage -> Volume Admin -> Edit -> proxmox_image "ADD IMAGE"
  4. Restart the openQRM server/VM in case duplicate services were started during chroot image initialisation
  5. Now create a TmpFs-Storage: Plugins -> Storage -> Tmpfs-storage -> Volume Admin -> New Storage
    1. Name: openqrm-tmpfs
    2. Deployment Type: tmpfs-storage
  6. Now Create an Image: Components -> Image -> Add new Image -> Tmpfs-root deployment -> click edit on the "openqrm-tmpfs" -> Click "ADD NEW VOLUME"
    1. Name: pve7
    2. Size: 4 GB
    3. Description: proxmox ve 7
  7. Now network-boot a new node, either a KVM VM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource. When a system has booted via DHCP/PXE, it enters an "idle" state and will be selectable as "idle" for this next step.
    1. Click "ADD A NEW SERVER"
    2. Select the resource
    3. Then select the image for the server: choose the pve7 tmpfs-deployment set up previously (leave the tick on "edit image details" after selection).
    4. Then click "Install from NAS/NFS", select the "proxmox_image" created above, then click submit.
    5. Then select the kernel pve-5.11.22-6 and click submit.
    6. Done
  8. You will then need to start the server: click "start"; the idle resource will reboot and boot the image created above
  9. Once booted, you may need to restart sshd and pve-cluster (a quick verification sketch follows this list)
    1. systemctl restart ssh pve-cluster
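
Once the node is up, a short check that the Proxmox services are answering; a sketch, where NODE_IP is a placeholder for the address your node received (8006 is the standard PVE web UI port):

# On the booted node: confirm the cluster filesystem and web proxy are running
systemctl status pve-cluster pveproxy
# From anywhere on the network: the PVE web UI should answer on port 8006 (self-signed certificate, hence -k)
curl -k https://NODE_IP:8006/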


Notes/Customisations:

  1. Postfix may emit a warning on boot; edit /etc/mailname (see the example after this list)
  2. Nodes booted without the ATU plugin will lose their configuration upon reboot!
  3. When changing kernel versions, a stop and start of the server is required
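
A minimal example for the /etc/mailname note above; the FQDN is hypothetical, so use the node's real fully qualified hostname:

# Set the mail name Postfix expects so it stops warning at boot
echo "pve-node1.example.com" > /etc/mailname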

This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory-resident operating system.

About the ATU Plugin:

The ATU plugin is a server service-management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that configuration synchronisation. This is a vital plugin for tmpfs-based operating systems.



About openQRM:

openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package providing commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.