<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.openqrm-enterprise.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Stvsyf</id>
	<title>openQRM - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.openqrm-enterprise.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Stvsyf"/>
	<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/view/Special:Contributions/Stvsyf"/>
	<updated>2026-04-08T14:11:45Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.9</generator>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=973</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=973"/>
		<updated>2025-02-23T09:53:14Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: change ownership of /var/log/pveproxy/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a PXE-booted, memory-resident (tmpfs) operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled: dhcpd, tftp, nfs-storage, tmpfs-storage and atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a turn-key deployment and management platform with over 55 plugins allowing a variety of deployment options. This article describes how to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory-resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run the node. This solution allows a compute node to PXE network-boot an operating system into a RAM disk, which then serves as the server's local storage. System RAM is exceptionally fast, roughly an order of magnitude faster than NVMe. And if the node loses network connectivity it can still function, because it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true. What are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Because the system is memory resident, the local configuration is lost if power is lost. However, if the node is part of a cluster the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye or Debian 12 Bookworm&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM; 4GB for tmpfs and 2-4GB for the OS and services.&lt;br /&gt;
* Clustering requires co-ordinated initialisation and configuration backup; the ATU Plugin orchestrates these steps.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration to be maintained and synchronised across reboots and power-loss events. The plugin is open source and written in bash; it controls the start-up sequence, including the configuration and service-start ordering that is especially important for Proxmox VE.&lt;br /&gt;
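As a rough illustration of the kind of control the ATU plugin provides, the sketch below shows ordered service start-up in plain bash. The function name and service list are hypothetical examples for a PVE node, not the plugin's actual code.&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Hypothetical sketch of ordered service start-up, in the spirit of the
# ATU plugin. The service list below is an example ordering for a PVE
# node; the real plugin's list and commands may differ.
SERVICE_ORDER=(pve-cluster corosync pvedaemon pveproxy pvestatd)

start_in_order() {
  local svc
  for svc in "${SERVICE_ORDER[@]}"; do
    # On a live node this would be: systemctl start "$svc"
    echo "starting $svc"
  done
}

start_in_order
```

Starting pve-cluster before the other PVE services matters, because they read their configuration from the pmxcfs filesystem it provides.&lt;br /&gt;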
&lt;br /&gt;
'''Ensure apparmor is removed:'''&lt;br /&gt;
&lt;br /&gt;
apt remove --assume-yes --purge apparmor&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install the kernel locally&lt;br /&gt;
# Then add the kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS and SERVER_NAME with the appropriate values:&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
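The three steps above can be combined into one small script. The kernel version and credentials below are the examples from this article; adjust them for your installation. The download and install lines are commented out so the script is safe to dry-run: as written it only prints the registration command.&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Sketch of step 1 as a single script (download, install, register).
# KERNEL_URL/KERNEL_NAME/KERNEL_VER and the credentials are the example
# values used in this article.
KERNEL_URL="http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"
KERNEL_NAME="pve-5.11.22-6"
KERNEL_VER="5.11.22-3-pve"
OPENQRM_UI_USER="openqrm"
OPENQRM_UI_PASS="openqrm"

# Uncomment to actually download and install the kernel package:
# wget -q "$KERNEL_URL"
# dpkg -i "${KERNEL_URL##*/}"

kernel_add_cmd() {
  # Assemble the 'openqrm kernel add' invocation shown above.
  echo "/usr/share/openqrm/bin/openqrm kernel add -n $KERNEL_NAME -v $KERNEL_VER" \
    "-u $OPENQRM_UI_USER -p $OPENQRM_UI_PASS -l / -i initramfs -m csiostor"
}

kernel_add_cmd   # print the command that would be run
```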
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image Suitable for tmpfs Boot:'''&lt;br /&gt;
# Create the image - to create an image for Proxmox VE (named &amp;quot;proxmox_image&amp;quot;) that can be used as a tmpfs image, follow these steps:&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Bootstrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## mount --bind /tmp /exports/proxmox_image/tmp&lt;br /&gt;
## #mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common python3 snmpd&lt;br /&gt;
## apt-get install python-is-python3&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow the steps (start at &amp;quot;Install Proxmox VE&amp;quot;) at:&lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### There is no need to install grub or any other boot loader&lt;br /&gt;
## To install Ceph support, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''Set the root password: passwd'''&lt;br /&gt;
## (optional) Implement noclear for getty/inittab:&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### Edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents:&lt;br /&gt;
&amp;lt;code&amp;gt;[Service]&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed by the ATU plugin&lt;br /&gt;
# Symlink ssh.service to sshd.service (required by pve-cluster):&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
#Remember to change the ownership of /var/log/pveproxy/:&lt;br /&gt;
##chown -R www-data:www-data /var/log/pveproxy&lt;br /&gt;
# Exit the chroot (type exit)&lt;br /&gt;
# Unmount the bind mounts:&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## umount /exports/proxmox_image/tmp&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, service management needs to be done outside the chroot. To find the enabled services:&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the plugin manages cluster initialisation, the services need to be started in an orderly fashion by the plugin, so we remove them from startup. To do this we point systemctl at the image root directory as follows:&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable ksm.service ksmtuned.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable iscsi.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable pvesr.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable smartd.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsync.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable netdiag.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable dropbear nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable nfs-ganesha-lock --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ceph installed, disable:&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for NFS, disable:&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall pvestatd pveproxy pvedaemon spiceproxy qmeventd rrdcached lxc pve-ha-crm pve-ha-lrm pve-lxc-syscalld lxcfs lxc-net lxc-monitord --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) Disable these services (some may not exist):&lt;br /&gt;
### /bin/systemctl disable pvedaemon pveproxy pve-cluster corosync pvestatd rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image at 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
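The bind-mount and unmount steps in section 2 are easy to get wrong; a stale bind mount left in place means the host's /dev or /proc ends up inside the tar. Below is a sketch that pairs them, with a DRYRUN mode so it can be tried without root. The helper names are our own, not part of openQRM.&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Sketch: the bind-mount/unmount steps above as one paired script.
# IMAGE is the path used in this article. With DRYRUN=1 (the default
# here) the script only prints what it would do.
set -eu
IMAGE="/exports/proxmox_image"
BINDS="/dev /dev/pts /proc /var/run/dbus /tmp"
DRYRUN="${DRYRUN:-1}"

run() { if [ "$DRYRUN" = 1 ]; then echo "$*"; else "$@"; fi; }

mount_binds() {
  for b in $BINDS; do
    run mount --bind "$b" "$IMAGE$b"
  done
}

umount_binds() {
  # Unmount in reverse order, so /dev/pts is released before /dev.
  for b in $(printf '%s\n' $BINDS | tac); do
    run umount "$IMAGE$b"
  done
}

mount_binds
# ... chroot "$IMAGE" and perform the installation steps here ...
umount_binds
```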
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to Support the Above Template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS storage (if not already done)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS storage:&lt;br /&gt;
## Name: &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node (either a KVM or a physical machine); you will need to link this resource to a server. A resource is a blank system, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and becomes selectable for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose pve7 (the tmpfs-deployment set up previously), leaving the tick on &amp;quot;edit image details&amp;quot; after selection&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above and click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot; and the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
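Once the node is up, a quick way to see whether that restart is needed is to compare the expected services against what is actually running. The helper below is our own illustration, not part of openQRM; it takes a newline-separated list of running service names.&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Sketch: report which expected services are not yet running, so you
# know whether the restart in the last step is needed. The service
# names are the ones used in this article.
REQUIRED="ssh pve-cluster pveproxy pvedaemon"

missing_services() {
  # $1 is a newline-separated list of running service names, e.g. from:
  #   systemctl list-units --type=service --state=running --no-legend |
  #     awk '{print $1}' | sed 's/\.service$//'
  local running="$1" svc
  for svc in $REQUIRED; do
    printf '%s\n' "$running" | grep -qx "$svc" || echo "$svc"
  done
}

# Example: a node where pve-cluster has not started yet.
missing_services "ssh
pveproxy
pvedaemon"
```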
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates openQRM's tmpfs capabilities, running Proxmox VE as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service-management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising system and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins, openQRM manages storage, networking, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=972</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=972"/>
		<updated>2024-11-25T21:35:27Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: added tmp mounts&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a Turn Key Deployment and Management Platform, with over 55 plugins allowing variety of deployment options. This article describes the deployment methods to convert Proxmox into a tmpfs image allowing servers to PXE boot and Run Proxmox as a memory resident operating system requiring now attached storage. This is perfect for compute nodes and allow KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including; NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to pxe network boot an operating system into a ram disk. This ram disk is essentially the local storage for the server. Being memory resident the system ram is exceptionally fast, several times faster in order of magnitude than NVMe. So if the node lost network connectivity it would still be able to function as the node would have already been booted and running just like it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well its memory resident, so if power is lost the local configuration would be lost. However if the node is part of a cluster then the cluster would hold the PVE configuration and if using the ATU plugin is used the configuration would be synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-d CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggest minimum specification for:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs) 6-8GB. 4GB for tmpfs and 2-4GB for OS and Services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration synchronisation of the server to be maintain during reboots and power loss events. The ATU plugin is open source and written in bash and allows the start up sequence to be controlled and important configuration and service start sequences especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
'''Ensure apparmor is removed;'''&lt;br /&gt;
&lt;br /&gt;
apt remove --assume-yes --purge apparmor&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS, SERVER_NAME with the appropriate variables)&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating Image suitable to TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Boostrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 buster /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## mount --bind /tmp /exports/proxmox_image/tmp&lt;br /&gt;
## #mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common python3 snmpd&lt;br /&gt;
## apt-get install python-is-python3&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ &lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relavent repository and add packages;&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting add the relavent repository and add packages;&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
&amp;lt;code&amp;gt;[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#symlink ssh.service to sshd.service required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## umount /exports/proxmox_image/tmp&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd management of services needs to be done externally of the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugins manages cluster initialisation these services need to be started in an orderly fashion by the plugin. So we then remove services from startup, systemd is not friendly, so we need to point systemctl to the root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable ksm.service ksmtuned.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable iscsi.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable pvesr.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable smartd.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsync.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable netdiag.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable dropbear nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable nfs-ganesha-lock --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ceph installed, disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for NFS, disable;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall pvestatd pveproxy pvedaemon spiceproxy qmeventd rrdcached lxc pve-ha-crm pve-ha-lrm pve-lxc-syscalld lxcfs lxc-net lxc-monitord --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pveproxy pve-cluster corosync pvestatd rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
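As a rough check of the size guideline above, a small helper can compare the uncompressed image size (for example from du -sm) against the tmpfs volume allocation. This is a sketch under our own assumptions: the helper name and the 60% ceiling are ours, derived from the 55-60% guideline.

```shell
# Sketch: check that the uncompressed image fits the 55-60% guideline.
# fits_tmpfs IMAGE_MB VOLUME_MB -> exit 0 if the image is <= 60% of the volume.
fits_tmpfs() {
  local image_mb=$1 vol_mb=$2
  # integer arithmetic: image*100 <= volume*60  <=>  image <= 60% of volume
  [ $(( image_mb * 100 )) -le $(( vol_mb * 60 )) ]
}

# Measure the image first, e.g.: image_mb=$(du -sm /exports/proxmox_image | cut -f1)
fits_tmpfs 2300 4096 && echo "fits" || echo "too large for tmpfs volume"
```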
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM VM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource/system/server. Once a system has booted via DHCP/PXE it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server: select pve7 (tmpfs-deployment) as previously set up (leave the tick on &amp;quot;edit image details after selection&amp;quot;)&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
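The post-boot restart step above can be made more robust by polling until pve-cluster reports active. This is a hypothetical sketch: the function name and the CHECK override are our own (the override exists only so the loop can be exercised without systemd).

```shell
# Sketch: after boot, restart ssh/pve-cluster, then poll until a unit is
# active. CHECK is injectable so the loop can run without a live systemd.
CHECK=${CHECK:-"systemctl is-active --quiet"}

wait_active() {
  local unit=$1 tries=${2:-30}
  while [ "$tries" -gt 0 ]; do
    $CHECK "$unit" && return 0      # unit is up
    tries=$((tries - 1))
    sleep 1
  done
  return 1                          # gave up waiting
}

# Usage on the booted node:
#   systemctl restart ssh pve-cluster && wait_active pve-cluster
```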
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may log a warning on boot; edit /etc/mailname to resolve it&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that configuration synchronisation. It is a vital plugin for tmpfs-based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=971</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=971"/>
		<updated>2024-09-17T04:50:10Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: updating systemctl&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE into a PXE-booted, tmpfs memory-resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled: dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a Turn Key Deployment and Management Platform, with over 55 plugins allowing a variety of deployment options. This article describes how to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory-resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run each node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which is essentially the local storage for the server. Being memory resident, system RAM is exceptionally fast, far faster than NVMe. So if the node lost network connectivity it would still be able to function, as it would already be booted and running just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true; what are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used, the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs): 6-8GB; 4GB for tmpfs and 2-4GB for OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration synchronisation of the server to be maintained during reboots and power loss events. The ATU plugin is open source and written in bash; it allows the start-up sequence to be controlled, along with important configuration and service start sequences, which is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
'''Ensure apparmor is removed;'''&lt;br /&gt;
&lt;br /&gt;
apt remove --assume-yes --purge apparmor&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the Kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values);&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
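The version string passed with -v mirrors the version embedded in the kernel .deb filename. As a sketch (the helper name and sed pattern are our own, not part of openQRM), it can be derived rather than copied by hand:

```shell
# Sketch: derive the -v version string for 'openqrm kernel add' from a PVE
# kernel .deb filename, e.g. pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb.
kernel_ver_from_deb() {
  # keep everything between "(pve|proxmox)-kernel-" and the first underscore
  basename "$1" | sed -E 's/^(pve|proxmox)-kernel-([^_]+)_.*/\2/'
}

deb=pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb
kernel_ver_from_deb "$deb"   # prints 5.11.22-3-pve
```

The result can then feed the command above, e.g. -v $(kernel_ver_from_deb "$deb").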
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating Image suitable to TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Bootstrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## #mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common python3 snmpd&lt;br /&gt;
## apt-get install python-is-python3&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow the steps (starting at &amp;quot;Install Proxmox VE&amp;quot;) at: &lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install Ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
&amp;lt;code&amp;gt;[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# Symlink ssh.service to sshd.service, required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, management of services needs to be done outside of the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin. We therefore remove them from startup; systemctl must be pointed at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable ksm.service ksmtuned.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable iscsi.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable pvesr.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable smartd.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsync.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable netdiag.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable dropbear nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable nfs-ganesha-lock --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ceph installed, disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for NFS, disable;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall pvestatd pveproxy pvedaemon spiceproxy qmeventd rrdcached lxc pve-ha-crm pve-ha-lrm pve-lxc-syscalld lxcfs lxc-net lxc-monitord --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pveproxy pve-cluster corosync pvestatd rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
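The long run of systemctl disable commands above can be collapsed into a loop that tolerates units missing from a given build. This is a sketch: disable_units and the SYSTEMCTL/IMAGE_ROOT variables are our own names, and the override exists only so the loop can be dry-run.

```shell
# Sketch: disable a list of units against the image root; units that do not
# exist in this particular build are skipped rather than aborting the loop.
IMAGE_ROOT=${IMAGE_ROOT:-/exports/proxmox_image/}
SYSTEMCTL=${SYSTEMCTL:-/bin/systemctl}

disable_units() {
  local unit
  for unit in "$@"; do
    $SYSTEMCTL disable "$unit" --root "$IMAGE_ROOT" 2>/dev/null || true
  done
}

# e.g.: disable_units pveproxy.service pvedaemon.service pvestatd.service
```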
&lt;br /&gt;
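Similarly, the bind-mount setup and the matching teardown from the image-build steps above can be paired so nothing is left mounted. Function names and the MOUNT/UMOUNT overrides are our own; the paths match the mount/umount steps above.

```shell
# Sketch: set up and tear down the chroot bind mounts as a matched pair.
# MOUNT/UMOUNT are injectable so the sequence can be printed without root.
IMAGE_ROOT=${IMAGE_ROOT:-/exports/proxmox_image}
MOUNT=${MOUNT:-mount}
UMOUNT=${UMOUNT:-umount}
BINDS="/dev /dev/pts /proc /var/run/dbus"

bind_image() {
  local d
  for d in $BINDS; do $MOUNT --bind "$d" "$IMAGE_ROOT$d"; done
}

unbind_image() {
  local d
  # unmount in reverse order so /dev/pts is released before /dev
  for d in $(printf '%s\n' $BINDS | tac); do $UMOUNT "$IMAGE_ROOT$d"; done
}
```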
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM VM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource/system/server. Once a system has booted via DHCP/PXE it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server: select pve7 (tmpfs-deployment) as previously set up (leave the tick on &amp;quot;edit image details after selection&amp;quot;)&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may log a warning on boot; edit /etc/mailname to resolve it&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that configuration synchronisation. It is a vital plugin for tmpfs-based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=970</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=970"/>
		<updated>2024-09-17T04:23:33Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: remove deprecated &amp;amp; non-existant services&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE into a PXE-booted, tmpfs memory-resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled: dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a Turn Key Deployment and Management Platform, with over 55 plugins allowing a variety of deployment options. This article describes how to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory-resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run each node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which is essentially the local storage for the server. Being memory resident, system RAM is exceptionally fast, far faster than NVMe. So if the node lost network connectivity it would still be able to function, as it would already be booted and running just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true; what are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used, the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs): 6-8GB; 4GB for tmpfs and 2-4GB for OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration synchronisation of the server to be maintained during reboots and power loss events. The ATU plugin is open source and written in bash; it allows the start-up sequence to be controlled, along with important configuration and service start sequences, which is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
'''Ensure apparmor is removed;'''&lt;br /&gt;
&lt;br /&gt;
apt remove --assume-yes --purge apparmor&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the Kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values);&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating Image suitable to TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Bootstrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## #mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get -y install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common python3 snmpd&lt;br /&gt;
## apt-get install python-is-python3&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow the steps (starting at &amp;quot;Install Proxmox VE&amp;quot;) at:&lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
&amp;lt;code&amp;gt;[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# Symlink ssh.service to sshd.service (required for pve-cluster);&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# Exit the chroot by typing exit&lt;br /&gt;
# Unmount the bind mounts;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, service management needs to be done outside of the chroot. To list enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin. We therefore remove them from startup; because systemd cannot be run inside the chroot, point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable ksm.service ksmtuned.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable iscsi.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable pvesr.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable smartd.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsync.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable netdiag.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable dropbear nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### #/bin/systemctl disable nfs-ganesha-lock --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pveproxy pve-cluster corosync pvestatd rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
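The long runs of disable commands above can also be generated from a single list, which makes it easier to keep per-build variations in one place; a minimal sketch (unit list abbreviated, pipe the output to sh to apply it):&lt;br /&gt;

```shell
# Emit one "systemctl disable" command per unit, pointed at the image
# root as required when managing a chroot image from outside.
IMAGE_ROOT=/exports/proxmox_image/
disable_cmds() {
    for unit in "$@"; do
        echo "/bin/systemctl disable $unit --root $IMAGE_ROOT"
    done
}
# Abbreviated unit list from the steps above; apply with: disable_cmds ... | sh
disable_cmds pvedaemon pveproxy pve-cluster corosync pvestatd rrdcached spiceproxy
```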
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the allocated tmpfs volume size (4 GB as below).&lt;br /&gt;
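The size guidance above can be checked with a small helper; sizes are in MB and the 4096 MB volume matches the 4 GB example (a sketch, not part of the original procedure):&lt;br /&gt;

```shell
# Return success when the uncompressed image fits the suggested 60%
# ceiling of the tmpfs volume; both sizes are given in MB.
fits_tmpfs() {
    image_mb=$1
    volume_mb=$2
    limit_mb=$(( volume_mb * 60 / 100 ))
    [ "$image_mb" -le "$limit_mb" ]
}
# Measure the image tree built above, then compare against a 4 GB volume:
#   IMAGE_MB=$(du -sm /exports/proxmox_image | cut -f1)
if fits_tmpfs 2300 4096; then
    echo "image fits"
else
    echo "image too large - exclude more directories"
fi
```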
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose the pve7 tmpfs-deployment set up previously (leave the tick on &amp;quot;edit image details after selection&amp;quot;)&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, and click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. This is a vital plugin for tmpfs-based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=969</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=969"/>
		<updated>2024-08-20T04:55:41Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding snmpd&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a Turn Key Deployment and Management Platform, with over 55 plugins allowing a variety of deployment options. This article describes the deployment method to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory-resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then serves as the local storage for the server. Being memory resident, system RAM is exceptionally fast, an order of magnitude faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs just as if it had locally attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration; and if the ATU plugin is used, the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs) 6-8GB. 4GB for tmpfs and 2-4GB for OS and Services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration synchronisation to be maintained during reboots and power-loss events. The ATU plugin is open source and written in bash; it allows the start-up sequence to be controlled, along with the configuration and service start sequences that are especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
'''Ensure apparmor is removed;'''&lt;br /&gt;
&lt;br /&gt;
apt remove --assume-yes --purge apparmor&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install the kernel locally&lt;br /&gt;
# Then add the kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values);&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Bootstrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## #mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get -y install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common python3 snmpd&lt;br /&gt;
## apt-get install python-is-python3&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow the steps (starting at &amp;quot;Install Proxmox VE&amp;quot;) at:&lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
&amp;lt;code&amp;gt;[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#symlink ssh.service to sshd.service required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, service management needs to be done outside of the chroot. To list enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin. We therefore remove them from startup; because systemd cannot be run inside the chroot, point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pveproxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the allocated tmpfs volume size (4 GB as below).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for server, select the pve7 = tmpfs-deployment as previously setup (leave the tick on edit image details after selection.)&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot; select the &amp;quot;proxmox_image&amp;quot; as above then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to &amp;quot;start&amp;quot; the server, click &amp;quot;start&amp;quot;, the idle resource will then reboot and boot the image as created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. This is a vital plugin for tmpfs-based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=968</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=968"/>
		<updated>2024-08-19T05:41:13Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a Turn Key Deployment and Management Platform, with over 55 plugins allowing a variety of deployment options. This article describes the deployment method to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory-resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then serves as the local storage for the server. Being memory resident, system RAM is exceptionally fast, an order of magnitude faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs just as if it had locally attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost with it. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM virtual machine is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM; 4GB for tmpfs and 2-4GB for the OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration to be maintained and synchronised across reboots and power-loss events. The plugin is open source, written in Bash, and controls the start-up sequence and the ordering of configuration and service starts, which is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
'''Ensure apparmor is removed:'''&lt;br /&gt;
&lt;br /&gt;
apt remove --assume-yes --purge apparmor&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download the PVE kernel (check whether a newer kernel is available):&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values):&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
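The kernel step above can be sketched as a small script. This is a hedged sketch: `pve_kernel_url` is a hypothetical helper reflecting the repository layout of the links above, and the filename/version strings are the Proxmox 7 (stable) example from this article; adjust them for the kernel you actually download.

```shell
# Hedged sketch: build the kernel download URL from the repository layout
# shown above, then install the kernel locally and register it with openQRM.
pve_kernel_url() {
    # $1 = Debian dist (bullseye or bookworm), $2 = .deb filename
    echo "http://download.proxmox.com/debian/dists/$1/pve-no-subscription/binary-amd64/$2"
}

KERNEL_DEB="pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"
# wget "$(pve_kernel_url bullseye "$KERNEL_DEB")"
# dpkg -i "$KERNEL_DEB"    # install the kernel locally on the openQRM server
# /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve \
#     -u openqrm -p openqrm -l / -i initramfs -m csiostor
```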
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating Image suitable to TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (the image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps:&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Bootstrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## #mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common python3&lt;br /&gt;
## apt-get install python-is-python3&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow the steps (start at &amp;quot;Install Proxmox VE&amp;quot;) at:&lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''Set the root password: passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab:&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents:&lt;br /&gt;
&amp;lt;code&amp;gt;[Service]&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#Symlink ssh.service to sshd.service (required for pve-cluster):&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# Exit the chroot by typing exit&lt;br /&gt;
# Unmount the bind mounts:&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin, follow these steps:&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, service management needs to be done outside of the chroot. To list enabled services:&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. Because systemd cannot be run inside the chroot, we point systemctl at the image's root directory as follows:&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed, disable:&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for NFS:&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the image:&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image at 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
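The bind-mount, chroot and tar steps above can be sketched as one script. This is a hedged sketch; `fits_tmpfs` is a hypothetical helper encoding the 55-60% sizing guideline, and the commented commands mirror the steps in this section (run as root on the openQRM server).

```shell
# Hedged sketch of the bind-mount, chroot and tar sequence above.
IMG=/exports/proxmox_image

# Succeeds when the uncompressed image size (MB, $1) is no more than 60%
# of the tmpfs volume size (MB, $2).
fits_tmpfs() {
    [ $(( $1 * 100 / $2 )) -le 60 ]
}

# for d in dev dev/pts proc var/run/dbus; do mount --bind "/$d" "$IMG/$d"; done
# chroot "$IMG"            # install packages as above, then type exit
# for d in var/run/dbus proc dev/pts dev; do umount "$IMG/$d"; done
# mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/
# cd "$IMG"
# tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm \
#     --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons \
#     --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .
```

For example, fits_tmpfs 2400 4096 succeeds (roughly 58% of a 4GB volume) while fits_tmpfs 3000 4096 fails.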
&lt;br /&gt;
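The long run of systemctl disable commands above can be driven from a single list, which is easier to maintain. A hedged sketch: SERVICES below holds only a few example units from the lists above; extend it with the full set you need.

```shell
# Hedged sketch: disable units in the image from a list instead of many
# individual commands. Failures are ignored because some units may not
# exist in every image.
IMG=/exports/proxmox_image
SERVICES="pve-cluster.service corosync.service pvedaemon.service pveproxy.service ssh.service rsyslog.service"
for s in $SERVICES; do
    /bin/systemctl disable "$s" --root "$IMG/" 2>/dev/null || true
done
```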
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate the dhcpd plugin, then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS storage:&lt;br /&gt;
## Name: &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM VM or a physical machine; you will need to link this resource to a server. A resource is a blank system, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Select the image for the server: choose pve7 (tmpfs-deployment) as previously set up (leave &amp;quot;edit image details after selection&amp;quot; ticked)&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
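Once the node is up, it can be worth confirming that the root filesystem really is memory resident before restarting services. A hedged sketch; `is_memory_resident` is a hypothetical helper, and findmnt is assumed to be available on the booted node.

```shell
# Hedged sketch: check whether a filesystem type indicates a RAM-backed root.
is_memory_resident() {
    # expects the filesystem type of /, e.g. from: findmnt -n -o FSTYPE /
    [ "$1" = "tmpfs" ]
}
# On the booted node:
# if is_memory_resident "$(findmnt -n -o FSTYPE /)"; then echo "running from RAM"; fi
# systemctl restart ssh pve-cluster
```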
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates openQRM's tmpfs capabilities by running Proxmox VE as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. This is a vital plugin for tmpfs-based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=967</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=967"/>
		<updated>2024-08-19T05:40:31Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a Turn Key Deployment and Management Platform, with over 55 plugins allowing variety of deployment options. This article describes the deployment methods to convert Proxmox into a tmpfs image allowing servers to PXE boot and Run Proxmox as a memory resident operating system requiring now attached storage. This is perfect for compute nodes and allow KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including; NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to pxe network boot an operating system into a ram disk. This ram disk is essentially the local storage for the server. Being memory resident the system ram is exceptionally fast, several times faster in order of magnitude than NVMe. So if the node lost network connectivity it would still be able to function as the node would have already been booted and running just like it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well its memory resident, so if power is lost the local configuration would be lost. However if the node is part of a cluster then the cluster would hold the PVE configuration and if using the ATU plugin is used the configuration would be synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-d CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggest minimum specification for:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs) 6-8GB. 4GB for tmpfs and 2-4GB for OS and Services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration synchronisation of the server to be maintain during reboots and power loss events. The ATU plugin is open source and written in bash and allows the start up sequence to be controlled and important configuration and service start sequences especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
'''Ensure apparmor is removed;'''&lt;br /&gt;
&lt;br /&gt;
apt remove --assume-yes --purge apparmor&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS, SERVER_NAME with the appropriate variables)&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating Image suitable to TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Boostrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 buster /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## #mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common python3&lt;br /&gt;
## apt-get install python-is-python3&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ &lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relavent repository and add packages;&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting add the relavent repository and add packages;&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
&amp;lt;code&amp;gt;[Service]&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#symlink ssh.service to sshd.service required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd management of services needs to be done externally of the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugins manages cluster initialisation these services need to be started in an orderly fashion by the plugin. So we then remove services from startup, systemd is not friendly, so we need to point systemctl to the root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When taring the image above, there are other directories that are not required that can be excluded. We suggest the uncompressed image size to be 55-60% of the available tmpfs volume size allocated (4GB as below).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node either a KVM or Physical machine, you will need to link this resource to a server. A resource is a blank system/server and a Server is a configuration applied to a resource/system/server. So when a system has booted via dhcp/pxe then system will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for this next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for server, select the pve7 = tmpfs-deployment as previously setup (leave the tick on edit image details after selection.)&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot; select the &amp;quot;proxmox_image&amp;quot; as above then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to &amp;quot;start&amp;quot; the server, click &amp;quot;start&amp;quot;, the idle resource will then reboot and boot the image as created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# when changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview displays the tmpfs memory resident capabilities to support Proxmox VE as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. It is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package adding commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=966</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=966"/>
		<updated>2024-08-19T05:39:29Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a turn-key deployment and management platform, with over 55 plugins allowing a variety of deployment options. This article describes the deployment method to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or locally attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then serves as the local storage for the server. Being memory resident, the system is exceptionally fast; RAM is an order of magnitude faster than NVMe. If the node loses network connectivity it can continue to function, since it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM; 4GB for tmpfs and 2-4GB for OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration of the server to be maintained across reboots and power loss events. The plugin is open source, written in bash, and controls the start-up sequence, including the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
'''Ensure apparmor is removed;'''&lt;br /&gt;
&lt;br /&gt;
apt remove --assume-yes --purge apparmor&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS and SERVER_NAME with the appropriate values;&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
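Taken together, the three steps above can be sketched as one script. This is a sketch only: the kernel package name, version, and the openqrm/openqrm UI credentials are the example values from this guide and should be replaced with your own.

```shell
#!/bin/sh
# Sketch of section 1: download a PVE kernel, install it locally on the
# openQRM server, then register it with openQRM. The kernel package and
# UI credentials below are the example values from this guide.
set -e

KERNEL_DEB="pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"
KERNEL_URL="http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/${KERNEL_DEB}"
KERNEL_NAME="pve-5.11.22-6"   # name openQRM will show for this kernel
KERNEL_VER="5.11.22-3-pve"    # must match the installed kernel version

add_pve_kernel() {
    wget -q "$KERNEL_URL"      # 1. download the kernel package
    dpkg -i "$KERNEL_DEB"      # 2. install it locally
    # 3. register the installed kernel with openQRM
    /usr/share/openqrm/bin/openqrm kernel add \
        -n "$KERNEL_NAME" -v "$KERNEL_VER" \
        -u openqrm -p openqrm -l / -i initramfs -m csiostor
}

# add_pve_kernel   # uncomment to run (requires root and network access)
```
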
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating Image suitable to TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Bootstrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## #mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common python3&lt;br /&gt;
## apt-get install python-is-python3&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ &lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
[Service]&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#symlink ssh.service to sshd.service required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, management of services needs to be done outside the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin. We therefore remove them from startup; systemd cannot be used from inside the chroot, so we point systemctl at the root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest the uncompressed image size be 55-60% of the available tmpfs volume size allocated (4GB as below).&lt;br /&gt;
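The 55-60% sizing guidance can be checked mechanically before copying the tarball. A small sketch (the 4096 MB figure matches the 4 GB tmpfs volume created in section 3; 60% is the upper end of the suggestion):

```shell
# Sketch: verify the uncompressed image fits within roughly 60% of the
# tmpfs volume before shipping the tarball.
image_fits_tmpfs() {
    # $1 = uncompressed image size in MB, $2 = tmpfs volume size in MB
    limit=$(( $2 * 60 / 100 ))
    if [ "$1" -le "$limit" ]; then echo yes; else echo no; fi
}

# Measure the staged image (du -sm reports the size in MB) and compare
# against the 4 GB (4096 MB) tmpfs volume used in this guide.
if [ -d /exports/proxmox_image ]; then
    image_mb=$(du -sm /exports/proxmox_image | cut -f1)
    echo "image: ${image_mb} MB, fits: $(image_fits_tmpfs "$image_mb" 4096)"
fi
```
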
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource/system/server. Once a system has booted via dhcp/pxe it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server: choose pve7, the tmpfs-deployment set up previously (leave the tick on &amp;quot;edit image details after selection&amp;quot;)&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6, then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot; and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
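The post-boot checks above (root running from memory, ssh and pve-cluster up) can be wrapped in a small helper. This is a sketch only; the service names are the ones used earlier in this guide, and it assumes systemctl and GNU df are present on the booted node.

```shell
# Sketch: on a freshly tmpfs-booted node, report the root filesystem type
# and restart any of the key Proxmox services that did not come up.
check_tmpfs_node() {
    # On a tmpfs-booted node the root filesystem should report as tmpfs.
    root_fs=$(df --output=fstype / | tail -n 1)
    echo "root filesystem: ${root_fs}"

    for svc in ssh pve-cluster; do
        if systemctl is-active --quiet "$svc"; then
            echo "${svc}: active"
        else
            echo "${svc}: not active, restarting"
            systemctl restart "$svc"
        fi
    done
}

# check_tmpfs_node   # run on the booted node
```
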
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# when changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview displays the tmpfs memory resident capabilities to support Proxmox VE as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. It is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package adding commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=965</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=965"/>
		<updated>2024-08-19T05:14:10Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding python3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a turn-key deployment and management platform, with over 55 plugins allowing a variety of deployment options. This article describes the deployment method to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or locally attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then serves as the local storage for the server. Being memory resident, the system is exceptionally fast; RAM is an order of magnitude faster than NVMe. If the node loses network connectivity it can continue to function, since it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM; 4GB for tmpfs and 2-4GB for OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration of the server to be maintained across reboots and power loss events. The plugin is open source, written in bash, and controls the start-up sequence, including the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
'''Ensure apparmor is removed;'''&lt;br /&gt;
&lt;br /&gt;
apt remove --assume-yes --purge apparmor&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS and SERVER_NAME with the appropriate values;&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating Image suitable to TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Bootstrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## #mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common python3&lt;br /&gt;
## apt-get install python-is-python3&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ &lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
[Service]&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#symlink ssh.service to sshd.service required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, management of services needs to be done outside the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin. We therefore remove them from startup; systemd cannot be used from inside the chroot, so we point systemctl at the root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
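The long run of disable commands above can be collapsed into one loop. A minimal sketch, not the openQRM tooling itself: the service list below is an illustrative subset of the units listed above, and the loop only runs if the image root actually exists.&lt;br /&gt;

```shell
# Sketch: disable a batch of units against the image root in one loop.
# IMAGE_ROOT and the service list mirror the commands above; extend as needed.
IMAGE_ROOT="/exports/proxmox_image/"
SERVICES="pveproxy.service pvestatd.service pvedaemon.service corosync.service \
ssh.service rsyslog.service smartd.service zfs.target zfs-mount.service"

# Only touch the image when it exists, to guard against accidental host changes.
if [ -d "$IMAGE_ROOT" ] && command -v systemctl >/dev/null; then
    for svc in $SERVICES; do
        # One unit at a time makes failures easy to attribute; missing units
        # are reported but do not abort the loop.
        /bin/systemctl disable "$svc" --root "$IMAGE_ROOT" || echo "skipped: $svc"
    done
fi
```

Disabling one unit per invocation also keeps the output readable when a unit named in the article does not exist in your image.&lt;br /&gt;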
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size at 55-60% of the allocated tmpfs volume size (4GB below).&lt;br /&gt;
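The 55-60% guideline can be checked with a quick calculation. A sketch using ordinary shell arithmetic; the 4096 MB figure assumes the 4GB tmpfs volume created in step 3.&lt;br /&gt;

```shell
# Sketch: verify the uncompressed image fits comfortably in the tmpfs volume.
# Usage percentage = image_mb * 100 / tmpfs_mb; aim for roughly 55-60%.
usage_pct() {
    image_mb=$1
    tmpfs_mb=$2
    echo $(( image_mb * 100 / tmpfs_mb ))
}

TMPFS_MB=4096                      # 4GB tmpfs volume, as allocated in step 3
IMAGE_DIR=/exports/proxmox_image

if [ -d "$IMAGE_DIR" ]; then
    # du -sm prints the uncompressed size in MB; the first field is the number.
    image_mb=$(du -sm "$IMAGE_DIR" | cut -f1)
    echo "image uses $(usage_pct "$image_mb" "$TMPFS_MB")% of tmpfs"
fi
```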
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and becomes selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose the pve7 tmpfs-deployment created earlier (leave the tick on edit image details after selection)&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to &amp;quot;start&amp;quot; the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname to resolve it&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that support Proxmox VE as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=964</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=964"/>
		<updated>2024-08-19T03:30:16Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: remove apparmor&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a Turn Key Deployment and Management Platform, with over 55 plugins allowing a variety of deployment options. This article describes the deployment methods to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run each node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then serves as the local storage for the server. Being memory resident, system RAM is exceptionally fast, roughly an order of magnitude faster than NVMe. If the node lost network connectivity it would still function, because it is already booted and running just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster then the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM, with 4GB for tmpfs and 2-4GB for the OS and services.&lt;br /&gt;
* Clustering requires co-ordinated initialisation and configuration backup; the ATU Plugin orchestrates these steps.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration synchronisation to be maintained during reboots and power loss events. The ATU plugin is open source and written in bash; it allows the start-up sequence to be controlled, along with the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
'''Ensure apparmor is removed;'''&lt;br /&gt;
&lt;br /&gt;
apt remove --assume-yes --purge apparmor&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the Kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS, SERVER_NAME with the appropriate values)&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
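The download, install, and kernel add strings above can be derived in one place. A hedged sketch: the URL is the Proxmox 7 stable package linked above, the version parsing uses plain shell substring operations, and the actual download/install only runs when DO_INSTALL=1 is set.&lt;br /&gt;

```shell
# Sketch: derive the -n and -v strings for "openqrm kernel add" from the
# package file name, then (optionally) download and install the .deb.
DEB_URL="http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"

deb_file=${DEB_URL##*/}                 # strip the URL path
kernel_ver=${deb_file%%_*}              # pve-kernel-5.11.22-3-pve
kernel_ver=${kernel_ver#pve-kernel-}    # 5.11.22-3-pve  (the -v value)
pkg_rev=${deb_file#*_}
pkg_rev=${pkg_rev%%_*}                  # 5.11.22-6
kernel_name="pve-$pkg_rev"              # pve-5.11.22-6  (the -n value)

# Guarded: set DO_INSTALL=1 on a real build host to fetch and install.
if [ "${DO_INSTALL:-0}" = 1 ]; then
    wget -q "$DEB_URL" -O "/root/$deb_file"
    dpkg -i "/root/$deb_file"
fi
echo "$kernel_name $kernel_ver"
```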
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating Image suitable to TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Bootstrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## #mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ &lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
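The noclear drop-in above can also be written non-interactively. A sketch; the /tmp default for ROOT is a placeholder for testing only, so set ROOT to the image path (or to an empty string inside the chroot) for real use.&lt;br /&gt;

```shell
# Sketch: create the getty@tty1 drop-in that stops the console being cleared.
# ROOT="" inside the chroot, or ROOT=/exports/proxmox_image from the host.
# The /tmp default below is a safe placeholder, not a real deployment path.
ROOT="${ROOT:-/tmp/proxmox_image_root}"
DROPIN_DIR="$ROOT/etc/systemd/system/getty@tty1.service.d"

mkdir -p "$DROPIN_DIR"
# Write the same two lines shown above into noclear.conf.
printf '[Service]\nTTYVTDisallocate=no\n' > "$DROPIN_DIR/noclear.conf"
```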
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#symlink ssh.service to sshd.service required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, management of services needs to be done from outside the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin. We therefore remove them from startup; because systemd cannot be managed from inside the chroot, we point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed, disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
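After disabling, it is worth re-running the earlier listing to see what is still enabled. A sketch that wraps the grep pipeline from above in a function; the systemctl call only runs when the image root exists.&lt;br /&gt;

```shell
# Sketch: show units still enabled in the image after the disables above.
IMAGE_ROOT=/exports/proxmox_image/

# The same pipeline as the list-unit-files command earlier, as a function.
# "grep -v disabled" must come first because "enabled" is a substring of
# "disabled" and would otherwise match those lines too.
filter_enabled() {
    grep -v disabled | grep enabled
}

if [ -d "$IMAGE_ROOT" ] && command -v systemctl >/dev/null; then
    systemctl list-unit-files --root "$IMAGE_ROOT" | filter_enabled
fi
```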
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size at 55-60% of the allocated tmpfs volume size (4GB below).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and becomes selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose the pve7 tmpfs-deployment created earlier (leave the tick on edit image details after selection)&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to &amp;quot;start&amp;quot; the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname to resolve it&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that support Proxmox VE as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=963</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=963"/>
		<updated>2024-08-19T00:47:32Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: comment out bind bind and make-rprivate&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a Turn Key Deployment and Management Platform, with over 55 plugins allowing a variety of deployment options. This article describes the deployment methods to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run each node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then serves as the local storage for the server. Being memory resident, system RAM is exceptionally fast, roughly an order of magnitude faster than NVMe. If the node lost network connectivity it would still function, because it is already booted and running just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster then the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM, with 4GB for tmpfs and 2-4GB for the OS and services.&lt;br /&gt;
* Clustering requires co-ordinated initialisation and configuration backup; the ATU Plugin orchestrates these steps.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration synchronisation to be maintained during reboots and power loss events. The ATU plugin is open source and written in bash; it allows the start-up sequence to be controlled, along with the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the Kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS, SERVER_NAME with the appropriate values)&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating Image suitable to TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Bootstrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## #mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ &lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#symlink ssh.service to sshd.service required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
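The bind mounts and the unmounts above must mirror each other, with unmounting done in reverse order so /dev/pts comes off before /dev. A minimal sketch of a paired helper, assuming the image path used in this article; the helper names are our own:

```shell
#!/bin/sh
# Sketch: pair the bind-mount setup and teardown around the chroot step.
# Assumes the image path used in this article; run as root.
IMAGE=/exports/proxmox_image
BINDS="/dev /dev/pts /proc /var/run/dbus"

# Precompute the reverse order for unmounting (/dev/pts before /dev).
REV_BINDS=""
for d in $BINDS; do REV_BINDS="$d${REV_BINDS:+ $REV_BINDS}"; done

setup_binds() {
    for d in $BINDS; do
        mount --bind "$d" "$IMAGE$d" || return 1
    done
}

teardown_binds() {
    for d in $REV_BINDS; do
        umount "$IMAGE$d"
    done
}
```

Call setup_binds before entering the chroot and teardown_binds after exiting it.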
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, service management needs to be done from outside the chroot. To find the enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. Because systemd cannot be driven from inside the chroot, we point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
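The long run of disable commands above can also be driven from a list. A sketch with an abridged service list taken from the steps above (extend it with the full set); units that do not exist in a given image are recorded rather than aborting the loop:

```shell
#!/bin/sh
# Sketch: disable ATU-managed services in the image from outside the chroot.
# Abridged list from the steps above; some units may not exist in every image.
ROOT=/exports/proxmox_image/
SERVICES="pve-cluster.service corosync.service pvedaemon.service pveproxy.service pvestatd.service ssh.service rsyslog.service ceph.target zfs.target"

MISSED=""
for s in $SERVICES; do
    /bin/systemctl disable "$s" --root "$ROOT" 2>/dev/null || MISSED="$MISSED $s"
done
[ -z "$MISSED" ] || echo "not disabled (check manually):$MISSED"
```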
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, further directories that are not required can be excluded. We suggest keeping the uncompressed image size to 55-60% of the tmpfs volume size allocated (4GB as below).&lt;br /&gt;
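The 55-60% guideline can be checked with du before tarring. A sketch, assuming the 4GB tmpfs volume size configured later in this article:

```shell
#!/bin/sh
# Sketch: check the uncompressed image against the 60% tmpfs guideline.
# Assumes the 4GB tmpfs volume configured later in this article.
IMAGE=/exports/proxmox_image
TMPFS_MB=4096
LIMIT_MB=$((TMPFS_MB * 60 / 100))     # 60% of the tmpfs volume, in MB

if [ -d "$IMAGE" ]; then
    USED_MB=$(du -sm "$IMAGE" | cut -f1)
else
    USED_MB=0                         # image not present on this host
fi

if [ "$USED_MB" -gt "$LIMIT_MB" ]; then
    echo "image ${USED_MB}MB exceeds ${LIMIT_MB}MB; exclude more directories"
else
    echo "image ${USED_MB}MB is within the ${LIMIT_MB}MB guideline"
fi
```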
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and is selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server: choose pve7 (the tmpfs-deployment set up previously) and leave the tick on &amp;quot;edit image details after selection&amp;quot;&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot; select the &amp;quot;proxmox_image&amp;quot; as above then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=962</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=962"/>
		<updated>2024-08-19T00:39:17Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: add extra bind and make-rprivate for debian 12&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a turnkey deployment and management platform, with over 55 plugins allowing a variety of deployment options. This article describes how to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk. This RAM disk is essentially the local storage for the server. Being memory resident, system RAM is exceptionally fast, orders of magnitude faster than NVMe. So if the node lost network connectivity it would still be able to function, as it would already be booted and running just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it's memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-d CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs) 6-8GB. 4GB for tmpfs and 2-4GB for OS and Services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration to be synchronised and maintained across reboots and power loss events. The ATU plugin is open source and written in bash; it allows the start-up sequence to be controlled, along with the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the kernel to openQRM. (Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS and SERVER_NAME with the appropriate values.)&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
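The fetch, install and register steps above can be strung together. A sketch for the Proxmox 7 stable kernel using the URL and values from this article; it prints each command and only executes when DRY_RUN=0, since the real run needs root and a reachable mirror:

```shell
#!/bin/sh
# Sketch: fetch, install and register a PVE kernel with openQRM.
# URL and names taken from this article; check for a newer kernel first.
BASE=http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64
DEB=pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb
KERNEL_NAME=pve-5.11.22-6
KERNEL_VER=5.11.22-3-pve

DRY_RUN=${DRY_RUN:-1}    # set DRY_RUN=0 to actually run the commands
run() {
    echo "+ $*"
    [ "$DRY_RUN" = "1" ] || "$@"
}

run wget "$BASE/$DEB"
run dpkg -i "$DEB"
run /usr/share/openqrm/bin/openqrm kernel add -n "$KERNEL_NAME" -v "$KERNEL_VER" -u openqrm -p openqrm -l / -i initramfs -m csiostor
```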
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Bootstrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## mount --bind /exports/proxmox_image/ /exports/proxmox_image/&lt;br /&gt;
## mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ &lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install Ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#symlink ssh.service to sshd.service required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, service management needs to be done from outside the chroot. To find the enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. Because systemd cannot be driven from inside the chroot, we point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, further directories that are not required can be excluded. We suggest keeping the uncompressed image size to 55-60% of the tmpfs volume size allocated (4GB as below).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and is selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server: choose pve7 (the tmpfs-deployment set up previously) and leave the tick on &amp;quot;edit image details after selection&amp;quot;&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot; select the &amp;quot;proxmox_image&amp;quot; as above then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
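A small check, run on the booted node itself, that restarts only what is not already active (service names from the step above):

```shell
#!/bin/sh
# Sketch: on the booted node, restart ssh/pve-cluster only if not active.
SERVICES="ssh pve-cluster"

DOWN=""
for s in $SERVICES; do
    systemctl is-active --quiet "$s" 2>/dev/null || DOWN="$DOWN $s"
done

if [ -n "$DOWN" ]; then
    echo "restarting:$DOWN"
    systemctl restart $DOWN || echo "restart failed; inspect with journalctl -xe"
fi
```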
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=961</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=961"/>
		<updated>2024-08-18T23:44:07Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: update docs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a turnkey deployment and management platform, with over 55 plugins allowing a variety of deployment options. This article describes how to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk. This RAM disk is essentially the local storage for the server. Being memory resident, system RAM is exceptionally fast, orders of magnitude faster than NVMe. So if the node lost network connectivity it would still be able to function, as it would already be booted and running just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it's memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-d CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs) 6-8GB. 4GB for tmpfs and 2-4GB for OS and Services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration to be synchronised and maintained across reboots and power loss events. The ATU plugin is open source and written in bash; it allows the start-up sequence to be controlled, along with the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values):&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
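The download/install/register sequence above can be sketched as a single script. This is a sketch only: the run helper, the DRY_RUN switch and the function name are illustrative additions, and the kernel version and UI credentials are the examples from this page.&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: download a PVE kernel .deb, install it locally, then register it
# with openQRM. Kernel version and UI credentials are the examples from this
# page; substitute your own. Call with DRY_RUN=1 to only print the commands.
set -eu

run() {
    # Execute the given command, or just print it when DRY_RUN=1.
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

add_pve_kernel() {
    local deb="pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"
    local url="http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/${deb}"

    run wget -O "/tmp/${deb}" "${url}"
    run dpkg -i "/tmp/${deb}"
    run /usr/share/openqrm/bin/openqrm kernel add \
        -n pve-5.11.22-6 -v 5.11.22-3-pve \
        -u openqrm -p openqrm \
        -l / -i initramfs -m csiostor
}

# add_pve_kernel   # run as root on the openQRM server
```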
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating Image suitable to TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## Bootstrap with either Debian 11 or 12&lt;br /&gt;
### Debian 11:&lt;br /&gt;
#### debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
### Debian 12:&lt;br /&gt;
#### debootstrap --arch amd64 bookworm /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get update; apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow the steps (starting at &amp;quot;Install Proxmox VE&amp;quot;) at:&lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install Ceph support, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# Symlink ssh.service to sshd.service (required for pve-cluster):&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
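The bind mounts and the matching umounts above are easy to leave half-done if something fails inside the chroot. A defensive wrapper, as a sketch only (the function names and the cleanup trap are illustrative additions; paths as on this page):&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: enter the image chroot with the bind mounts from the steps above,
# and guarantee they are unmounted again even if the shell inside fails.
# IMAGE matches the path used on this page; adjust if yours differs.
set -eu

IMAGE="/exports/proxmox_image"

cleanup() {
    # Reverse order of mounting; ignore errors for mounts that never happened.
    umount "${IMAGE}/var/run/dbus" 2>/dev/null || true
    umount "${IMAGE}/proc"         2>/dev/null || true
    umount "${IMAGE}/dev/pts"      2>/dev/null || true
    umount "${IMAGE}/dev"          2>/dev/null || true
}

enter_chroot() {
    trap cleanup EXIT
    mount --bind /dev          "${IMAGE}/dev"
    mount --bind /dev/pts      "${IMAGE}/dev/pts"
    mount --bind /proc         "${IMAGE}/proc"
    mount --bind /var/run/dbus "${IMAGE}/var/run/dbus"
    chroot "${IMAGE}" /bin/bash
}

# enter_chroot   # run as root on the openQRM server
```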
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, service management needs to be done from outside the chroot. To find enabled services:&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the plugin manages cluster initialisation, they need to be started in an orderly fashion by the plugin, so we remove them from startup. Because the image is not the running system, systemctl must be pointed at the image's root directory as follows:&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image at 55-60% of the tmpfs volume size allocated (4GB as below).&lt;br /&gt;
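The 55-60% guideline can be checked before tarring. A sketch (the helper names are illustrative additions; 4096 MB matches the 4 GB tmpfs volume created in the next section):&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: check that the uncompressed image tree fits within roughly 60% of
# the tmpfs volume. Pure arithmetic helper plus a du-based wrapper.
set -eu

# fits_budget TMPFS_MB IMAGE_MB -> prints OK or TOO BIG
fits_budget() {
    local tmpfs_mb="$1" image_mb="$2"
    local limit_mb=$(( tmpfs_mb * 60 / 100 ))
    if [ "$image_mb" -le "$limit_mb" ]; then
        echo "OK: ${image_mb}MB within ${limit_mb}MB (60% of ${tmpfs_mb}MB)"
    else
        echo "TOO BIG: ${image_mb}MB exceeds ${limit_mb}MB (60% of ${tmpfs_mb}MB)"
    fi
}

check_image() {
    # Measure the image tree and compare against a 4096 MB tmpfs volume.
    local image_mb
    image_mb=$(du -sm /exports/proxmox_image | cut -f1)
    fits_budget 4096 "$image_mb"
}

# check_image
```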
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM VM or a physical machine; you will need to link this resource to a server. A resource is a blank system, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and becomes selectable for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose pve7, the tmpfs-deployment image set up previously (leave the tick on edit image details after selection).&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, and click submit.&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit.&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot into the image created above.&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
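After the node boots, the restart above can be paired with a quick status check. A sketch (the check_units helper is an illustrative addition; unit names as used on this page):&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: report which of the given systemd units are active on the booted
# tmpfs node. Intended to be run on the Proxmox node after it comes up.
set -eu

check_units() {
    local unit
    for unit in "$@"; do
        if systemctl is-active --quiet "$unit" 2>/dev/null; then
            echo "active: $unit"
        else
            echo "not active: $unit"
        fi
    done
}

# systemctl restart ssh pve-cluster
# check_units ssh pve-cluster pveproxy pvedaemon
```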
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname.&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required.&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service-management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. This makes it a vital plugin for tmpfs-based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=960</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=960"/>
		<updated>2024-07-03T02:09:38Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding Debian 12 support&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a turnkey deployment and management platform, with over 55 plugins allowing a variety of deployment options. This article describes the deployment methods to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory-resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which is essentially the local storage for the server. Being memory resident, it is exceptionally fast, an order of magnitude faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs) 6-8GB. 4GB for tmpfs and 2-4GB for OS and Services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It keeps the server's configuration synchronised so that it is maintained across reboots and power-loss events. The plugin is open source and written in bash; it controls the start-up sequence and the order in which configuration is applied and services are started, which is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) -&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb Proxmox 8 - http://download.proxmox.com/debian/dists/bookworm/pve-no-subscription/binary-amd64/proxmox-kernel-6.5.13-5-pve_6.5.13-5_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb Proxmox 7 (Stable) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb]&lt;br /&gt;
## [http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve&amp;amp;#x20;6.2.9-1&amp;amp;#x20;amd64.deb Proxmox 7 (Testing) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-6.2.9-1-pve_6.2.9-1_amd64.deb]&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values):&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating Image suitable to TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get -y install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ &lt;br /&gt;
### Proxmox 7 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### Proxmox 8 - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm&lt;br /&gt;
#### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install Ceph support, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get -y install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get -y install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#symlink ssh.service to sshd.service required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, service management needs to be done from outside the chroot. To find enabled services:&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the plugin manages cluster initialisation, they need to be started in an orderly fashion by the plugin, so we remove them from startup. Because the image is not the running system, systemctl must be pointed at the image's root directory as follows:&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image at 55-60% of the tmpfs volume size allocated (4GB as below).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM VM or a physical machine; you will need to link this resource to a server. A resource is a blank system, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and becomes selectable for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose pve7, the tmpfs-deployment image set up previously (leave the tick on edit image details after selection).&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, and click submit.&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit.&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot into the image created above.&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname.&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required.&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service-management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. This makes it a vital plugin for tmpfs-based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=959</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=959"/>
		<updated>2024-07-03T02:02:39Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding yes option to apt-get&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a turnkey deployment and management platform, with over 55 plugins allowing a variety of deployment options. This article describes the deployment methods to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory-resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which is essentially the local storage for the server. Being memory resident, it is exceptionally fast, an order of magnitude faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true, what are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it's memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster then the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs) 6-8GB. 4GB for tmpfs and 2-4GB for OS and Services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration synchronisation to be maintained across reboots and power loss events. The plugin is open source, written in bash, and controls the start-up sequence and the ordering of configuration and service starts, which is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values;&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
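The generic and concrete commands above differ only in the name/version arguments, which can be derived from the kernel .deb filename. A minimal sketch, assuming the filename follows the pve-kernel naming pattern from the download step (the parsing helper is illustrative, not part of openQRM):&lt;br /&gt;

```shell
# Derive the -n (name) and -v (version) arguments for "openqrm kernel add"
# from a pve-kernel .deb filename. Illustrative helper only.
deb="pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"

kver="${deb#pve-kernel-}"   # strip prefix  -> 5.11.22-3-pve_5.11.22-6_amd64.deb
kver="${kver%%_*}"          # kernel version -> 5.11.22-3-pve

rest="${deb#*_}"            # package part   -> 5.11.22-6_amd64.deb
kname="pve-${rest%%_*}"     # openQRM name   -> pve-5.11.22-6

echo "/usr/share/openqrm/bin/openqrm kernel add -n $kname -v $kver -u openqrm -p openqrm -l / -i initramfs -m csiostor"
```

The echoed command matches the concrete example above; swap in your own UI user and password.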
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install -y debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and add packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and add packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#symlink ssh.service to sshd.service required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, management of services needs to be done outside of the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin. We therefore remove them from startup; as systemd cannot be driven from inside the chroot, we point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, there are other directories that are not required and can be excluded. We suggest keeping the uncompressed image size at 55-60% of the tmpfs volume size allocated (4GB as below).&lt;br /&gt;
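The 55-60% guideline above can be computed directly. A small sketch (the helper name is hypothetical; sizes are in MB):&lt;br /&gt;

```shell
# Print the suggested uncompressed image size range (55-60%) for a given
# tmpfs volume size in MB. Hypothetical helper based on the guideline above.
suggest_image_size() {
  local tmpfs_mb=$1
  echo "$(( tmpfs_mb * 55 / 100 ))-$(( tmpfs_mb * 60 / 100 )) MB"
}

suggest_image_size 4096   # the 4GB volume used in this howto
```

Compare the result against "du -sm /exports/proxmox_image" before tarring.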
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource/system/server. Once a system has booted via dhcp/pxe it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for server, select the pve7 = tmpfs-deployment as previously setup (leave the tick on edit image details after selection.)&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot; select the &amp;quot;proxmox_image&amp;quot; as above then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to &amp;quot;start&amp;quot; the server, click &amp;quot;start&amp;quot;, the idle resource will then reboot and boot the image as created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may log a warning on boot; edit /etc/mailname to resolve it&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# when changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs memory resident capabilities that allow Proxmox VE to run as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising configuration and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. This is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=958</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=958"/>
		<updated>2024-01-14T19:30:41Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding about openqrm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:'''&lt;br /&gt;
&lt;br /&gt;
openQRM Enterprise is a Turn Key Deployment and Management Platform, with over 55 plugins allowing a variety of deployment options. This article describes the deployment methods to convert Proxmox into a tmpfs image, allowing servers to PXE boot and run Proxmox as a memory resident operating system requiring no attached storage. This is perfect for compute nodes and allows KVM and LXC to operate as normal. Proxmox can connect to a variety of storage options including NFS, Ceph, Gluster, iSCSI and more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run those nodes. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which is essentially the local storage for the server. Being memory resident, system RAM is exceptionally fast, around an order of magnitude faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true, what are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it's memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster then the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs) 6-8GB. 4GB for tmpfs and 2-4GB for OS and Services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration synchronisation to be maintained across reboots and power loss events. The plugin is open source, written in bash, and controls the start-up sequence and the ordering of configuration and service starts, which is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values;&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
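The generic and concrete commands above differ only in the name/version arguments, which can be derived from the kernel .deb filename. A minimal sketch, assuming the filename follows the pve-kernel naming pattern from the download step (the parsing helper is illustrative, not part of openQRM):&lt;br /&gt;

```shell
# Derive the -n (name) and -v (version) arguments for "openqrm kernel add"
# from a pve-kernel .deb filename. Illustrative helper only.
deb="pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"

kver="${deb#pve-kernel-}"   # strip prefix  -> 5.11.22-3-pve_5.11.22-6_amd64.deb
kver="${kver%%_*}"          # kernel version -> 5.11.22-3-pve

rest="${deb#*_}"            # package part   -> 5.11.22-6_amd64.deb
kname="pve-${rest%%_*}"     # openQRM name   -> pve-5.11.22-6

echo "/usr/share/openqrm/bin/openqrm kernel add -n $kname -v $kver -u openqrm -p openqrm -l / -i initramfs -m csiostor"
```

The echoed command matches the concrete example above; swap in your own UI user and password.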
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and add packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and add packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
#symlink ssh.service to sshd.service required for pve-cluster;&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, management of services needs to be done outside of the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin. We therefore remove them from startup; as systemd cannot be driven from inside the chroot, we point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, there are other directories that are not required and can be excluded. We suggest keeping the uncompressed image size at 55-60% of the tmpfs volume size allocated (4GB as below).&lt;br /&gt;
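The 55-60% guideline above can be computed directly. A small sketch (the helper name is hypothetical; sizes are in MB):&lt;br /&gt;

```shell
# Print the suggested uncompressed image size range (55-60%) for a given
# tmpfs volume size in MB. Hypothetical helper based on the guideline above.
suggest_image_size() {
  local tmpfs_mb=$1
  echo "$(( tmpfs_mb * 55 / 100 ))-$(( tmpfs_mb * 60 / 100 )) MB"
}

suggest_image_size 4096   # the 4GB volume used in this howto
```

Compare the result against "du -sm /exports/proxmox_image" before tarring.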
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource/system/server. Once a system has booted via dhcp/pxe it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for server, select the pve7 = tmpfs-deployment as previously setup (leave the tick on edit image details after selection.)&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot; select the &amp;quot;proxmox_image&amp;quot; as above then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to &amp;quot;start&amp;quot; the server, click &amp;quot;start&amp;quot;, the idle resource will then reboot and boot the image as created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may log a warning on boot; edit /etc/mailname to resolve it&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# when changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs memory resident capabilities that allow Proxmox VE to run as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising configuration and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. This is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=957</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=957"/>
		<updated>2023-12-20T23:45:20Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: disable rc.local&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run those nodes. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which is essentially the local storage for the server. Being memory resident, system RAM is exceptionally fast, around an order of magnitude faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well its memory resident, so if power is lost the local configuration would be lost. However if the node is part of a cluster then the cluster would hold the PVE configuration and if using the ATU plugin is used the configuration would be synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (running it in a KVM virtual machine is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU: 64-bit Intel EM64T or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB RAM &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM; 4GB for the tmpfs volume and 2-4GB for the OS and services.&lt;br /&gt;
* Clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration to be synchronised and maintained across reboots and power-loss events. The plugin is open source, written in bash, and controls the start-up sequence and the order in which configuration is applied and services are started, which is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values)&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
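The three kernel steps above can be wrapped in one small script. This is a minimal sketch, not official openQRM tooling: the kernel .deb name and the openqrm/openqrm credentials are the examples from this article, and RUN defaults to echo so the commands are only printed until you set RUN to empty and run it for real.

```shell
#!/bin/bash
# Sketch of step 1: download the PVE kernel, install it locally, then
# register it with openQRM. Values below are this article's examples;
# substitute your own kernel version and UI credentials.
set -eu

KERNEL_DEB="pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"
KERNEL_NAME="pve-5.11.22-6"
KERNEL_VER="5.11.22-3-pve"
OPENQRM_UI_USER="openqrm"
OPENQRM_UI_PASS="openqrm"

RUN=${RUN:-echo}   # dry run by default; set RUN= to really execute

$RUN wget "http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/${KERNEL_DEB}"
$RUN dpkg -i "${KERNEL_DEB}"
$RUN /usr/share/openqrm/bin/openqrm kernel add \
    -n "$KERNEL_NAME" -v "$KERNEL_VER" \
    -u "$OPENQRM_UI_USER" -p "$OPENQRM_UI_PASS" \
    -l / -i initramfs -m csiostor
```

Run it once as a dry run to review the commands, then again with RUN= (empty) as root to execute them.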
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image Suitable for TMPFS Boot:'''&lt;br /&gt;
# Create the image - to create an image for Proxmox VE (the image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps:&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install Ceph support, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''Set the root password: passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab:&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### Edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents:&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed by the ATU plugin&lt;br /&gt;
# Symlink ssh.service to sshd.service (required for pve-cluster):&lt;br /&gt;
## ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# Exit the chroot (type exit)&lt;br /&gt;
# Unmount the bind mounts:&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, service management needs to be done outside of the chroot. To list enabled services:&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. systemd must be pointed at the image's root directory as follows:&lt;br /&gt;
### /bin/systemctl disable rc-local --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ceph installed, disable:&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for NFS, disable:&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) Disable these services (some may not exist):&lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the image:&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the tmpfs volume size allocated (4GB below).&lt;br /&gt;
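The bind-mount and unmount bookkeeping from the steps above can be sketched as a pair of shell functions, so setup and teardown stay symmetric. This is a hedged sketch assuming the /exports/proxmox_image path used in this article; RUN defaults to echo, so it prints the mount/umount commands rather than running them (set RUN to empty and run as root to execute).

```shell
#!/bin/bash
# Sketch of the chroot preparation and cleanup from step 2. The bind
# sources match the article: /dev, /dev/pts, /proc and /var/run/dbus.
set -eu

IMAGE_ROOT="/exports/proxmox_image"
RUN=${RUN:-echo}   # dry run by default; set RUN= to really execute

prepare_chroot() {
    # bind-mount in the same order the article lists them
    for src in /dev /dev/pts /proc /var/run/dbus; do
        $RUN mount --bind "$src" "${IMAGE_ROOT}${src}"
    done
}

cleanup_chroot() {
    # unmount in reverse so /dev/pts is released before /dev
    for src in /var/run/dbus /proc /dev/pts /dev; do
        $RUN umount "${IMAGE_ROOT}${src}"
    done
}

prepare_chroot
# ... chroot "$IMAGE_ROOT", install packages, then exit the chroot ...
cleanup_chroot
```

Keeping setup and teardown in mirrored loops makes it harder to forget one of the four umounts before tarring the image.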
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate the dhcpd plugin, then the tftp plugin&lt;br /&gt;
# Activate NFS storage (if not already done)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## Name: &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network-boot a new node (either a KVM virtual machine or a physical machine); you will need to link this resource to a server. A resource is a blank system, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; in the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose pve7 (the tmpfs-deployment set up previously), leaving the tick on &amp;quot;edit image details after selection&amp;quot;&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, and click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to &amp;quot;start&amp;quot; the server: click &amp;quot;start&amp;quot; and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that support Proxmox VE as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source; the enterprise package adds commercial support and numerous additional plugins. With over 60 plugins, openQRM manages storage, networking, monitoring, cloud and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=Ceph_OSD_Creation_Failure&amp;diff=956</id>
		<title>Ceph OSD Creation Failure</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=Ceph_OSD_Creation_Failure&amp;diff=956"/>
		<updated>2023-12-19T19:38:22Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: update osds&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Ceph]]&lt;br /&gt;
&lt;br /&gt;
Sometimes the bootstrap-osd keyring cannot be found.&lt;br /&gt;
&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;ceph auth get client.bootstrap-osd &amp;gt; /var/lib/ceph/bootstrap-osd/ceph.keyring&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To rebuild /var/lib/ceph/osd, try this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;ceph-volume lvm activate --all&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=Ceph_OSD_Creation_Failure&amp;diff=955</id>
		<title>Ceph OSD Creation Failure</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=Ceph_OSD_Creation_Failure&amp;diff=955"/>
		<updated>2023-12-19T06:03:37Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding ceph volume lvm activate&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Ceph]]&lt;br /&gt;
&lt;br /&gt;
Sometimes the bootstrap-osd keyring cannot be found.&lt;br /&gt;
&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;ceph auth get client.bootstrap-osd &amp;gt; /var/lib/ceph/bootstrap-osd/ceph.keyring&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sometimes when running a tmpfs node you may lose /var/lib/ceph/osd; try this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;ceph-volume lvm activate --all&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=954</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=954"/>
		<updated>2023-12-18T00:56:39Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: disable frr pvenetcommit&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a PXE-booted, memory-resident (tmpfs) operating system.&lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled: dhcpd, tftp, nfs-storage, tmpfs-storage and atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run each node. This solution allows a compute node to PXE network-boot an operating system into a RAM disk, which then serves as the node's local storage. Being memory resident, the system is exceptionally fast; RAM is roughly an order of magnitude faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true. What are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
It is memory resident, so if power is lost the local configuration is lost with it. However, if the node is part of a cluster, the cluster holds the PVE configuration; and if the ATU plugin is used, the configuration is also synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (running it in a KVM virtual machine is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU: 64-bit Intel EM64T or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB RAM &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM; 4GB for the tmpfs volume and 2-4GB for the OS and services.&lt;br /&gt;
* Clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration to be synchronised and maintained across reboots and power-loss events. The plugin is open source, written in bash, and controls the start-up sequence and the order in which configuration is applied and services are started, which is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values)&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
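The three kernel steps above can be wrapped in one small script. This is a minimal sketch, not official openQRM tooling: the kernel .deb name and the openqrm/openqrm credentials are the examples from this article, and RUN defaults to echo so the commands are only printed until you set RUN to empty and run it for real.

```shell
#!/bin/bash
# Sketch of step 1: download the PVE kernel, install it locally, then
# register it with openQRM. Values below are this article's examples;
# substitute your own kernel version and UI credentials.
set -eu

KERNEL_DEB="pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"
KERNEL_NAME="pve-5.11.22-6"
KERNEL_VER="5.11.22-3-pve"
OPENQRM_UI_USER="openqrm"
OPENQRM_UI_PASS="openqrm"

RUN=${RUN:-echo}   # dry run by default; set RUN= to really execute

$RUN wget "http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/${KERNEL_DEB}"
$RUN dpkg -i "${KERNEL_DEB}"
$RUN /usr/share/openqrm/bin/openqrm kernel add \
    -n "$KERNEL_NAME" -v "$KERNEL_VER" \
    -u "$OPENQRM_UI_USER" -p "$OPENQRM_UI_PASS" \
    -l / -i initramfs -m csiostor
```

Run it once as a dry run to review the commands, then again with RUN= (empty) as root to execute them.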
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image Suitable for TMPFS Boot:'''&lt;br /&gt;
# Create the image - to create an image for Proxmox VE (the image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps:&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install Ceph support, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''Set the root password: passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab:&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### Edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents:&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed by the ATU plugin&lt;br /&gt;
# Symlink ssh.service to sshd.service (required for pve-cluster):&lt;br /&gt;
## ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# Exit the chroot (type exit)&lt;br /&gt;
# Unmount the bind mounts:&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, service management needs to be done outside of the chroot. To list enabled services:&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. systemd must be pointed at the image's root directory as follows:&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable frr.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ceph installed, disable:&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for NFS, disable:&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) Disable these services (some may not exist):&lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the image:&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the tmpfs volume size allocated (4GB below).&lt;br /&gt;
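The bind-mount and unmount bookkeeping from the steps above can be sketched as a pair of shell functions, so setup and teardown stay symmetric. This is a hedged sketch assuming the /exports/proxmox_image path used in this article; RUN defaults to echo, so it prints the mount/umount commands rather than running them (set RUN to empty and run as root to execute).

```shell
#!/bin/bash
# Sketch of the chroot preparation and cleanup from step 2. The bind
# sources match the article: /dev, /dev/pts, /proc and /var/run/dbus.
set -eu

IMAGE_ROOT="/exports/proxmox_image"
RUN=${RUN:-echo}   # dry run by default; set RUN= to really execute

prepare_chroot() {
    # bind-mount in the same order the article lists them
    for src in /dev /dev/pts /proc /var/run/dbus; do
        $RUN mount --bind "$src" "${IMAGE_ROOT}${src}"
    done
}

cleanup_chroot() {
    # unmount in reverse so /dev/pts is released before /dev
    for src in /var/run/dbus /proc /dev/pts /dev; do
        $RUN umount "${IMAGE_ROOT}${src}"
    done
}

prepare_chroot
# ... chroot "$IMAGE_ROOT", install packages, then exit the chroot ...
cleanup_chroot
```

Keeping setup and teardown in mirrored loops makes it harder to forget one of the four umounts before tarring the image.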
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate the dhcpd plugin, then the tftp plugin&lt;br /&gt;
# Activate NFS storage (if not already done)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## Name: &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network-boot a new node (either a KVM virtual machine or a physical machine); you will need to link this resource to a server. A resource is a blank system, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; in the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose pve7 (the tmpfs-deployment set up previously), leaving the tick on &amp;quot;edit image details after selection&amp;quot;&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, and click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to &amp;quot;start&amp;quot; the server: click &amp;quot;start&amp;quot; and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that support Proxmox VE as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source; the enterprise package adds commercial support and numerous additional plugins. With over 60 plugins, openQRM manages storage, networking, monitoring, cloud and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=953</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=953"/>
		<updated>2023-12-17T11:31:47Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: remove disable ssh.service&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE into a PXE-booted, memory resident (tmpfs) operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then serves as the local storage for the server. Being memory resident, it is exceptionally fast; system RAM is roughly an order of magnitude faster than NVMe. If the node loses network connectivity it can keep functioning, because it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB RAM &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM; 4GB for the tmpfs and 2-4GB for the OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration of the server to be maintained across reboots and power loss events. The ATU plugin is open source and written in bash; it allows the start-up sequence to be controlled, along with the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS and SERVER_NAME with the appropriate values;&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# Symlink ssh.service to sshd.service (required for pve-cluster);&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, management of services needs to be done outside of the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. As systemd cannot do this from inside the chroot, point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest the uncompressed image size be 55-60% of the allocated tmpfs volume size (4GB in this example).&lt;br /&gt;
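The 55-60% guideline above can be checked with a short script. This is a minimal sketch, assuming the image path and 4GB tmpfs volume used in the steps above (IMAGE_DIR and TMPFS_MB are illustrative names; adjust them for your layout);&lt;br /&gt;

```shell
# Sketch (assumptions): IMAGE_DIR and TMPFS_MB mirror the path and the
# 4GB volume from the steps above; override them for your own layout.
IMAGE_DIR="${IMAGE_DIR:-/exports/proxmox_image}"
TMPFS_MB="${TMPFS_MB:-4096}"          # tmpfs volume size in MB (4GB)
limit_mb=$(( TMPFS_MB * 60 / 100 ))   # suggested ceiling: 60 percent
if [ -d "$IMAGE_DIR" ]; then
  used_mb=$(du -sm "$IMAGE_DIR" | cut -f1)   # uncompressed size in MB
else
  used_mb=0                           # image not built on this host yet
fi
if [ "$used_mb" -gt "$limit_mb" ]; then
  echo "WARNING: image ${used_mb}MB is over 60 percent of the ${TMPFS_MB}MB tmpfs"
else
  echo "OK: image ${used_mb}MB fits under the ${limit_mb}MB ceiling"
fi
```
&lt;br /&gt;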
&lt;br /&gt;
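The long service-disable sequence in step 2 can also be sketched as one tolerant loop. The unit list below is an illustrative subset of the full list above; units that are absent on your image are simply skipped rather than aborting the run;&lt;br /&gt;

```shell
# Sketch: disable units against the image root in one loop. IMG_ROOT and
# the unit list are assumptions taken from the steps above.
IMG_ROOT="${IMG_ROOT:-/exports/proxmox_image}"
UNITS="pvedaemon.service pveproxy.service pve-cluster.service corosync.service ceph.target"
for u in $UNITS; do
  if /bin/systemctl disable "$u" --root "$IMG_ROOT"; then
    echo "disabled $u"
  else
    echo "skipped $u (not present or already disabled)"
  fi
done
```
&lt;br /&gt;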
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and becomes selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose pve7 (the tmpfs-deployment set up previously) and leave the tick on &amp;quot;edit image details after selection&amp;quot;&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot; select the &amp;quot;proxmox_image&amp;quot; as above then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server; click &amp;quot;start&amp;quot; and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname to resolve it&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, networking, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=Ceph_OSD_Creation_Failure&amp;diff=952</id>
		<title>Ceph OSD Creation Failure</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=Ceph_OSD_Creation_Failure&amp;diff=952"/>
		<updated>2023-12-17T05:49:47Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Ceph]]&lt;br /&gt;
&lt;br /&gt;
Sometimes the bootstrap-osd keyring can not be found.&lt;br /&gt;
&lt;br /&gt;
Run this command;&lt;br /&gt;
&lt;br /&gt;
ceph auth get client.bootstrap-osd &amp;gt; /var/lib/ceph/bootstrap-osd/ceph.keyring&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=Ceph_OSD_Creation_Failure&amp;diff=951</id>
		<title>Ceph OSD Creation Failure</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=Ceph_OSD_Creation_Failure&amp;diff=951"/>
		<updated>2023-12-17T05:49:05Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Tutorials]]&lt;br /&gt;
Sometimes the bootstrap-osd keyring can not be found.&lt;br /&gt;
&lt;br /&gt;
Run this command;&lt;br /&gt;
&lt;br /&gt;
ceph auth get client.bootstrap-osd &amp;gt; /var/lib/ceph/bootstrap-osd/ceph.keyring&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=Ceph_OSD_Creation_Failure&amp;diff=950</id>
		<title>Ceph OSD Creation Failure</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=Ceph_OSD_Creation_Failure&amp;diff=950"/>
		<updated>2023-12-17T05:48:39Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: Ceph OSD Creation Failure&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Sometimes the bootstrap-osd keyring can not be found.&lt;br /&gt;
&lt;br /&gt;
Run this command;&lt;br /&gt;
&lt;br /&gt;
ceph auth get client.bootstrap-osd &amp;gt; /var/lib/ceph/bootstrap-osd/ceph.keyring&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=949</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=949"/>
		<updated>2023-12-17T04:34:16Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: disable extra services managed through ATU plugin&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE into a PXE-booted, memory resident (tmpfs) operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then serves as the local storage for the server. Being memory resident, it is exceptionally fast; system RAM is roughly an order of magnitude faster than NVMe. If the node loses network connectivity it can keep functioning, because it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB RAM &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM; 4GB for the tmpfs and 2-4GB for the OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration of the server to be maintained across reboots and power loss events. The ATU plugin is open source and written in bash; it allows the start-up sequence to be controlled, along with the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS and SERVER_NAME with the appropriate values;&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# Symlink ssh.service to sshd.service (required for pve-cluster);&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, management of services needs to be done outside of the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/| grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. As systemd cannot do this from inside the chroot, point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service collectd.service ksm.service ksmtuned.service proxmox-boot-cleanup.service ssh.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest the uncompressed image size be 55-60% of the allocated tmpfs volume size (4GB in this example).&lt;br /&gt;
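The 55-60% guideline above can be checked with a short script. This is a minimal sketch, assuming the image path and 4GB tmpfs volume used in the steps above (IMAGE_DIR and TMPFS_MB are illustrative names; adjust them for your layout);&lt;br /&gt;

```shell
# Sketch (assumptions): IMAGE_DIR and TMPFS_MB mirror the path and the
# 4GB volume from the steps above; override them for your own layout.
IMAGE_DIR="${IMAGE_DIR:-/exports/proxmox_image}"
TMPFS_MB="${TMPFS_MB:-4096}"          # tmpfs volume size in MB (4GB)
limit_mb=$(( TMPFS_MB * 60 / 100 ))   # suggested ceiling: 60 percent
if [ -d "$IMAGE_DIR" ]; then
  used_mb=$(du -sm "$IMAGE_DIR" | cut -f1)   # uncompressed size in MB
else
  used_mb=0                           # image not built on this host yet
fi
if [ "$used_mb" -gt "$limit_mb" ]; then
  echo "WARNING: image ${used_mb}MB is over 60 percent of the ${TMPFS_MB}MB tmpfs"
else
  echo "OK: image ${used_mb}MB fits under the ${limit_mb}MB ceiling"
fi
```
&lt;br /&gt;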
&lt;br /&gt;
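The long service-disable sequence in step 2 can also be sketched as one tolerant loop. The unit list below is an illustrative subset of the full list above; units that are absent on your image are simply skipped rather than aborting the run;&lt;br /&gt;

```shell
# Sketch: disable units against the image root in one loop. IMG_ROOT and
# the unit list are assumptions taken from the steps above.
IMG_ROOT="${IMG_ROOT:-/exports/proxmox_image}"
UNITS="pvedaemon.service pveproxy.service pve-cluster.service corosync.service ceph.target"
for u in $UNITS; do
  if /bin/systemctl disable "$u" --root "$IMG_ROOT"; then
    echo "disabled $u"
  else
    echo "skipped $u (not present or already disabled)"
  fi
done
```
&lt;br /&gt;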
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it enters an &amp;quot;idle&amp;quot; state and becomes selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose pve7 (the tmpfs-deployment set up previously) and leave the tick on &amp;quot;edit image details after selection&amp;quot;&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot; select the &amp;quot;proxmox_image&amp;quot; as above then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server; click &amp;quot;start&amp;quot; and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. This is a vital plugin for tmpfs based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=948</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=948"/>
		<updated>2023-12-15T00:25:53Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding zsh-common dependency&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run each node. This solution allows a compute node to pxe network boot an operating system into a ram disk, which essentially becomes the local storage for the server. Being memory resident, the system is exceptionally fast: ram is an order of magnitude faster than NVMe. So if the node lost network connectivity it would still function, as it would already have booted and be running just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it's memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB. 4GB for tmpfs and 2-4GB for OS and services.&lt;br /&gt;
* Clustering requires co-ordinated initialisation and configuration backup; the ATU Plugin orchestrates these steps.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration of the server to be maintained through reboots and power loss events. The ATU plugin is open source, written in bash, and allows the start-up sequence to be controlled, along with the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the Kernel to openQRM, replacing KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values:&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
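As a sketch of the step above (using the example kernel name, version and default UI credentials shown), the openqrm call can be composed from shell variables and printed as a dry run before executing:&lt;br /&gt;

```shell
# Example values taken from this article; substitute your own kernel
# name/version and openQRM UI credentials.
KERNEL_NAME="pve-5.11.22-6"
KERNEL_VER="5.11.22-3-pve"
OPENQRM_UI_USER="openqrm"
OPENQRM_UI_PASS="openqrm"

# Compose the openqrm CLI call; drop the echo to execute it for real.
CMD="/usr/share/openqrm/bin/openqrm kernel add -n $KERNEL_NAME -v $KERNEL_VER -u $OPENQRM_UI_USER -p $OPENQRM_UI_PASS -l / -i initramfs -m csiostor"
echo "$CMD"
```

Printing first makes it easy to verify the substituted values before touching the openQRM server.&lt;br /&gt;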
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping ipmitool procmail zsh-common&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# Symlink ssh.service to sshd.service (required for pve-cluster);&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, management of services needs to be done from outside the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. systemd must be pointed at the image root directory, as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the tmpfs volume size allocated (4GB as below).&lt;br /&gt;
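The tar step above can be sketched with the exclude list built in a loop, making additional unneeded directories easy to add (paths as used in this article; shown as a dry run):&lt;br /&gt;

```shell
# Image root and output path as used in this article.
IMG=/exports/proxmox_image
OUT=/usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz

# Directories excluded to keep the uncompressed image within ~55-60%
# of the tmpfs volume size; extend this list as needed.
EXCLUDES="usr/src var/lib/apt/lists usr/lib/jvm usr/share/man usr/share/doc usr/share/icons"
TAR_ARGS=""
for d in $EXCLUDES; do TAR_ARGS="$TAR_ARGS --exclude=$d"; done

# Print the final command; remove the echo (and run as root) to execute.
echo tar $TAR_ARGS --numeric-owner -czf "$OUT" -C "$IMG" .
```

Using -C avoids having to cd into the image root first.&lt;br /&gt;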
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, while a Server is a configuration applied to a resource. Once a system has booted via dhcp/pxe it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: select pve7 = tmpfs-deployment as previously set up (leave the tick on edit image details after selection)&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. This is a vital plugin for tmpfs based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=947</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=947"/>
		<updated>2023-12-14T23:34:33Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: symlink ssh to sshd&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run each node. This solution allows a compute node to pxe network boot an operating system into a ram disk, which essentially becomes the local storage for the server. Being memory resident, the system is exceptionally fast: ram is an order of magnitude faster than NVMe. So if the node lost network connectivity it would still function, as it would already have booted and be running just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on this is too good to be true, what are the down sides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it's memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB. 4GB for tmpfs and 2-4GB for OS and services.&lt;br /&gt;
* Clustering requires co-ordinated initialisation and configuration backup; the ATU Plugin orchestrates these steps.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration of the server to be maintained through reboots and power loss events. The ATU plugin is open source, written in bash, and allows the start-up sequence to be controlled, along with the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the Kernel to openQRM, replacing KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values:&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping ipmitool procmail&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# Symlink ssh.service to sshd.service (required for pve-cluster);&lt;br /&gt;
##ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, management of services needs to be done from outside the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. systemd must be pointed at the image root directory, as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the tmpfs volume size allocated (4GB as below).&lt;br /&gt;
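The long run of systemctl disable commands above can be sketched as a single loop (a representative subset of the unit names is shown; some units may not exist in a given image, as noted above; shown as a dry run):&lt;br /&gt;

```shell
# Image root as used in this article.
IMG=/exports/proxmox_image

# A representative subset of the units disabled above; extend with the
# full lists from this article.
UNITS="pve-cluster.service corosync.service pve-guests.service pveproxy.service pvedaemon.service pvestatd.service ssh.service rsyslog.service"

# Print one disable command per unit; remove the echo to execute.
for u in $UNITS; do
  echo /bin/systemctl disable "$u" --root "$IMG"
done
```

Keeping the unit list in one variable makes it easy to audit what the ATU plugin will be responsible for starting later.&lt;br /&gt;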
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, while a Server is a configuration applied to a resource. Once a system has booted via dhcp/pxe it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: select pve7 = tmpfs-deployment as previously set up (leave the tick on edit image details after selection)&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. This is a vital plugin for tmpfs based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=946</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=946"/>
		<updated>2023-12-11T00:31:16Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: add procmail dependency&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network storage or locally attached storage to run those nodes. This solution allows a compute node to pxe network boot an operating system into a ram disk, which then effectively serves as the server's local storage. Being memory resident, system ram is exceptionally fast, an order of magnitude faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true, what are the downsides ?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration would be lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs): 6-8GB; 4GB for tmpfs and 2-4GB for the OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration synchronisation to be maintained during reboots and power loss events. The ATU plugin is open source and written in bash, and it controls the start-up sequence, including the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the Kernel to openQRM. (Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values.)&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create the Image - to create an image for Proxmox VE (it will be named &amp;quot;proxmox_image&amp;quot;) suitable for use as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping ipmitool procmail&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# exit the chroot (type exit)&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, management of services needs to be done from outside the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin. We therefore remove the services from startup; systemd must be pointed at the image's root directory, as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest the uncompressed image size be 55-60% of the tmpfs volume size allocated (4GB as below).&lt;br /&gt;
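As a quick check of that budget, a small shell sketch (the 60% threshold and 4GB volume size follow this guide's suggestion; the helper name is illustrative):&lt;br /&gt;

```shell
# check_tmpfs_fit USED_MB TMPFS_MB - report whether an uncompressed image
# of USED_MB megabytes stays within 60% of a TMPFS_MB tmpfs volume.
check_tmpfs_fit() {
    used_mb=$1
    tmpfs_mb=$2
    limit_mb=$((tmpfs_mb * 60 / 100))
    if [ "$used_mb" -le "$limit_mb" ]; then
        echo "fits (${used_mb}MB of ${limit_mb}MB budget)"
    else
        echo "too big (${used_mb}MB exceeds ${limit_mb}MB budget)"
    fi
}

# Example against the 4GB (4096MB) volume created below:
check_tmpfs_fit 2300 4096
```

Measure the real image with du -sm /exports/proxmox_image and pass the first field as USED_MB.&lt;br /&gt;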
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or physical machine; you will then need to link this resource to a server. A resource is a blank system, and a server is a configuration applied to a resource. Once a system has booted via dhcp/pxe it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server: choose pve7, the tmpfs-deployment set up previously (leave the tick on edit image details after selection)&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above and click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to &amp;quot;start&amp;quot; the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may print a warning on boot; edit /etc/mailname to resolve it&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview displays the tmpfs memory resident capabilities to support Proxmox VE as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. This is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_unpack_and_pack_an_initrd&amp;diff=945</id>
		<title>How to unpack and pack an initrd</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_unpack_and_pack_an_initrd&amp;diff=945"/>
		<updated>2023-11-27T22:31:08Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:tutorial]]&lt;br /&gt;
&lt;br /&gt;
Brief instructions to unpack and repack an image as an initrd;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Unpack an initrd'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;unmkinitramfs initrd .&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Pack an initrd'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cd initrd&lt;br /&gt;
&lt;br /&gt;
find . | cpio -o -H newc | gzip -9 &amp;gt; ../KERNEL_NAME.img&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=Configuring_IPMI_under_Linux_using_ipmitool&amp;diff=944</id>
		<title>Configuring IPMI under Linux using ipmitool</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=Configuring_IPMI_under_Linux_using_ipmitool&amp;diff=944"/>
		<updated>2023-11-27T22:29:48Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding category tutorial&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:tutorial]]&lt;br /&gt;
&lt;br /&gt;
﻿Under Linux, the &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; command (&amp;lt;nowiki&amp;gt;http://ipmitool.sourceforge.net/&amp;lt;/nowiki&amp;gt;) can be used for configuring IPMI for a server.&lt;br /&gt;
&lt;br /&gt;
== Contents ==&lt;br /&gt;
&lt;br /&gt;
* 1 Hardware and Software Requirements&lt;br /&gt;
* 2 LAN Configuration&lt;br /&gt;
** 2.1 ipmitool lan print 1&lt;br /&gt;
* 3 User Configuration&lt;br /&gt;
** 3.1 Users at the USER Privilege Level&lt;br /&gt;
&lt;br /&gt;
== Hardware and Software Requirements ==&lt;br /&gt;
The following example will show how to configure IPMI on a Linux server. The /dev/ipmi0 device file must exist so that configuration can be carried out. If it does not exist, you can create it as follows:&lt;br /&gt;
&lt;br /&gt;
* under SuSE, Red Hat or CentOS: &amp;lt;code&amp;gt;/etc/init.d/ipmi start&amp;lt;/code&amp;gt; (requires the OpenIPMI package. The OpenIPMI-tools package will be required later, as well.)&lt;br /&gt;
* under Debian 4: &amp;lt;code&amp;gt;/usr/share/ipmitool/ipmi.init.basic&amp;lt;/code&amp;gt; (If the error message, ''ipmi_kcs_drv not found'', appears, you will have to comment the corresponding if-condition out, see also [1].)&lt;br /&gt;
* under Debian 5: &amp;lt;code&amp;gt;modprobe ipmi_devintf; modprobe ipmi_si&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The approach described below has been tested on an Intel SR2500 under CentOS 4 using ipmitool version 1.8.7. In principle, the configuration should be similar on other systems with IPMI support.&lt;br /&gt;
&lt;br /&gt;
== ipmitool lan print 1 ==&lt;br /&gt;
You can check the configuration using &amp;lt;code&amp;gt;ipmitool lan print 1&amp;lt;/code&amp;gt;.&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan print 1&lt;br /&gt;
 Set in Progress         : Set Complete&lt;br /&gt;
 Auth Type Support       : NONE MD5 PASSWORD &lt;br /&gt;
 Auth Type Enable        : Callback : &lt;br /&gt;
                         : User     : &lt;br /&gt;
                         : Operator : &lt;br /&gt;
                         : Admin    : MD5 &lt;br /&gt;
                         : OEM      : &lt;br /&gt;
 IP Address Source       : Static Address&lt;br /&gt;
 IP Address              : 192.168.1.211&lt;br /&gt;
 Subnet Mask             : 255.255.255.0&lt;br /&gt;
 MAC Address             : 00:0e:0c:ea:92:a2&lt;br /&gt;
 SNMP Community String   : &lt;br /&gt;
 IP Header               : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10&lt;br /&gt;
 BMC ARP Control         : ARP Responses Enabled, Gratuitous ARP Disabled&lt;br /&gt;
 Gratituous ARP Intrvl   : 2.0 seconds&lt;br /&gt;
 Default Gateway IP      : 192.168.1.254&lt;br /&gt;
 Default Gateway MAC     : 00:0e:0c:aa:8e:13&lt;br /&gt;
 Backup Gateway IP       : 0.0.0.0&lt;br /&gt;
 Backup Gateway MAC      : 00:00:00:00:00:00&lt;br /&gt;
 RMCP+ Cipher Suites     : None&lt;br /&gt;
 Cipher Suite Priv Max   : XXXXXXXXXXXXXXX&lt;br /&gt;
                         :     X=Cipher Suite Unused&lt;br /&gt;
                         :     c=CALLBACK&lt;br /&gt;
                         :     u=USER&lt;br /&gt;
                         :     o=OPERATOR&lt;br /&gt;
                         :     a=ADMIN&lt;br /&gt;
                         :     O=OEM&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
&lt;br /&gt;
== LAN Configuration ==&lt;br /&gt;
The first IPMI LAN channel will now be configured. Thereby, the configured IP address can be accessed at the first LAN port for the server. For the default gateway, both its IP address and MAC address must be configured.&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 ipsrc static&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 ipaddr 192.168.1.211&lt;br /&gt;
 Setting LAN IP Address to 192.168.1.211&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 netmask 255.255.255.0&lt;br /&gt;
 Setting LAN Subnet Mask to 255.255.255.0&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 defgw ipaddr 192.168.1.254&lt;br /&gt;
 Setting LAN Default Gateway IP to 192.168.1.254&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 defgw macaddr 00:0e:0c:aa:8e:13&lt;br /&gt;
 Setting LAN Default Gateway MAC to 00:0e:0c:aa:8e:13&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 arp respond on&lt;br /&gt;
 Enabling BMC-generated ARP responses&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 auth ADMIN MD5&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 access on&lt;br /&gt;
When configuring LANs, older versions of ipmitool would not automatically reset ''Set in Progress'' to ''Set Complete''. This can be done manually using a raw command (regarding this, see &amp;lt;nowiki&amp;gt;http://www.mail-archive.com/ipmitool-devel@lists.sourceforge.net/msg00095.html&amp;lt;/nowiki&amp;gt;)&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan print 1&lt;br /&gt;
 Set in Progress         : Set In Progress&lt;br /&gt;
 [...]&lt;br /&gt;
 [root@sr2500 ~]# ipmitool raw 0x0c 1 1 0 0&lt;br /&gt;
&lt;br /&gt;
=== User Configuration ===&lt;br /&gt;
A user will now be set up with admin rights.&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user set name 2 admin&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user set password 2&lt;br /&gt;
 Password for user 2: &lt;br /&gt;
 Password for user 2: &lt;br /&gt;
 [root@sr2500 ~]# ipmitool channel setaccess 1 2 link=on ipmi=on callin=on privilege=4&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user enable 2&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
The server can now be controlled by this user as described in Using ipmitool for Remote Control of Servers.&lt;br /&gt;
&lt;br /&gt;
=== Users at the USER Privilege Level ===&lt;br /&gt;
If a user should only be used for querying sensor data, a custom privilege level can be set up for that. This user then has no rights for activating or deactivating the server, for example. A user named monitor will be created for this in the following example:&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user set name 3 monitor&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user set password 3&lt;br /&gt;
 Password for user 3: &lt;br /&gt;
 Password for user 3: &lt;br /&gt;
 [root@sr2500 ~]# ipmitool channel setaccess 1 3 link=on ipmi=on callin=on privilege=2&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user enable 3&lt;br /&gt;
 [root@sr2500 ~]# ipmitool channel getaccess 1 3&lt;br /&gt;
 Maximum User IDs     : 15&lt;br /&gt;
 Enabled User IDs     : 2&lt;br /&gt;
 &lt;br /&gt;
 User ID              : 3&lt;br /&gt;
 User Name            : monitor&lt;br /&gt;
 Fixed Name           : No&lt;br /&gt;
 Access Available     : call-in / callback&lt;br /&gt;
 Link Authentication  : enabled&lt;br /&gt;
 IPMI Messaging       : enabled&lt;br /&gt;
 Privilege Level      : USER&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
The importance of the various privilege numbers will be displayed when &amp;lt;code&amp;gt;ipmitool channel&amp;lt;/code&amp;gt; is called without any additional parameters:&lt;br /&gt;
 [root@sr2500 ~]# ipmitool channel&lt;br /&gt;
 Channel Commands: authcap   &amp;lt;channel number&amp;gt; &amp;lt;max privilege&amp;gt;&lt;br /&gt;
                   getaccess &amp;lt;channel number&amp;gt; [user id]&lt;br /&gt;
                   setaccess &amp;lt;channel number&amp;gt; &amp;lt;user id&amp;gt; [callin=on|off] [ipmi=on|off] [link=on|off] [privilege=level]&lt;br /&gt;
                   info      [channel number]&lt;br /&gt;
                   getciphers &amp;lt;ipmi | sol&amp;gt; [channel]&lt;br /&gt;
 &lt;br /&gt;
 Possible privilege levels are:&lt;br /&gt;
    1   Callback level&lt;br /&gt;
    2   User level&lt;br /&gt;
    3   Operator level&lt;br /&gt;
    4   Administrator level&lt;br /&gt;
    5   OEM Proprietary level&lt;br /&gt;
   15   No access&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
The user just created (named 'monitor') has been assigned the USER privilege level. To allow LAN access for this user, you must activate MD5 authentication for LAN access for this user group (USER privilege level):&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 auth USER MD5&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
MD5 will now also be listed as User Auth Type Enable for LAN Channel 1:&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan print 1&lt;br /&gt;
 Set in Progress         : Set Complete&lt;br /&gt;
 Auth Type Support       : NONE MD5 PASSWORD &lt;br /&gt;
 Auth Type Enable        : Callback : &lt;br /&gt;
                         : User     : MD5 &lt;br /&gt;
                         : Operator : &lt;br /&gt;
                         : Admin    : MD5 &lt;br /&gt;
                         : OEM      : &lt;br /&gt;
 IP Address Source       : Static Address&lt;br /&gt;
 IP Address              : 192.168.1.211&lt;br /&gt;
 Subnet Mask             : 255.255.255.0&lt;br /&gt;
 MAC Address             : 00:0e:0c:ea:92:a2&lt;br /&gt;
 SNMP Community String   : &lt;br /&gt;
 IP Header               : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10&lt;br /&gt;
 BMC ARP Control         : ARP Responses Enabled, Gratuitous ARP Disabled&lt;br /&gt;
 Gratituous ARP Intrvl   : 2.0 seconds&lt;br /&gt;
 Default Gateway IP      : 192.168.1.254&lt;br /&gt;
 Default Gateway MAC     : 00:0e:0c:aa:8e:13&lt;br /&gt;
 Backup Gateway IP       : 0.0.0.0&lt;br /&gt;
 Backup Gateway MAC      : 00:00:00:00:00:00&lt;br /&gt;
 RMCP+ Cipher Suites     : 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14&lt;br /&gt;
 Cipher Suite Priv Max   : XXXXXXXXXXXXXXX&lt;br /&gt;
                         :     X=Cipher Suite Unused&lt;br /&gt;
                         :     c=CALLBACK&lt;br /&gt;
                         :     u=USER&lt;br /&gt;
                         :     o=OPERATOR&lt;br /&gt;
                         :     a=ADMIN&lt;br /&gt;
                         :     O=OEM&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
'''Please specify the option &amp;quot;-L USER&amp;quot; for ipmitool when using a user with USER privilege.''' Otherwise you will get an error message stating:&lt;br /&gt;
 Activate Session error: Requested privilege level exceeds limit&lt;br /&gt;
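For example, a remote sensor query as the 'monitor' user would be invoked along these lines (a sketch: the host IP matches the LAN configuration above, and the helper function name is illustrative, not part of ipmitool):&lt;br /&gt;

```shell
# ipmi_monitor_cmd HOST COMMAND... - assemble the remote ipmitool
# invocation for the 'monitor' user, passing -L USER so the session is
# requested at the USER privilege level rather than the default ADMIN.
ipmi_monitor_cmd() {
    host=$1
    shift
    echo "ipmitool -I lan -H $host -U monitor -L USER $*"
}

# The command to read sensor data from the BMC configured above
# (ipmitool will prompt for the password interactively):
ipmi_monitor_cmd 192.168.1.211 sdr list
```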
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Attribution: https://www.thomas-krenn.com/en/wiki/Configuring_IPMI_under_Linux_using_ipmitool&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=Configuring_IPMI_under_Linux_using_ipmitool&amp;diff=943</id>
		<title>Configuring IPMI under Linux using ipmitool</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=Configuring_IPMI_under_Linux_using_ipmitool&amp;diff=943"/>
		<updated>2023-11-27T22:29:21Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: Configuring IPMI under Linux using ipmitool&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
﻿Under Linux, the &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; command (&amp;lt;nowiki&amp;gt;http://ipmitool.sourceforge.net/&amp;lt;/nowiki&amp;gt;) can be used for configuring IPMI for a server.&lt;br /&gt;
&lt;br /&gt;
== Contents ==&lt;br /&gt;
&lt;br /&gt;
* 1 Hardware and Software Requirements&lt;br /&gt;
* 2 LAN Configuration&lt;br /&gt;
** 2.1 ipmitool lan print 1&lt;br /&gt;
* 3 User Configuration&lt;br /&gt;
** 3.1 Users at the USER Privilege Level&lt;br /&gt;
&lt;br /&gt;
== Hardware and Software Requirements ==&lt;br /&gt;
The following example will show how to configure IPMI on a Linux server. The /dev/ipmi0 device file must exist so that configuration can be carried out. If it does not exist, you can create it as follows:&lt;br /&gt;
&lt;br /&gt;
* under SuSE, Red Hat or CentOS: &amp;lt;code&amp;gt;/etc/init.d/ipmi start&amp;lt;/code&amp;gt; (requires the OpenIPMI package. The OpenIPMI-tools package will be required later, as well.)&lt;br /&gt;
* under Debian 4: &amp;lt;code&amp;gt;/usr/share/ipmitool/ipmi.init.basic&amp;lt;/code&amp;gt; (If the error message, ''ipmi_kcs_drv not found'', appears, you will have to comment the corresponding if-condition out, see also [1].)&lt;br /&gt;
* under Debian 5: &amp;lt;code&amp;gt;modprobe ipmi_devintf; modprobe ipmi_si&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The approach described below has been tested on an Intel SR2500 under CentOS 4 using ipmitool version 1.8.7. In principle, the configuration should be similar on other systems with IPMI support.&lt;br /&gt;
&lt;br /&gt;
== ipmitool lan print 1 ==&lt;br /&gt;
You can check the configuration using &amp;lt;code&amp;gt;ipmitool lan print 1&amp;lt;/code&amp;gt;.&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan print 1&lt;br /&gt;
 Set in Progress         : Set Complete&lt;br /&gt;
 Auth Type Support       : NONE MD5 PASSWORD &lt;br /&gt;
 Auth Type Enable        : Callback : &lt;br /&gt;
                         : User     : &lt;br /&gt;
                         : Operator : &lt;br /&gt;
                         : Admin    : MD5 &lt;br /&gt;
                         : OEM      : &lt;br /&gt;
 IP Address Source       : Static Address&lt;br /&gt;
 IP Address              : 192.168.1.211&lt;br /&gt;
 Subnet Mask             : 255.255.255.0&lt;br /&gt;
 MAC Address             : 00:0e:0c:ea:92:a2&lt;br /&gt;
 SNMP Community String   : &lt;br /&gt;
 IP Header               : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10&lt;br /&gt;
 BMC ARP Control         : ARP Responses Enabled, Gratuitous ARP Disabled&lt;br /&gt;
 Gratituous ARP Intrvl   : 2.0 seconds&lt;br /&gt;
 Default Gateway IP      : 192.168.1.254&lt;br /&gt;
 Default Gateway MAC     : 00:0e:0c:aa:8e:13&lt;br /&gt;
 Backup Gateway IP       : 0.0.0.0&lt;br /&gt;
 Backup Gateway MAC      : 00:00:00:00:00:00&lt;br /&gt;
 RMCP+ Cipher Suites     : None&lt;br /&gt;
 Cipher Suite Priv Max   : XXXXXXXXXXXXXXX&lt;br /&gt;
                         :     X=Cipher Suite Unused&lt;br /&gt;
                         :     c=CALLBACK&lt;br /&gt;
                         :     u=USER&lt;br /&gt;
                         :     o=OPERATOR&lt;br /&gt;
                         :     a=ADMIN&lt;br /&gt;
                         :     O=OEM&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
&lt;br /&gt;
== LAN Configuration ==&lt;br /&gt;
The first IPMI LAN channel will now be configured. Thereby, the configured IP address can be accessed at the first LAN port for the server. For the default gateway, both its IP address and MAC address must be configured.&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 ipsrc static&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 ipaddr 192.168.1.211&lt;br /&gt;
 Setting LAN IP Address to 192.168.1.211&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 netmask 255.255.255.0&lt;br /&gt;
 Setting LAN Subnet Mask to 255.255.255.0&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 defgw ipaddr 192.168.1.254&lt;br /&gt;
 Setting LAN Default Gateway IP to 192.168.1.254&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 defgw macaddr 00:0e:0c:aa:8e:13&lt;br /&gt;
 Setting LAN Default Gateway MAC to 00:0e:0c:aa:8e:13&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 arp respond on&lt;br /&gt;
 Enabling BMC-generated ARP responses&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 auth ADMIN MD5&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 access on&lt;br /&gt;
When configuring LANs, older versions of ipmitool would not automatically reset ''Set in Progress'' to ''Set Complete''. This can be done manually using a raw command (regarding this, see &amp;lt;nowiki&amp;gt;http://www.mail-archive.com/ipmitool-devel@lists.sourceforge.net/msg00095.html&amp;lt;/nowiki&amp;gt;)&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan print 1&lt;br /&gt;
 Set in Progress         : Set In Progress&lt;br /&gt;
 [...]&lt;br /&gt;
 [root@sr2500 ~]# ipmitool raw 0x0c 1 1 0 0&lt;br /&gt;
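The check-and-clear sequence above can be scripted. A minimal sketch, using sample text in place of a live query so it runs without a BMC (the raw command is only echoed, not executed):&lt;br /&gt;

```shell
# Extract the "Set in Progress" state from `ipmitool lan print 1` output;
# a sample line stands in for a live query here.
sample=' Set in Progress         : Set In Progress'
state=$(printf '%s\n' "$sample" | awk -F' : ' '/Set in Progress/ {print $2}')
if [ "$state" = "Set In Progress" ]; then
  # On a real BMC you would run this command instead of echoing it:
  echo "ipmitool raw 0x0c 1 1 0 0"
fi
```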
&lt;br /&gt;
=== User Configuration ===&lt;br /&gt;
A user will now be set up with admin rights.&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user set name 2 admin&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user set password 2&lt;br /&gt;
 Password for user 2: &lt;br /&gt;
 Password for user 2: &lt;br /&gt;
 [root@sr2500 ~]# ipmitool channel setaccess 1 2 link=on ipmi=on callin=on privilege=4&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user enable 2&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
The server can now be controlled by this user as described in Using ipmitool for Remote Control of Servers.&lt;br /&gt;
&lt;br /&gt;
=== Users at the USER Privilege Level ===&lt;br /&gt;
If a user should only be used for querying sensor data, a custom privilege level can be set up for that. Such a user then has no rights to activate or deactivate the server, for example. A user named monitor will be created for this in the following example:&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user set name 3 monitor&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user set password 3&lt;br /&gt;
 Password for user 3: &lt;br /&gt;
 Password for user 3: &lt;br /&gt;
 [root@sr2500 ~]# ipmitool channel setaccess 1 3 link=on ipmi=on callin=on privilege=2&lt;br /&gt;
 [root@sr2500 ~]# ipmitool user enable 3&lt;br /&gt;
 [root@sr2500 ~]# ipmitool channel getaccess 1 3&lt;br /&gt;
 Maximum User IDs     : 15&lt;br /&gt;
 Enabled User IDs     : 2&lt;br /&gt;
 &lt;br /&gt;
 User ID              : 3&lt;br /&gt;
 User Name            : monitor&lt;br /&gt;
 Fixed Name           : No&lt;br /&gt;
 Access Available     : call-in / callback&lt;br /&gt;
 Link Authentication  : enabled&lt;br /&gt;
 IPMI Messaging       : enabled&lt;br /&gt;
 Privilege Level      : USER&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
The meaning of the various privilege levels is displayed when &amp;lt;code&amp;gt;ipmitool channel&amp;lt;/code&amp;gt; is called without any additional parameters:&lt;br /&gt;
 [root@sr2500 ~]# ipmitool channel&lt;br /&gt;
 Channel Commands: authcap   &amp;lt;channel number&amp;gt; &amp;lt;max privilege&amp;gt;&lt;br /&gt;
                   getaccess &amp;lt;channel number&amp;gt; [user id]&lt;br /&gt;
                   setaccess &amp;lt;channel number&amp;gt; &amp;lt;user id&amp;gt; [callin=on|off] [ipmi=on|off] [link=on|off] [privilege=level]&lt;br /&gt;
                   info      [channel number]&lt;br /&gt;
                   getciphers &amp;lt;ipmi | sol&amp;gt; [channel]&lt;br /&gt;
 &lt;br /&gt;
 Possible privilege levels are:&lt;br /&gt;
    1   Callback level&lt;br /&gt;
    2   User level&lt;br /&gt;
    3   Operator level&lt;br /&gt;
    4   Administrator level&lt;br /&gt;
    5   OEM Proprietary level&lt;br /&gt;
   15   No access&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
The user just created (named 'monitor') has been assigned the USER privilege level. To allow LAN access for this user, you must enable MD5 authentication for the USER privilege level on the LAN channel:&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan set 1 auth USER MD5&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
MD5 will now also be listed as User Auth Type Enable for LAN Channel 1:&lt;br /&gt;
 [root@sr2500 ~]# ipmitool lan print 1&lt;br /&gt;
 Set in Progress         : Set Complete&lt;br /&gt;
 Auth Type Support       : NONE MD5 PASSWORD &lt;br /&gt;
 Auth Type Enable        : Callback : &lt;br /&gt;
                         : User     : MD5 &lt;br /&gt;
                         : Operator : &lt;br /&gt;
                         : Admin    : MD5 &lt;br /&gt;
                         : OEM      : &lt;br /&gt;
 IP Address Source       : Static Address&lt;br /&gt;
 IP Address              : 192.168.1.211&lt;br /&gt;
 Subnet Mask             : 255.255.255.0&lt;br /&gt;
 MAC Address             : 00:0e:0c:ea:92:a2&lt;br /&gt;
 SNMP Community String   : &lt;br /&gt;
 IP Header               : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10&lt;br /&gt;
 BMC ARP Control         : ARP Responses Enabled, Gratuitous ARP Disabled&lt;br /&gt;
 Gratituous ARP Intrvl   : 2.0 seconds&lt;br /&gt;
 Default Gateway IP      : 192.168.1.254&lt;br /&gt;
 Default Gateway MAC     : 00:0e:0c:aa:8e:13&lt;br /&gt;
 Backup Gateway IP       : 0.0.0.0&lt;br /&gt;
 Backup Gateway MAC      : 00:00:00:00:00:00&lt;br /&gt;
 RMCP+ Cipher Suites     : 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14&lt;br /&gt;
 Cipher Suite Priv Max   : XXXXXXXXXXXXXXX&lt;br /&gt;
                         :     X=Cipher Suite Unused&lt;br /&gt;
                         :     c=CALLBACK&lt;br /&gt;
                         :     u=USER&lt;br /&gt;
                         :     o=OPERATOR&lt;br /&gt;
                         :     a=ADMIN&lt;br /&gt;
                         :     O=OEM&lt;br /&gt;
 [root@sr2500 ~]# &lt;br /&gt;
'''Please specify the option &amp;quot;-L USER&amp;quot; for ipmitool when using a user with USER privilege.''' Otherwise you will get an error message stating:&lt;br /&gt;
 Activate Session error: Requested privilege level exceeds limit&lt;br /&gt;
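For example, a dry-run sketch of such a query (the host is the BMC address configured earlier; the command is echoed rather than executed, so no BMC is needed):&lt;br /&gt;

```shell
# Query sensor data remotely as the restricted 'monitor' user.
# Without -L USER the session negotiation fails as described above.
HOST=192.168.1.211                       # BMC address configured earlier
cmd="ipmitool -I lan -H $HOST -U monitor -L USER sdr list"
echo "$cmd"                              # dry run: review before executing
```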
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Attribution: https://www.thomas-krenn.com/en/wiki/Configuring_IPMI_under_Linux_using_ipmitool&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=942</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=942"/>
		<updated>2023-11-27T02:38:42Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: add ipmitool&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, those nodes no longer need network or attached storage to run. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then serves as the server's local storage. Being memory resident, system RAM is exceptionally fast, orders of magnitude faster than NVMe. Even if the node loses network connectivity it can keep functioning, since the operating system is already booted and running as if it had locally attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true. What are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it's memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU: 64-bit Intel EM64T or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB; 4GB for tmpfs and 2-4GB for OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration synchronisation to be maintained across reboots and power-loss events. The ATU plugin is open source, written in bash, and allows the start-up sequence to be controlled, including the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values):&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
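The two invocations above differ only in their parameters. As an illustration, a hypothetical wrapper that assembles the command line for review before running it on a real openQRM server:&lt;br /&gt;

```shell
# Assemble the `openqrm kernel add` command line from its variable parts.
# build_kernel_add is a hypothetical helper; it only prints the command
# so it can be reviewed before execution.
build_kernel_add() {
  printf '/usr/share/openqrm/bin/openqrm kernel add -n %s -v %s -u %s -p %s -l / -i initramfs -m csiostor\n' \
    "$1" "$2" "$3" "$4"
}
cmd=$(build_kernel_add pve-5.11.22-6 5.11.22-3-pve openqrm openqrm)
echo "$cmd"
```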
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create the image - to create an image for Proxmox VE (the image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps:&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping ipmitool&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, management of services needs to be done outside of the chroot. To list enabled services:&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin. We therefore remove them from startup; because systemd cannot be driven from inside the chroot, we point systemctl at the image's root directory as follows:&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
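The 55-60% guideline can be checked with simple arithmetic before deploying. A sketch, with a placeholder image size you would obtain from du -sm on a real build:&lt;br /&gt;

```shell
# Check that the uncompressed image fits the suggested 55-60% of the
# tmpfs volume. Sizes are in MiB; image_mb is a placeholder value
# (on a real build: image_mb=$(du -sm /exports/proxmox_image | cut -f1)).
tmpfs_mb=4096          # 4GB tmpfs volume as created in step 3 below
image_mb=2300          # placeholder uncompressed image size
limit_mb=$(( tmpfs_mb * 60 / 100 ))
if [ "$image_mb" -le "$limit_mb" ]; then
  echo "image fits within the ${limit_mb} MiB limit"
else
  echo "image exceeds the ${limit_mb} MiB limit"
fi
```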
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a server is a configuration applied to a resource. When a system has booted via DHCP/PXE it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose pve7 (the tmpfs-deployment set up previously) and leave the tick on &amp;quot;edit image details after selection&amp;quot;.&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot; select the &amp;quot;proxmox_image&amp;quot; as above then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot; and the idle resource will reboot and boot the image created above.&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may log a warning on boot; edit /etc/mailname to resolve it.&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required.&lt;br /&gt;
&lt;br /&gt;
This technology preview displays the tmpfs memory resident capabilities to support Proxmox VE as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. This is a vital plugin for tmpfs-based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package available for commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_unpack_and_pack_an_initrd&amp;diff=941</id>
		<title>How to unpack and pack an initrd</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_unpack_and_pack_an_initrd&amp;diff=941"/>
		<updated>2023-11-26T22:47:16Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:tutorial]]&lt;br /&gt;
&lt;br /&gt;
Brief instructions to unpack and repack an image as an initrd:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Unpack an initrd'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;unmkinitramfs initrd .&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
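unmkinitramfs copes with compressed images transparently, but when scripting it can help to detect the compression first. A sketch checking for the gzip magic bytes, using a temporary file in place of a real initrd (prepended microcode archives are not handled by this simple check):&lt;br /&gt;

```shell
# Identify whether a file begins with the gzip magic bytes (1f 8b);
# a gzip header written to a temp file stands in for a real initrd here.
f=$(mktemp)
printf '\037\213\010' > "$f"           # gzip magic plus deflate method byte
magic=$(od -An -tx1 -N2 "$f" | tr -d ' \n')
case "$magic" in
  1f8b) kind=gzip ;;
  *)    kind=other ;;
esac
echo "$kind"
rm -f "$f"
```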
&lt;br /&gt;
'''Pack an initrd'''&lt;br /&gt;
&lt;br /&gt;
cd initrd&lt;br /&gt;
&lt;br /&gt;
find . | cpio -o -H newc | gzip -9 &amp;gt; ../KERNEL_NAME.img&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_unpack_and_pack_an_initrd&amp;diff=940</id>
		<title>How to unpack and pack an initrd</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_unpack_and_pack_an_initrd&amp;diff=940"/>
		<updated>2023-11-26T22:46:49Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding initrd&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Brief instructions to unpack and repack an image as an initrd:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Unpack an initrd'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;unmkinitramfs initrd .&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Pack an initrd'''&lt;br /&gt;
&lt;br /&gt;
cd initrd&lt;br /&gt;
&lt;br /&gt;
find . | cpio -o -H newc | gzip -9 &amp;gt; ../KERNEL_NAME.img&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_unpack_and_pack_a_debian_archive&amp;diff=939</id>
		<title>How to unpack and pack a debian archive</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_unpack_and_pack_a_debian_archive&amp;diff=939"/>
		<updated>2023-11-26T22:33:02Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:tutorial]]&lt;br /&gt;
&lt;br /&gt;
The primary command to manipulate deb packages is &amp;lt;code&amp;gt;dpkg-deb&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To unpack the package, create an empty directory and switch to it, then run &amp;lt;code&amp;gt;dpkg-deb&amp;lt;/code&amp;gt; to extract its control information and the package files. Use &amp;lt;code&amp;gt;dpkg-deb -b&amp;lt;/code&amp;gt; to rebuild the package.&lt;br /&gt;
 &amp;lt;code&amp;gt;mkdir tmp&lt;br /&gt;
 dpkg-deb -R original.deb tmp&lt;br /&gt;
 # edit DEBIAN/postinst&lt;br /&gt;
 dpkg-deb -b tmp fixed.deb&amp;lt;/code&amp;gt;&lt;br /&gt;
Beware that unless your script is running as root, the files' permissions and ownership will be corrupted at the extraction stage. One way to avoid this is to run your script under &amp;lt;code&amp;gt;fakeroot&amp;lt;/code&amp;gt;. Note that you need to run the whole sequence under &amp;lt;code&amp;gt;fakeroot&amp;lt;/code&amp;gt;, not each &amp;lt;code&amp;gt;dpkg-deb&amp;lt;/code&amp;gt; individually, since it's the &amp;lt;code&amp;gt;fakeroot&amp;lt;/code&amp;gt; process that keeps the memory of the permissions of the files that can't be created as they are.&lt;br /&gt;
 &amp;lt;code&amp;gt;fakeroot sh -c '&lt;br /&gt;
   mkdir tmp&lt;br /&gt;
   dpkg-deb -R original.deb tmp&lt;br /&gt;
   # edit DEBIAN/postinst&lt;br /&gt;
   dpkg-deb -b tmp fixed.deb&lt;br /&gt;
 '&amp;lt;/code&amp;gt;&lt;br /&gt;
Rather than mess with permissions, you can keep the data archive intact and modify only the control archive. &amp;lt;code&amp;gt;dpkg-deb&amp;lt;/code&amp;gt; doesn't provide a way to do that. Fortunately, deb packages are in a standard format: they're &amp;lt;code&amp;gt;ar&amp;lt;/code&amp;gt; archives. So you can use &amp;lt;code&amp;gt;ar&amp;lt;/code&amp;gt; to extract the control archive, modify its files, and use &amp;lt;code&amp;gt;ar&amp;lt;/code&amp;gt; again to replace the control archive with a new version.&lt;br /&gt;
 &amp;lt;code&amp;gt;mkdir tmp&lt;br /&gt;
 cd tmp&lt;br /&gt;
 ar p ../original.deb control.tar.gz | tar -xz&lt;br /&gt;
 # edit postinst&lt;br /&gt;
 cp ../original.deb ../fixed.deb&lt;br /&gt;
 tar czf control.tar.gz *[!z]&lt;br /&gt;
 ar r ../fixed.deb control.tar.gz&amp;lt;/code&amp;gt;&lt;br /&gt;
You should '''add a changelog entry and change the version number''' if you modify anything in the package. The infrastructure to manipulate Debian packages assumes that if two packages have the same name and version, they're the same package. Add a suffix to the ''debian_revision'' part at the end of the version number; for sorting reasons the suffix should start with &amp;lt;code&amp;gt;~&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;1.2.3-4.1&amp;lt;/code&amp;gt; becomes &amp;lt;code&amp;gt;1.2.3-4.1~johnjumper1&amp;lt;/code&amp;gt;.&lt;br /&gt;
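The recommended version bump can be applied with a one-line sed. A sketch against a sample control line (the ~local1 suffix is illustrative; on a real package you would edit tmp/DEBIAN/control in place, e.g. with sed -i):&lt;br /&gt;

```shell
# Append a local suffix to the Version field, as recommended above.
# Applied here to a sample line rather than a real control file.
line='Version: 1.2.3-4.1'
newline=$(printf '%s\n' "$line" | sed 's/^\(Version: .*\)$/\1~local1/')
echo "$newline"
```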
&lt;br /&gt;
Instead of using shell tools, you can use Emacs. The &amp;lt;code&amp;gt;dpkg-dev-el&amp;lt;/code&amp;gt; package (which is its own upstream as this is a native Debian package) contains modes to edit &amp;lt;code&amp;gt;.deb&amp;lt;/code&amp;gt; files and to edit Debian changelogs. Emacs can be used interactively or scripted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Attribution: https://unix.stackexchange.com/questions/138188/easily-unpack-deb-edit-postinst-and-repack-deb&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_unpack_and_pack_a_debian_archive&amp;diff=938</id>
		<title>How to unpack and pack a debian archive</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_unpack_and_pack_a_debian_archive&amp;diff=938"/>
		<updated>2023-11-26T22:31:57Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding how to unpack and pack a debian archive&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
The primary command to manipulate deb packages is &amp;lt;code&amp;gt;dpkg-deb&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To unpack the package, create an empty directory and switch to it, then run &amp;lt;code&amp;gt;dpkg-deb&amp;lt;/code&amp;gt; to extract its control information and the package files. Use &amp;lt;code&amp;gt;dpkg-deb -b&amp;lt;/code&amp;gt; to rebuild the package.&lt;br /&gt;
 &amp;lt;code&amp;gt;mkdir tmp&lt;br /&gt;
 dpkg-deb -R original.deb tmp&lt;br /&gt;
 # edit DEBIAN/postinst&lt;br /&gt;
 dpkg-deb -b tmp fixed.deb&amp;lt;/code&amp;gt;&lt;br /&gt;
Beware that unless your script is running as root, the files' permissions and ownership will be corrupted at the extraction stage. One way to avoid this is to run your script under &amp;lt;code&amp;gt;fakeroot&amp;lt;/code&amp;gt;. Note that you need to run the whole sequence under &amp;lt;code&amp;gt;fakeroot&amp;lt;/code&amp;gt;, not each &amp;lt;code&amp;gt;dpkg-deb&amp;lt;/code&amp;gt; individually, since it's the &amp;lt;code&amp;gt;fakeroot&amp;lt;/code&amp;gt; process that keeps the memory of the permissions of the files that can't be created as they are.&lt;br /&gt;
 &amp;lt;code&amp;gt;fakeroot sh -c '&lt;br /&gt;
   mkdir tmp&lt;br /&gt;
   dpkg-deb -R original.deb tmp&lt;br /&gt;
   # edit DEBIAN/postinst&lt;br /&gt;
   dpkg-deb -b tmp fixed.deb&lt;br /&gt;
 '&amp;lt;/code&amp;gt;&lt;br /&gt;
Rather than mess with permissions, you can keep the data archive intact and modify only the control archive. &amp;lt;code&amp;gt;dpkg-deb&amp;lt;/code&amp;gt; doesn't provide a way to do that. Fortunately, deb packages are in a standard format: they're &amp;lt;code&amp;gt;ar&amp;lt;/code&amp;gt; archives. So you can use &amp;lt;code&amp;gt;ar&amp;lt;/code&amp;gt; to extract the control archive, modify its files, and use &amp;lt;code&amp;gt;ar&amp;lt;/code&amp;gt; again to replace the control archive with a new version.&lt;br /&gt;
 &amp;lt;code&amp;gt;mkdir tmp&lt;br /&gt;
 cd tmp&lt;br /&gt;
 ar p ../original.deb control.tar.gz | tar -xz&lt;br /&gt;
 # edit postinst&lt;br /&gt;
 cp ../original.deb ../fixed.deb&lt;br /&gt;
 tar czf control.tar.gz *[!z]&lt;br /&gt;
 ar r ../fixed.deb control.tar.gz&amp;lt;/code&amp;gt;&lt;br /&gt;
You should '''add a changelog entry and change the version number''' if you modify anything in the package. The infrastructure to manipulate Debian packages assumes that if two packages have the same name and version, they're the same package. Add a suffix to the ''debian_revision'' part at the end of the version number; for sorting reasons the suffix should start with &amp;lt;code&amp;gt;~&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;1.2.3-4.1&amp;lt;/code&amp;gt; becomes &amp;lt;code&amp;gt;1.2.3-4.1~johnjumper1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Instead of using shell tools, you can use Emacs. The &amp;lt;code&amp;gt;dpkg-dev-el&amp;lt;/code&amp;gt; package (which is its own upstream as this is a native Debian package) contains modes to edit &amp;lt;code&amp;gt;.deb&amp;lt;/code&amp;gt; files and to edit Debian changelogs. Emacs can be used interactively or scripted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Attribution: https://unix.stackexchange.com/questions/138188/easily-unpack-deb-edit-postinst-and-repack-deb&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=937</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=937"/>
		<updated>2023-11-25T22:06:18Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: disable postfix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled: dhcpd, tftp, nfs-storage, tmpfs-storage and atu (optional; available in the enterprise package).&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run those nodes. This solution allows a compute node to PXE network-boot an operating system into a RAM disk, which is essentially the local storage for the server. Being memory resident, the system is exceptionally fast; RAM is roughly an order of magnitude faster than NVMe. And if the node loses network connectivity it can still function, because it has already booted and runs just as if it had locally attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true; what are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM virtual machine is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU: 64-bit Intel EM64T or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1 GB RAM &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8 GB RAM; 4 GB for the tmpfs volume and 2-4 GB for the OS and services.&lt;br /&gt;
* Clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration to be maintained and synchronised across reboots and power-loss events. The plugin is open source, written in bash, and controls the start-up sequence, ordering configuration and service starts; this is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS and SERVER_NAME with the appropriate values, e.g.:&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image Suitable for tmpfs Boot:'''&lt;br /&gt;
# Create the image. To create an image for Proxmox VE (the image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps:&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents:&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# Exit the chroot by typing exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, management of services needs to be done outside the chroot. To find enabled services:&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. systemd must be pointed at the image's root directory, as follows:&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket postfix --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image at 55-60% of the tmpfs volume size allocated (4 GB as below).&lt;br /&gt;
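The tar-and-size budgeting described above can be sketched as a small script. This is illustrative only: it runs against a throwaway tree instead of /exports/proxmox_image, the exclude list is abbreviated, and the 60% figure follows the guideline above for a 4 GB tmpfs volume.&lt;br /&gt;

```shell
set -e
root=/tmp/proxmox_image_demo           # stand-in for /exports/proxmox_image
rm -rf "$root"
mkdir -p "$root/etc" "$root/usr/share/doc" "$root/usr/share/man"
printf 'pve-node\n' > "$root/etc/hostname"
printf 'not needed in the image\n' > "$root/usr/share/doc/README"

# Pack the tree with the same style of excludes as in the article; GNU tar
# exclusion patterns are unanchored by default, so usr/share/doc matches
# ./usr/share/doc inside the archive.
out=/tmp/proxmox_image_demo.tgz
tar --exclude=usr/share/man --exclude=usr/share/doc \
    --numeric-owner -C "$root" -czf "$out" .

# Sum the uncompressed payload and check it against 60% of a 4 GiB volume.
size=$(tar -tzvf "$out" | awk '{ s += $3 } END { print s }')
budget=$(( 4 * 1024 * 1024 * 1024 * 60 / 100 ))
test "$size" -le "$budget"
echo "uncompressed $size bytes fits in $budget byte budget"
```

The same size check can be run against the real proxmox_image.tgz before copying it into /usr/share/openqrm/web/boot-service/tmpfs/.&lt;br /&gt;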
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate the dhcpd plugin, then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network-boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server: pve7 (the tmpfs-deployment set up previously), leaving the tick on edit image details after selection&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6, then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may log a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service-management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=936</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=936"/>
		<updated>2023-11-25T09:35:45Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: disable pvescheduler&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a PXE-booted, memory-resident (tmpfs) operating system.&lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled: dhcpd, tftp, nfs-storage, tmpfs-storage and atu (optional; available in the enterprise package).&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run those nodes. This solution allows a compute node to PXE network-boot an operating system into a RAM disk, which is essentially the local storage for the server. Being memory resident, the system is exceptionally fast; RAM is roughly an order of magnitude faster than NVMe. And if the node loses network connectivity it can still function, because it has already booted and runs just as if it had locally attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true; what are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM virtual machine is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU: 64-bit Intel EM64T or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1 GB RAM &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8 GB RAM; 4 GB for the tmpfs volume and 2-4 GB for the OS and services.&lt;br /&gt;
* Clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration to be maintained and synchronised across reboots and power-loss events. The plugin is open source, written in bash, and controls the start-up sequence, ordering configuration and service starts; this is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS and SERVER_NAME with the appropriate values, e.g.:&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image Suitable for tmpfs Boot:'''&lt;br /&gt;
# Create the image. To create an image for Proxmox VE (the image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps:&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages:&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents:&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# Exit the chroot by typing exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, management of services needs to be done outside the chroot. To find enabled services:&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. systemd must be pointed at the image's root directory, as follows:&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed pvescheduler.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable nfs-blkmap iscsid.socket  --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image at 55-60% of the tmpfs volume size allocated (4 GB as below).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate the dhcpd plugin, then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network-boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server: pve7 (the tmpfs-deployment set up previously), leaving the tick on edit image details after selection&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6, then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may log a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service-management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=935</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=935"/>
		<updated>2023-11-25T09:33:53Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: disabling some extra services&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a PXE-booted, memory-resident (tmpfs) operating system.&lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled: dhcpd, tftp, nfs-storage, tmpfs-storage and atu (optional; available in the enterprise package).&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to pxe network boot an operating system into a ram disk. This ram disk is essentially the local storage for the server. Being memory resident the system ram is exceptionally fast, several times faster in order of magnitude than NVMe. So if the node lost network connectivity it would still be able to function as the node would have already been booted and running just like it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true. What are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
It is memory resident, so if power is lost the local configuration is lost too. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is also synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB, i.e. 4GB for tmpfs and 2-4GB for the OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
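The node memory budget above can be sketched in a few lines of shell; the values are this tutorial's suggested minimums, not measured requirements.&lt;br /&gt;

```shell
# Node memory budget from the suggested minimums above:
# a 4 GB tmpfs volume plus 2-4 GB for the OS and services.
TMPFS_GB=4
OS_MIN_GB=2
OS_MAX_GB=4
MIN_NODE_GB=$((TMPFS_GB + OS_MIN_GB))
MAX_NODE_GB=$((TMPFS_GB + OS_MAX_GB))
echo "provision ${MIN_NODE_GB}-${MAX_NODE_GB} GB per tmpfs-booted node"
```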
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It keeps the server's configuration synchronised across reboots and power-loss events. The plugin is open source, written in bash, and controls the start-up sequence, including the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS and SERVER_NAME with the appropriate values;&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
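For reference, the KERNEL_NAME and KERNEL_VER values used above can be derived from the downloaded package filename. This is a minimal sketch assuming the pve-kernel-KERNELVER_PKGVER_amd64.deb naming of the example file; check it against your actual package.&lt;br /&gt;

```shell
# Derive the 'openqrm kernel add' parameters from the pve kernel
# package filename (format: pve-kernel-KERNELVER_PKGVER_amd64.deb).
DEB="pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"
tmp="${DEB#pve-kernel-}"        # 5.11.22-3-pve_5.11.22-6_amd64.deb
KERNEL_VER="${tmp%%_*}"         # 5.11.22-3-pve
tmp="${DEB#*_}"                 # 5.11.22-6_amd64.deb
KERNEL_NAME="pve-${tmp%%_*}"    # pve-5.11.22-6
echo "-n $KERNEL_NAME -v $KERNEL_VER"
```

These match the -n pve-5.11.22-6 -v 5.11.22-3-pve values in the example command above.&lt;br /&gt;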
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and add the packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and add the packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### Edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, service management needs to be done outside of the chroot. To list enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin. We therefore remove them from startup; systemd must be pointed at the image's root directory as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsi dropbear nfs-ganesha-lock nvmefc-boot-connections nvmf-autoconnect zfs-zed --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
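A quick way to check the 55-60% guideline is to compare the image directory size against the tmpfs volume. This is a minimal sketch assuming the tutorial's /exports/proxmox_image path and 4GB volume (du will print an error if the path does not exist).&lt;br /&gt;

```shell
# Compare the uncompressed image size with 60% of a 4096 MB tmpfs volume.
TMPFS_MB=4096
LIMIT_MB=$((TMPFS_MB * 60 / 100))
IMAGE_MB=$(du -sm /exports/proxmox_image | cut -f1)
IMAGE_MB=${IMAGE_MB:-0}   # falls back to 0 if the path is missing
if [ "$IMAGE_MB" -gt "$LIMIT_MB" ]; then
  echo "image ${IMAGE_MB}MB exceeds ${LIMIT_MB}MB - exclude more directories"
else
  echo "image ${IMAGE_MB}MB is within ${LIMIT_MB}MB"
fi
```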
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a server is a configuration applied to a resource. When a system has booted via dhcp/pxe it enters an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for this next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose pve7 (the tmpfs-deployment set up previously) and leave the tick on edit image details after selection.&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;; the idle resource will then reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose their configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates openQRM's tmpfs capabilities, running Proxmox VE as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that configuration synchronisation. It is a vital plugin for tmpfs based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source; the enterprise package adds commercial support and numerous additional plugins. With over 60 plugins, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=Install_openQRM_on_Debian&amp;diff=934</id>
		<title>Install openQRM on Debian</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=Install_openQRM_on_Debian&amp;diff=934"/>
		<updated>2023-11-24T22:42:59Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding note to add extra kernel&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This How-To explains installing the openQRM Datacentre Management and Cloud Computing platform on Debian. It is the starting point for a set of openQRM How-Tos explaining different Use-cases with the focus on virtualisation, automation and cloud computing.&lt;br /&gt;
&lt;br /&gt;
'''Requirements'''&lt;br /&gt;
&lt;br /&gt;
* One physical Server. Alternatively, the installation can be also done within a Virtual Machine&lt;br /&gt;
* At least 1 GB of Memory&lt;br /&gt;
* at least 40 GB of Diskspace&lt;br /&gt;
* Optional VT for Intel CPUs or AMD-V for AMD CPUs (Virtualization Technology) enabled in the Systems BIOS so that the openQRM Server can run Virtual Machines later&lt;br /&gt;
&lt;br /&gt;
=== Install Debian  ===&lt;br /&gt;
&lt;br /&gt;
# Install a minimal Debian on a physical Server. During the installation select 'manual network' configuration and provide a static IP address. In this tutorial we will use 192.168.178.5/255.255.255.0 as the IP configuration for the openQRM Server system.&lt;br /&gt;
# Remember to use/set a Fully Qualified Domain Name (FQDN) as the system's hostname and domain. It does not need to resolve, but it is important that it is set.&lt;br /&gt;
# In the partitioning setup, select 'manual' and create one partition for the root-filesystem, one as swap space plus a dedicated partition to be used as storage space for the Virtual Machines later. In the configuration of the dedicated storage partition select 'do not use'.&lt;br /&gt;
# In the software selection dialog select just 'SSH-Server'&lt;br /&gt;
# After the installation finished please login to the system and update its packaging system as 'root':&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;''apt-get update &amp;amp;&amp;amp; apt-get upgrade'' &amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Install openQRM - the short version ===&lt;br /&gt;
1. download and un-compress openQRM Community or Enterprise to /usr/src&lt;br /&gt;
&lt;br /&gt;
2. un tar and install ;&lt;br /&gt;
&lt;br /&gt;
Community;&lt;br /&gt;
&lt;br /&gt;
''tar -zxf openQRM-5.3.50-Community-Edition.tgz ; cd openQRM-5.3.50-Community-Edition/ ; ./install-openqrm.sh''&lt;br /&gt;
&lt;br /&gt;
Enterprise;&lt;br /&gt;
&lt;br /&gt;
''tar -zxf openQRM-5.3.50-Enterprise-Edition.tgz ; cd openQRM-5.3.50-Enterprise-Edition/ ; ./install-openqrm.sh''&lt;br /&gt;
&lt;br /&gt;
This process can take a short while whilst it installs the supporting openQRM packages; allow at least 10 minutes even with fast internet and a decent KVM.&lt;br /&gt;
&lt;br /&gt;
3. suggestion: if used in conjunction with a Proxmox VE installation, add the PVE kernel to the KVM first,&lt;br /&gt;
&lt;br /&gt;
wget &amp;lt;nowiki&amp;gt;http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-7_amd64.deb&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. create the mysql openqrm user and password, then flush privileges;&lt;br /&gt;
&lt;br /&gt;
mysql -e &amp;quot;grant all on openqrm.* to 'openqrm'@'localhost' identified by 'openqrm'; flush privileges&amp;quot;&lt;br /&gt;
&lt;br /&gt;
5. reboot, so that the pve kernel (once installed) is used as the default Linux kernel.&lt;br /&gt;
&lt;br /&gt;
6. then configure openQRM via the web interface (per the steps above, the username and password are both openqrm); the last screen will take 5-10 minutes whilst openQRM rebuilds the current kernel's initrd into an openQRM compatible boot image&lt;br /&gt;
&lt;br /&gt;
=== Install openQRM - the longer version ===&lt;br /&gt;
Purchase and download openQRM&lt;br /&gt;
&lt;br /&gt;
openQRM is available from openQRM Enterprise at&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;http://www.openqrm-enterprise.com/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also use the instructions below to install openQRM from the source repository or by packages.&lt;br /&gt;
&lt;br /&gt;
The installation procedure for openQRM is straightforward.&lt;br /&gt;
&lt;br /&gt;
# Unpack the openqrm-enterprise.tar.gz file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;''tar -xvzf openqrm-enterprise.tar.gz''&amp;lt;/blockquote&amp;gt;2. Then run as 'root'&amp;lt;blockquote&amp;gt;''cd openqrm-enterprise''&lt;br /&gt;
&lt;br /&gt;
''./install-openqrm.sh''&amp;lt;/blockquote&amp;gt;Make sure to set a password for the mysql-server and nagios4 package.&lt;br /&gt;
[[File:Csm 02-openqrm-install e007a58550.png|none|thumb|390x390px|Setting a password]]&lt;br /&gt;
The installation also asks for the mail-configuration. If unsure please select &amp;quot;local only&amp;quot; and go on with the suggested system name.&lt;br /&gt;
[[File:Csm 04-openqrm-install b8db1a2840.png|none|thumb|390x390px|Mail-Configuration]]&lt;br /&gt;
The last step of the installation provides you with the URL, username and password to login to the openQRM Server&lt;br /&gt;
[[File:Csm 06-openqrm-install fa5cc24876.png|none|thumb|390x390px|openQRM login credentials]]&lt;br /&gt;
&lt;br /&gt;
=== Installation by packages ===&lt;br /&gt;
To install openQRM by distribution packages please request the package installation from openQRM Enterprise&lt;br /&gt;
&lt;br /&gt;
=== Configure and initialize openQRM ===&lt;br /&gt;
After a successful installation the openQRM Server web interface is available at&amp;lt;blockquote&amp;gt;''&amp;lt;nowiki&amp;gt;http://static-ip-configured-during-the-Debian-installation/openqrm&amp;lt;/nowiki&amp;gt;''&amp;lt;/blockquote&amp;gt;If you have set the suggested IP address for this howto the openQRM URL will be&amp;lt;blockquote&amp;gt;''&amp;lt;nowiki&amp;gt;http://192.168.178.5/openqrm&amp;lt;/nowiki&amp;gt;''&amp;lt;/blockquote&amp;gt;Open this URL in your Web browser. Login with the username 'openqrm' and the password 'openqrm'. Then select the network interface to use for the openQRM management network&lt;br /&gt;
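The web UI address can be composed from the static IP chosen during the Debian install; a trivial sketch using this howto's example address.&lt;br /&gt;

```shell
# Build the openQRM web UI URL from the configured management IP
# (192.168.178.5 is the example address used throughout this howto).
OPENQRM_IP="192.168.178.5"
OPENQRM_URL="http://${OPENQRM_IP}/openqrm"
echo "$OPENQRM_URL"
```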
[[File:Csm 07-openqrm-install 602e7d0524.png|none|thumb|390x390px|Network Interface selection screen]]Then select 'mysql' as the database type&lt;br /&gt;
[[File:Csm 08-openqrm-install 5a821c5713.png|none|thumb|390x390px|Database type selection]]&lt;br /&gt;
At the next step, provide the database credentials, and ensure they are valid before proceeding. Please note this stage can take up to 8 minutes and the screen will be blank; please be patient. You can always monitor activity with;&lt;br /&gt;
&lt;br /&gt;
tail -n 100 -f /var/log/syslog&lt;br /&gt;
[[File:Csm 09-openqrm-install 10b3d582f9.png|none|thumb|390x390px|Database configuration]]&lt;br /&gt;
For the openQRM Enterprise Edition, the following page provides a simple option to upload the licence keys&lt;br /&gt;
[[File:Csm 10-openqrm-install 3a06f002ca.png|none|thumb|390x390px|Upload licence keys]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the licence keys are provided, openQRM will rebuild the current kernel into an openQRM initramdisk; this may take 5-10 minutes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Congratulations!!'''&lt;br /&gt;
&lt;br /&gt;
openQRM is now installed and successfully initialized, ready to manage all aspects of your datacentre.&lt;br /&gt;
&lt;br /&gt;
[[File:Csm 11-openqrm-install 8cdf50bf05.png|frameless|390x390px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note Well:&lt;br /&gt;
&lt;br /&gt;
It is always wise to install a duplicate kernel (this process can take 5-10 minutes or more on slower computers);&lt;br /&gt;
&lt;br /&gt;
To add the kernel to openQRM, replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS and SERVER_NAME with the appropriate values&lt;br /&gt;
&lt;br /&gt;
# /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
# /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;br /&gt;
[[Category:Howto]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=Install_openQRM_on_Debian&amp;diff=933</id>
		<title>Install openQRM on Debian</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=Install_openQRM_on_Debian&amp;diff=933"/>
		<updated>2023-11-24T08:13:34Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: db pre-existing and time takes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This How-To explains installing the openQRM Datacentre Management and Cloud Computing platform on Debian. It is the starting point for a set of openQRM How-Tos explaining different Use-cases with the focus on virtualisation, automation and cloud computing.&lt;br /&gt;
&lt;br /&gt;
'''Requirements'''&lt;br /&gt;
&lt;br /&gt;
* One physical Server. Alternatively, the installation can be also done within a Virtual Machine&lt;br /&gt;
* At least 1 GB of Memory&lt;br /&gt;
* at least 40 GB of Diskspace&lt;br /&gt;
* Optional VT for Intel CPUs or AMD-V for AMD CPUs (Virtualization Technology) enabled in the Systems BIOS so that the openQRM Server can run Virtual Machines later&lt;br /&gt;
&lt;br /&gt;
=== Install Debian  ===&lt;br /&gt;
&lt;br /&gt;
# Install a minimal Debian on a physical Server. During the installation select 'manual network' configuration and provide a static IP address. In this tutorial we will use 192.168.178.5/255.255.255.0 as the IP configuration for the openQRM Server system.&lt;br /&gt;
# Remember to use/set a Fully Qualified Domain Name (FQDN) as the system's hostname and domain. It does not need to resolve, but it is important that it is set.&lt;br /&gt;
# In the partitioning setup, select 'manual' and create one partition for the root-filesystem, one as swap space plus a dedicated partition to be used as storage space for the Virtual Machines later. In the configuration of the dedicated storage partition select 'do not use'.&lt;br /&gt;
# In the software selection dialog select just 'SSH-Server'&lt;br /&gt;
# After the installation finished please login to the system and update its packaging system as 'root':&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;''apt-get update &amp;amp;&amp;amp; apt-get upgrade'' &amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Install openQRM - the short version ===&lt;br /&gt;
1. download and un-compress openQRM Community or Enterprise to /usr/src&lt;br /&gt;
&lt;br /&gt;
2. un tar and install ;&lt;br /&gt;
&lt;br /&gt;
Community;&lt;br /&gt;
&lt;br /&gt;
''tar -zxf openQRM-5.3.50-Community-Edition.tgz ; cd openQRM-5.3.50-Community-Edition/ ; ./install-openqrm.sh''&lt;br /&gt;
&lt;br /&gt;
Enterprise;&lt;br /&gt;
&lt;br /&gt;
''tar -zxf openQRM-5.3.50-Enterprise-Edition.tgz ; cd openQRM-5.3.50-Enterprise-Edition/ ; ./install-openqrm.sh''&lt;br /&gt;
&lt;br /&gt;
This process can take a short while whilst it installs the supporting openQRM packages; allow at least 10 minutes even with fast internet and a decent KVM.&lt;br /&gt;
&lt;br /&gt;
3. suggestion: if used in conjunction with a Proxmox VE installation, add the PVE kernel to the KVM first,&lt;br /&gt;
&lt;br /&gt;
wget &amp;lt;nowiki&amp;gt;http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-7_amd64.deb&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. create the mysql openqrm user and password, then flush privileges;&lt;br /&gt;
&lt;br /&gt;
mysql -e &amp;quot;grant all on openqrm.* to 'openqrm'@'localhost' identified by 'openqrm'; flush privileges&amp;quot;&lt;br /&gt;
&lt;br /&gt;
5. reboot, so that the pve kernel (once installed) is used as the default Linux kernel.&lt;br /&gt;
&lt;br /&gt;
6. then configure openQRM via the web interface (per the steps above, the username and password are both openqrm); the last screen will take 5-10 minutes whilst openQRM rebuilds the current kernel's initrd into an openQRM compatible boot image&lt;br /&gt;
&lt;br /&gt;
=== Install openQRM - the longer version ===&lt;br /&gt;
Purchase and download openQRM&lt;br /&gt;
&lt;br /&gt;
openQRM is available from openQRM Enterprise at&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;http://www.openqrm-enterprise.com/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also use the instructions below to install openQRM from the source repository or by packages.&lt;br /&gt;
&lt;br /&gt;
The installation procedure for openQRM is straightforward.&lt;br /&gt;
&lt;br /&gt;
# Unpack the openqrm-enterprise.tar.gz file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;''tar -xvzf openqrm-enterprise.tar.gz''&amp;lt;/blockquote&amp;gt;2. Then run as 'root'&amp;lt;blockquote&amp;gt;''cd openqrm-enterprise''&lt;br /&gt;
&lt;br /&gt;
''./install-openqrm.sh''&amp;lt;/blockquote&amp;gt;Make sure to set a password for the mysql-server and nagios4 package.&lt;br /&gt;
[[File:Csm 02-openqrm-install e007a58550.png|none|thumb|390x390px|Setting a password]]&lt;br /&gt;
The installation also asks for the mail-configuration. If unsure please select &amp;quot;local only&amp;quot; and go on with the suggested system name.&lt;br /&gt;
[[File:Csm 04-openqrm-install b8db1a2840.png|none|thumb|390x390px|Mail-Configuration]]&lt;br /&gt;
The last step of the installation provides you with the URL, username and password to login to the openQRM Server&lt;br /&gt;
[[File:Csm 06-openqrm-install fa5cc24876.png|none|thumb|390x390px|openQRM login credentials]]&lt;br /&gt;
&lt;br /&gt;
=== Installation by packages ===&lt;br /&gt;
To install openQRM by distribution packages please request the package installation from openQRM Enterprise&lt;br /&gt;
&lt;br /&gt;
=== Configure and initialize openQRM ===&lt;br /&gt;
After a successful installation the openQRM Server web interface is available at&amp;lt;blockquote&amp;gt;''&amp;lt;nowiki&amp;gt;http://static-ip-configured-during-the-Debian-installation/openqrm&amp;lt;/nowiki&amp;gt;''&amp;lt;/blockquote&amp;gt;If you have set the suggested IP address for this howto the openQRM URL will be&amp;lt;blockquote&amp;gt;''&amp;lt;nowiki&amp;gt;http://192.168.178.5/openqrm&amp;lt;/nowiki&amp;gt;''&amp;lt;/blockquote&amp;gt;Open this URL in your Web browser. Login with the username 'openqrm' and the password 'openqrm'. Then select the network interface to use for the openQRM management network&lt;br /&gt;
[[File:Csm 07-openqrm-install 602e7d0524.png|none|thumb|390x390px|Network Interface selection screen]]Then select 'mysql' as the database type&lt;br /&gt;
[[File:Csm 08-openqrm-install 5a821c5713.png|none|thumb|390x390px|Database type selection]]&lt;br /&gt;
At the next step, provide the database credentials, and ensure they are valid before proceeding. Please note this stage can take up to 8 minutes and the screen will be blank; please be patient. You can always monitor activity with;&lt;br /&gt;
&lt;br /&gt;
tail -n 100 -f /var/log/syslog&lt;br /&gt;
[[File:Csm 09-openqrm-install 10b3d582f9.png|none|thumb|390x390px|Database configuration]]&lt;br /&gt;
For the openQRM Enterprise Edition, the following page provides a simple option to upload the licence keys&lt;br /&gt;
[[File:Csm 10-openqrm-install 3a06f002ca.png|none|thumb|390x390px|Upload licence keys]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the licence keys are provided, openQRM will rebuild the current kernel into an openQRM initramdisk; this may take 5-10 minutes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Congratulations!!'''&lt;br /&gt;
&lt;br /&gt;
openQRM is now installed and successfully initialized, ready to manage all aspects of your datacentre.&lt;br /&gt;
&lt;br /&gt;
[[File:Csm 11-openqrm-install 8cdf50bf05.png|frameless|390x390px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;br /&gt;
[[Category:Howto]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=932</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=932"/>
		<updated>2023-11-20T04:42:20Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding path to openqrm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then acts as the local storage for the server. Being memory resident, the system is exceptionally fast; RAM is typically an order of magnitude faster than NVMe. If the node loses network connectivity it can still function, because it has already booted and runs just as if it had locally attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true. What are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
It is memory resident, so if power is lost the local configuration is lost too. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is also synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs) 6-8GB. 4GB for tmpfs and 2-4GB for OS and Services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration to be maintained and synchronised across reboots and power loss events. Open source and written in bash, it controls the start-up sequence, including the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS, SERVER_NAME with the appropriate values;&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## /usr/share/openqrm/bin/openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor&lt;br /&gt;
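The placeholders in the kernel add command map directly onto the downloaded package name. A minimal sketch of that mapping (a dry run only; the filename is the example package from step 1):

```shell
# Derive the KERNEL_VER value for "openqrm kernel add" from the
# PVE kernel package filename downloaded in step 1.
DEB="pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"
KERNEL_VER="${DEB%%_*}"                 # strip the "_5.11.22-6_amd64.deb" tail
KERNEL_VER="${KERNEL_VER#pve-kernel-}"  # strip the "pve-kernel-" prefix
echo "$KERNEL_VER"                      # prints the -v argument: 5.11.22-3-pve
```

KERNEL_NAME (pve-5.11.22-6 above) is a free-form label, while -v should match the installed kernel's version string exactly, since that is presumably how openQRM locates the kernel and initramfs files under /boot.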
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and add packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and add packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, management of services needs to be done outside of the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. As systemd cannot simply be invoked from inside the chroot, we point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
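The long run of disable commands above can be expressed as a single loop; a sketch (a dry run via echo, showing only a few of the unit names listed above):

```shell
# Build the disable command for each unit against the image root.
# Remove the echo to actually apply the changes.
IMAGE_ROOT=/exports/proxmox_image/
UNITS="pve-cluster.service corosync.service pve-guests.service lxcfs.service"
for u in $UNITS; do
    CMD="/bin/systemctl disable $u --root $IMAGE_ROOT"
    echo "$CMD"
done
```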
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, there are other directories that are not required and can be excluded. We suggest the uncompressed image size be 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
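The 55-60% guidance above can be checked numerically. A sketch of the threshold arithmetic for the 4GB tmpfs volume used in this guide (compare the result against the output of du -sk /exports/proxmox_image):

```shell
# 60% of a 4 GB tmpfs volume, expressed in KB to match "du -sk" output.
VOLUME_GB=4
LIMIT_KB=$((VOLUME_GB * 1024 * 1024 * 60 / 100))
echo "$LIMIT_KB KB"
```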
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource. Once a system has booted via dhcp/pxe it enters an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: select the pve7 tmpfs-deployment image as previously set up (leave the tick on edit image details after selection)&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; as above, then click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to &amp;quot;start&amp;quot; the server, click &amp;quot;start&amp;quot;, the idle resource will then reboot and boot the image as created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may log a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. This is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=931</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=931"/>
		<updated>2023-11-15T23:57:51Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding packages dropbear iputils-ping&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a PXE booted, tmpfs memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run those nodes. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then acts as the server's local storage. Being memory resident, the system is exceptionally fast; RAM is roughly an order of magnitude faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs as though it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true. What are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
It is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster then the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs) 6-8GB. 4GB for tmpfs and 2-4GB for OS and Services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration to be maintained and synchronised across reboots and power loss events. Open source and written in bash, it controls the start-up sequence, including the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS, SERVER_NAME with the appropriate values;&lt;br /&gt;
## openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for TMPFS Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl dropbear iputils-ping&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and add packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and add packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your ip address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, management of services needs to be done outside of the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. As systemd cannot simply be invoked from inside the chroot, we point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, there are other directories that are not required and can be excluded. We suggest the uncompressed image size be 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource. Once a system has booted via dhcp/pxe it enters an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: select the pve7 tmpfs-deployment image as previously set up (leave the tick on edit image details after selection)&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; as above, then click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to &amp;quot;start&amp;quot; the server, click &amp;quot;start&amp;quot;, the idle resource will then reboot and boot the image as created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may log a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. This is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both versions are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=930</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=930"/>
		<updated>2023-11-14T02:32:49Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding package libpve-network-perl&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a PXE booted, tmpfs memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run those nodes. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then acts as the server's local storage. Being memory resident, the system is exceptionally fast; RAM is roughly an order of magnitude faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs as though it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true. What are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
It is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster then the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or Hardware Node (booted via tmpfs) 6-8GB. 4GB for tmpfs and 2-4GB for OS and Services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the server's configuration to be maintained and synchronised across reboots and power loss events. Open source and written in bash, it controls the start-up sequence, including the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS, SERVER_NAME with the appropriate values;&lt;br /&gt;
## openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor &lt;br /&gt;
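The three steps above can be sketched as one script. This is an illustration only: it assumes the openqrm CLI is on the PATH exactly as shown in the article, the URL and kernel names are the ones given above (check for newer versions), and the DRY_RUN guard is our addition, not part of openQRM.

```shell
#!/bin/sh
# Sketch of step 1: fetch the PVE kernel .deb, install it locally, then
# register it with openQRM. Substitute newer versions and your own UI
# credentials. DRY_RUN=1 (the default here) only prints the commands.
set -eu

PKG_URL="http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb"
KERNEL_NAME="pve-5.11.22-6"
KERNEL_VER="5.11.22-3-pve"
UI_USER="openqrm"   # replace with your openQRM UI user
UI_PASS="openqrm"   # replace with your openQRM UI password
DRY_RUN="${DRY_RUN:-1}"

# run either prints the command (dry run) or executes it
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

run wget -O /tmp/pve-kernel.deb "$PKG_URL"
run dpkg -i /tmp/pve-kernel.deb
run openqrm kernel add -n "$KERNEL_NAME" -v "$KERNEL_VER" \
    -u "$UI_USER" -p "$UI_PASS" -l / -i initramfs -m csiostor
```

Set DRY_RUN=0 to actually execute the commands on the openQRM server.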
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for tmpfs Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl libpve-network-perl &lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install Ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, service management needs to be done from outside the chroot. To list enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the plugin manages cluster initialisation, these services must be started in an orderly fashion by the plugin, so we remove them from startup. systemd does not make this easy, so we point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
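The 55-60% guideline can be checked mechanically. This small helper is our illustration, not part of openQRM: feed it the uncompressed image size (from du -sm) and the tmpfs volume size, both in megabytes.

```shell
#!/bin/sh
# Check the uncompressed image against the tmpfs volume using the 55-60%
# guideline above. Both sizes are in megabytes.
image_fits() {
    image_mb="$1"   # e.g. from: du -sm /exports/proxmox_image | cut -f1
    tmpfs_mb="$2"   # tmpfs volume size, e.g. 4096 for the 4GB volume below
    # pass when the image uses at most 60% of the tmpfs volume
    [ $(( image_mb * 100 )) -le $(( tmpfs_mb * 60 )) ]
}

# Usage on the openQRM server:
#   image_fits "$(du -sm /exports/proxmox_image | cut -f1)" 4096 || echo "image too large"
```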
&lt;br /&gt;
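The long run of systemctl disable invocations in step 2 can also be driven from a single loop. The service list below is an abridged selection from the article (extend it as needed), and the error handling, which skips services that are not installed, is our addition.

```shell
#!/bin/sh
# Disable boot-time services in the image in one loop instead of many
# separate invocations. Abridged service list from step 2; extend as needed.
IMAGE_ROOT="${IMAGE_ROOT:-/exports/proxmox_image}"

SERVICES="pve-cluster.service corosync.service pve-guests.service
pveproxy.service pvedaemon.service pvestatd.service ssh.service
lxc.service lxc-net.service lxcfs.service lxc-monitord.service
rsyslog.service smartd.service zfs.target zfs-mount.service"

for svc in $SERVICES; do
    /bin/systemctl disable "$svc" --root "$IMAGE_ROOT" 2>/dev/null ||
        echo "skipping $svc (not present or already disabled)"
done
```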
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; in the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose pve7 (tmpfs-deployment) as previously set up (leave the &amp;quot;edit image details after selection&amp;quot; option ticked).&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
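The restart above can be made conditional with a quick status check after boot. This helper is a generic sketch of ours, not part of openQRM; the probe command is passed in as an argument so the logic can be tried without systemd, and the service names are the ones from step 3.

```shell
#!/bin/sh
# Check that the services from step 3 are active. ensure_active takes the
# probe command as its first argument so the logic can be exercised anywhere.
# On a real node: ensure_active "systemctl is-active --quiet" ssh pve-cluster
ensure_active() {
    probe="$1"; shift
    failed=0
    for svc in "$@"; do
        if $probe "$svc"; then
            echo "$svc: active"
        else
            echo "$svc: NOT active (try: systemctl restart $svc)"
            failed=1
        fi
    done
    return "$failed"
}
```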
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package adding commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=929</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=929"/>
		<updated>2023-11-14T02:15:20Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding extra packages&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which is essentially the local storage for the server. Being memory resident, system RAM is exceptionally fast, far faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true; what are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it's memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster retains the PVE configuration, and if the ATU plugin is used the configuration is also synchronised to and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB; 4GB for the tmpfs volume and 2-4GB for the OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It keeps the server's configuration synchronised across reboots and power-loss events. The plugin is open source, written in bash, and controls the start-up sequence, including the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values);&lt;br /&gt;
## openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for tmpfs Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd ifupdown2 dnsutils ethtool curl unzip screen iftop lshw smartmontools nvme-cli lsscsi sysstat htop mc rpl&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install Ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit file; /etc/systemd/system/getty@tty1.service.d/noclear.conf add contents;&lt;br /&gt;
[Service]&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only: since Proxmox/Debian uses systemd, service management needs to be done from outside the chroot. To list enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the plugin manages cluster initialisation, these services must be started in an orderly fashion by the plugin, so we remove them from startup. systemd does not make this easy, so we point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; in the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## Then select the image for the server: choose pve7 (tmpfs-deployment) as previously set up (leave the &amp;quot;edit image details after selection&amp;quot; option ticked).&lt;br /&gt;
## Then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above, then click submit&lt;br /&gt;
## Then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;, and the idle resource will reboot into the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities that allow Proxmox VE to run as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package adding commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=928</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=928"/>
		<updated>2023-11-14T02:09:28Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: adding xinetd&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run that node. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which is essentially the local storage for the server. Being memory resident, system RAM is exceptionally fast, far faster than NVMe. If the node loses network connectivity it can still function, since it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true; what are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it's memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster retains the PVE configuration, and if the ATU plugin is used the configuration is also synchronised to and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB; 4GB for the tmpfs volume and 2-4GB for the OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It keeps the server's configuration synchronised across reboots and power-loss events. The plugin is open source, written in bash, and controls the start-up sequence, including the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# Then add the kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values);&lt;br /&gt;
## openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image suitable for tmpfs Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo xinetd&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install Ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, service management needs to be done outside the chroot. To list the enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. As systemd cannot simply be run from within the chroot, we point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pveproxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
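The long run of systemctl disable calls above can be collapsed into a loop; a sketch, with the unit list abridged from the lists above (extend it with the full set), and units missing from your image simply skipped:&lt;br /&gt;

```shell
# Disable a batch of units inside the image root without entering the chroot.
# IMAGE_ROOT and the unit names come from the steps above; the list here is
# abridged - extend it with the full set of units you need disabled.
IMAGE_ROOT=/exports/proxmox_image
UNITS="pve-cluster.service corosync.service pve-guests.service \
pveproxy.service pvedaemon.service pvestatd.service ssh.service rsyslog.service"
total=0
for unit in $UNITS; do
    total=$((total + 1))
    /bin/systemctl disable "$unit" --root "$IMAGE_ROOT/" 2>/dev/null \
        || echo "skipped: $unit (unit not present in image)"
done
echo "processed $total units"
```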
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size at 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
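The 55-60% guideline can be checked mechanically; a sketch, assuming the image lives at /exports/proxmox_image and the 4GB tmpfs volume configured in step 3:&lt;br /&gt;

```shell
# Compare the uncompressed image size against the tmpfs volume allocation.
IMAGE_DIR=/exports/proxmox_image
TMPFS_MB=4096                      # 4GB tmpfs volume as created in step 3
LIMIT_MB=$((TMPFS_MB * 60 / 100))  # suggested ceiling: 60% of the volume
USED_MB=$(du -sm "$IMAGE_DIR" 2>/dev/null | cut -f1)
USED_MB=${USED_MB:-0}
if [ "$USED_MB" -le "$LIMIT_MB" ]; then
    echo "OK: image ${USED_MB}MB within ${LIMIT_MB}MB ceiling"
else
    echo "WARNING: image ${USED_MB}MB exceeds ${LIMIT_MB}MB - exclude more directories"
fi
```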
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM VM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource. Once a system has booted via dhcp/pxe it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for this next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server: select the pve7 tmpfs-deployment as previously set up (leave the tick on edit image details after selection)&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; as above, then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6, then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;; the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
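A quick post-boot check along these lines can confirm the key services came up; a sketch, with service names taken from the steps above:&lt;br /&gt;

```shell
# Verify the essential services on a freshly booted tmpfs node.
checked=0
for svc in ssh pve-cluster pveproxy pvedaemon; do
    checked=$((checked + 1))
    if systemctl is-active --quiet "$svc" 2>/dev/null; then
        echo "$svc: active"
    else
        echo "$svc: not active - try: systemctl restart $svc"
    fi
done
```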
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may log a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities needed to run Proxmox VE as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. It is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package adding commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=Debian_12_how_to_bootstrap_an_image&amp;diff=927</id>
		<title>Debian 12 how to bootstrap an image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=Debian_12_how_to_bootstrap_an_image&amp;diff=927"/>
		<updated>2023-11-13T22:07:45Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: added create image tarball&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Debian]] [[Category:Tutorial]]&lt;br /&gt;
&lt;br /&gt;
'''Starting from a fresh Debian 12 installation do the following;'''&lt;br /&gt;
&lt;br /&gt;
apt-get install debootstrap&lt;br /&gt;
&lt;br /&gt;
export MY_CHROOT=/exports/debian12&lt;br /&gt;
&lt;br /&gt;
mkdir -p $MY_CHROOT/dev/pts $MY_CHROOT/proc $MY_CHROOT/var/run&lt;br /&gt;
&lt;br /&gt;
cd $MY_CHROOT/..&lt;br /&gt;
&lt;br /&gt;
mount --bind /dev/ $MY_CHROOT/dev/&lt;br /&gt;
&lt;br /&gt;
mount --bind /dev/pts $MY_CHROOT/dev/pts&lt;br /&gt;
&lt;br /&gt;
mount --bind /proc $MY_CHROOT/proc&lt;br /&gt;
&lt;br /&gt;
debootstrap --arch amd64 bookworm $MY_CHROOT/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cp /etc/passwd $MY_CHROOT/etc/&lt;br /&gt;
&lt;br /&gt;
cp /etc/shadow $MY_CHROOT/etc/&lt;br /&gt;
&lt;br /&gt;
cp /etc/group $MY_CHROOT/etc/&lt;br /&gt;
&lt;br /&gt;
cp /etc/apt/sources.list $MY_CHROOT/etc/apt/&lt;br /&gt;
&lt;br /&gt;
cp /usr/share/keyrings/*gpg $MY_CHROOT/etc/apt/trusted.gpg.d/&lt;br /&gt;
&lt;br /&gt;
chroot $MY_CHROOT&lt;br /&gt;
&lt;br /&gt;
apt-get update&lt;br /&gt;
&lt;br /&gt;
apt-get install wget net-tools screen locales tzdata collectd telnet whois traceroute nfs-kernel-server jq bash dialog iptables&lt;br /&gt;
&lt;br /&gt;
dpkg-reconfigure locales&lt;br /&gt;
&lt;br /&gt;
dpkg-reconfigure tzdata&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
&lt;br /&gt;
umount $MY_CHROOT/dev/pts&lt;br /&gt;
&lt;br /&gt;
umount $MY_CHROOT/dev&lt;br /&gt;
&lt;br /&gt;
umount $MY_CHROOT/proc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Create a tarball for this image;'''&lt;br /&gt;
&lt;br /&gt;
export TEMPLATE_NAME=&amp;quot;debian12&amp;quot;&lt;br /&gt;
&lt;br /&gt;
cd $MY_CHROOT&lt;br /&gt;
&lt;br /&gt;
tar --exclude=etc/hostname --exclude=var/openqrm/openqrm.conf --exclude=root/.*history --exclude=root/.joe_state --exclude=root/.ssh/* --exclude=etc/ssh/ssh_host_* --exclude=etc/dropbear/*key --exclude=etc/dropbear-initramfs/*key --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/share/pve-edk2-firmware --exclude=usr/share/man --numeric-owner -czf ../$TEMPLATE_NAME.new.tgz .&lt;br /&gt;
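To sanity-check the tarball just created (a sketch; the name follows the TEMPLATE_NAME export above):&lt;br /&gt;

```shell
# Peek inside the freshly created template tarball and report its size.
TEMPLATE_NAME="debian12"
TARBALL="../$TEMPLATE_NAME.new.tgz"
if [ -f "$TARBALL" ]; then
    tar -tzf "$TARBALL" | head -5   # first few entries as a spot check
    du -h "$TARBALL"                # compressed size on disk
else
    echo "tarball not found: $TARBALL"
fi
```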
&lt;br /&gt;
cd ../&lt;br /&gt;
&lt;br /&gt;
ls -la&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=926</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=926"/>
		<updated>2023-11-13T22:03:11Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: added ceph install pacakges and additional supporting packages&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run those nodes. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then serves as the local storage for the server. Being memory resident, system RAM is exceptionally fast, roughly an order of magnitude faster than NVMe. If the node lost network connectivity it would still be able to function, as it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true; what are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it's memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM; 4GB for tmpfs and 2-4GB for OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration of the server to be maintained across reboots and power loss events. The ATU plugin is open source, written in bash, and allows the start-up sequence to be controlled, including configuration and service start sequences that are especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS, SERVER_NAME with the appropriate variables)&lt;br /&gt;
## openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image Suitable for TMPFS Boot:'''&lt;br /&gt;
# Create the image - to create a Proxmox VE image (named &amp;quot;proxmox_image&amp;quot;) that can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash rsyslog portmap open-iscsi rsync sudo&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
## To install ceph support, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install ceph ceph-common ceph-fuse ceph-mds ceph-volume gdisk nvme-cli&lt;br /&gt;
## To add FRRouting, add the relevant repository and install the packages;&lt;br /&gt;
### apt-get install frr frr-pythontools&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, service management needs to be done outside the chroot. To list the enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. As systemd cannot simply be run from within the chroot, we point systemctl at the image root directory as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pveproxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size at 55-60% of the allocated tmpfs volume size (4GB as below).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate dhcpd plugin then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done so)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM VM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a Server is a configuration applied to a resource. Once a system has booted via dhcp/pxe it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for this next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server: select the pve7 tmpfs-deployment as previously set up (leave the tick on edit image details after selection)&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; as above, then click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6, then click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server: click &amp;quot;start&amp;quot;; the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may log a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities needed to run Proxmox VE as a memory resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating system service start/stop alongside that synchronisation. It is a vital plugin for tmpfs based operating systems. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package adding commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=925</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=925"/>
		<updated>2023-11-13T06:19:27Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: comment out make-rprivate&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run those nodes. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which then serves as the local storage for the server. Being memory resident, system RAM is exceptionally fast, roughly an order of magnitude faster than NVMe. If the node lost network connectivity it would still be able to function, as it has already booted and runs just as if it had local attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true; what are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it's memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1GB &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8GB RAM; 4GB for tmpfs and 2-4GB for OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin ?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It allows the configuration of the server to be maintained across reboots and power loss events. The ATU plugin is open source, written in bash, and allows the start-up sequence to be controlled, including configuration and service start sequences that are especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the Kernel to openQRM. Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS, SERVER_NAME with the appropriate variables)&lt;br /&gt;
## openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image Suitable for TMPFS Boot:'''&lt;br /&gt;
# Create the image - to create a Proxmox VE image (named &amp;quot;proxmox_image&amp;quot;) that can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## #mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the following contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, service management needs to be done from outside the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. Because systemd cannot be run inside the chroot, we point systemctl at the image's root directory as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
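As a sketch under assumptions, the long run of systemctl disable commands above can be driven from a single list. The SERVICES list here is abbreviated for illustration; extend it with the full set from the steps above. With DRY_RUN=1 the script only prints the commands it would run.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: drive the repeated `systemctl disable ... --root` calls from
# one list. SERVICES is abbreviated; extend it with the full set above.
# DRY_RUN=1 only prints the commands so they can be reviewed first.
IMAGE_ROOT="/exports/proxmox_image/"
DRY_RUN=1
SERVICES="pve-cluster.service corosync.service pve-guests.service \
pveproxy.service pvestatd.service pvedaemon.service ssh.service"

for svc in $SERVICES; do
    if [ "$DRY_RUN" = "1" ]; then
        echo "/bin/systemctl disable $svc --root $IMAGE_ROOT"
    else
        # Some units may not exist in every image; ignore failures.
        /bin/systemctl disable "$svc" --root "$IMAGE_ROOT" 2>/dev/null || true
    fi
done
```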
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the tmpfs volume size allocated (4 GB as below).&lt;br /&gt;
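A sketch, assuming the paths above: the 55-60% sizing guideline can be checked with du before tarring. TMPFS_MB matches the 4 GB tmpfs volume created in step 3.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: check the 55-60% sizing guideline before tarring the image.
# IMAGE_DIR is the build directory from the steps above; TMPFS_MB
# matches the 4 GB tmpfs volume created in step 3.
IMAGE_DIR="/exports/proxmox_image"
TMPFS_MB=4096
LIMIT_MB=$((TMPFS_MB * 60 / 100))  # 60% upper bound

if [ -d "$IMAGE_DIR" ]; then
    used_mb=$(du -sm "$IMAGE_DIR" | cut -f1)  # uncompressed size in MB
    if [ "$used_mb" -gt "$LIMIT_MB" ]; then
        echo "WARNING: image ${used_mb}MB exceeds ${LIMIT_MB}MB (60% of tmpfs)"
    else
        echo "OK: image ${used_mb}MB fits within ${LIMIT_MB}MB"
    fi
else
    echo "Image directory $IMAGE_DIR not found"
fi
```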
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate the dhcpd plugin, then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server; choose pve7 (the tmpfs-deployment set up previously) and leave the tick on edit image details after selection&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above and click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server; click &amp;quot;start&amp;quot; and the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster;&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities of openQRM, supporting Proxmox VE as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service-management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
	<entry>
		<id>https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=924</id>
		<title>How to build Proxmox tmpfs image</title>
		<link rel="alternate" type="text/html" href="https://wiki.openqrm-enterprise.com/index.php?title=How_to_build_Proxmox_tmpfs_image&amp;diff=924"/>
		<updated>2023-11-13T06:18:20Z</updated>

		<summary type="html">&lt;p&gt;Stvsyf: change order of mounting dev/pts&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Follow the steps below to convert Proxmox VE to a pxe booted tmpfs memory resident operating system. &lt;br /&gt;
&lt;br /&gt;
Once you have a running openQRM Server you can follow these steps.&lt;br /&gt;
&lt;br /&gt;
This process is supported in both the community and enterprise versions of openQRM.&lt;br /&gt;
&lt;br /&gt;
You will need the following plugins enabled; dhcpd, tftp, nfs-storage, tmpfs-storage, atu (optional, available in the enterprise package)&lt;br /&gt;
&lt;br /&gt;
Pre-built Proxmox VE templates are available for download in the customer portal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Why is this solution so exciting ?'''&lt;br /&gt;
&lt;br /&gt;
When data centre operators deploy compute nodes, they no longer need network or attached storage to run those nodes. This solution allows a compute node to PXE network boot an operating system into a RAM disk, which essentially becomes the local storage for the server. System RAM is exceptionally fast, an order of magnitude faster than NVMe. So if the node lost network connectivity it would still function, as it would already be booted and running just as if it had locally attached storage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hold on, this is too good to be true; what are the downsides?'''&lt;br /&gt;
&lt;br /&gt;
Well, it is memory resident, so if power is lost the local configuration is lost. However, if the node is part of a cluster, the cluster holds the PVE configuration, and if the ATU plugin is used the configuration is synchronised and retained on the openQRM server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Requirements:'''&lt;br /&gt;
* openQRM Community or Enterprise (a KVM is the suggested option)&lt;br /&gt;
* optional: openQRM ATU Plugin for advanced server and cluster configuration and boot management&lt;br /&gt;
* CPU 64bit Intel EMT64 or AMD64&lt;br /&gt;
* PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support&lt;br /&gt;
* Debian 11 Bullseye&lt;br /&gt;
'''Suggested minimum specifications:'''&lt;br /&gt;
* openQRM Server: 1 GB RAM &amp;amp; 1 CPU&lt;br /&gt;
* Virtual or hardware node (booted via tmpfs): 6-8 GB RAM; 4 GB for tmpfs and 2-4 GB for the OS and services.&lt;br /&gt;
* The clustering requires co-ordinated initialisation and configuration backup. The ATU Plugin orchestrates these steps for cluster management and configuration backup.&lt;br /&gt;
'''What is the ATU plugin?'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is available in openQRM Enterprise. It keeps the server's configuration synchronised so that it is maintained across reboots and power-loss events. The plugin is open source, written in Bash, and allows the start-up sequence to be controlled, along with the configuration and service start ordering that is especially important for Proxmox VE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;Let's Start:&amp;lt;/big&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''1. Adding a Proxmox Kernel to openQRM:'''&lt;br /&gt;
# Download PVE Kernel (check to see if there is a newer kernel) - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb&lt;br /&gt;
# Install Kernel locally&lt;br /&gt;
# then add the kernel to openQRM (replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER and OPENQRM_UI_PASS with the appropriate values);&lt;br /&gt;
## openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs -m csiostor&lt;br /&gt;
## openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u openqrm -p openqrm -l / -i initramfs -m csiostor &lt;br /&gt;
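As a sketch only, the kernel registration above can be wrapped in a small helper that substitutes the variables before the command is run. The values below are the example values from this guide, and the helper only prints the resulting command (a dry run) so it can be reviewed before execution on the openQRM server.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: builds the `openqrm kernel add` command line from variables.
# Values mirror the example above; this only prints the command (dry
# run) so it can be checked before running it on the openQRM server.
KERNEL_NAME="pve-5.11.22-6"
KERNEL_VER="5.11.22-3-pve"
OPENQRM_UI_USER="openqrm"
OPENQRM_UI_PASS="openqrm"

build_kernel_add_cmd() {
    printf 'openqrm kernel add -n %s -v %s -u %s -p %s -l / -i initramfs -m csiostor\n' \
        "$KERNEL_NAME" "$KERNEL_VER" "$OPENQRM_UI_USER" "$OPENQRM_UI_PASS"
}

build_kernel_add_cmd
```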
&lt;br /&gt;
&lt;br /&gt;
'''2. Creating an Image Suitable for tmpfs Boot:'''&lt;br /&gt;
# Create Image - To create an image for Proxmox VE (image will be named &amp;quot;proxmox_image&amp;quot;) which can be used as a tmpfs image, follow these steps;&lt;br /&gt;
## apt-get install debootstrap&lt;br /&gt;
## mkdir -p /exports/proxmox_image/dev/pts /exports/proxmox_image/proc /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## debootstrap --arch amd64 bullseye /exports/proxmox_image/ &amp;lt;nowiki&amp;gt;https://deb.debian.org/debian/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
## mount --bind /dev/ /exports/proxmox_image/dev/&lt;br /&gt;
## mount --bind /dev/pts /exports/proxmox_image/dev/pts&lt;br /&gt;
## mount --bind /proc /exports/proxmox_image/proc&lt;br /&gt;
## mount --make-rprivate /exports/proxmox_image/&lt;br /&gt;
## mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus&lt;br /&gt;
## chroot /exports/proxmox_image&lt;br /&gt;
## apt-get install wget net-tools screen locales collectd telnet whois traceroute nfs-kernel-server jq bash&lt;br /&gt;
## dpkg-reconfigure locales&lt;br /&gt;
## dpkg-reconfigure tzdata&lt;br /&gt;
## Follow steps (Start at &amp;quot;Install Proxmox VE&amp;quot;) @ https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye&lt;br /&gt;
### We do not need to install grub or any other boot loaders&lt;br /&gt;
##'''set root password; passwd'''&lt;br /&gt;
## (optional) implement noclear for getty/inittab;&lt;br /&gt;
### mkdir -p /etc/systemd/system/getty@tty1.service.d/&lt;br /&gt;
### edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the following contents;&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
TTYVTDisallocate=no&lt;br /&gt;
#'''Remember: /etc/hosts needs a valid hostname with your IP address'''&lt;br /&gt;
## This is managed with the ATU plugin&lt;br /&gt;
# exit chroot, type exit&lt;br /&gt;
# umount binds;&lt;br /&gt;
## umount /exports/proxmox_image/dev/pts&lt;br /&gt;
## umount /exports/proxmox_image/dev&lt;br /&gt;
## umount /exports/proxmox_image/proc&lt;br /&gt;
## umount /exports/proxmox_image/var/run/dbus&lt;br /&gt;
# (optional) If using the ATU Plugin follow these steps;&lt;br /&gt;
## (if using the ATU plugin) For reference only; since Proxmox/Debian uses systemd, service management needs to be done from outside the chroot. To find enabled services;&lt;br /&gt;
### systemctl list-unit-files --root /exports/proxmox_image/  | grep -v disabled | grep enabled&lt;br /&gt;
## (if using the ATU plugin) These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so we remove them from startup. Because systemd cannot be run inside the chroot, we point systemctl at the image's root directory as follows;&lt;br /&gt;
### /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable  lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable pveproxy.service pvestatd.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service zfs-share.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable netdiag.service rsync.service console-setup.service --root /exports/proxmox_image/&lt;br /&gt;
### If you have ceph installed disable;&lt;br /&gt;
#### /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/&lt;br /&gt;
### If you have Ganesha installed for nfs;&lt;br /&gt;
#### /bin/systemctl disable nfs-ganesha.service nfs-ganesha-lock.service nfs-common.service --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable puppet  --root /exports/proxmox_image/&lt;br /&gt;
### /bin/systemctl disable zfs.target zfs-mount.service nfs-kernel-server --root /exports/proxmox_image/&lt;br /&gt;
## (if using the ATU plugin) disable services (some services may not exist): &lt;br /&gt;
### /bin/systemctl disable pvedaemon pve-proxy pve-manager pve-cluster cman corosync ceph pvestatd qemu-server rrdcached spiceproxy --root /exports/proxmox_image/&lt;br /&gt;
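As a sketch under assumptions, the long run of systemctl disable commands above can be driven from a single list. The SERVICES list here is abbreviated for illustration; extend it with the full set from the steps above. With DRY_RUN=1 the script only prints the commands it would run.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: drive the repeated `systemctl disable ... --root` calls from
# one list. SERVICES is abbreviated; extend it with the full set above.
# DRY_RUN=1 only prints the commands so they can be reviewed first.
IMAGE_ROOT="/exports/proxmox_image/"
DRY_RUN=1
SERVICES="pve-cluster.service corosync.service pve-guests.service \
pveproxy.service pvestatd.service pvedaemon.service ssh.service"

for svc in $SERVICES; do
    if [ "$DRY_RUN" = "1" ]; then
        echo "/bin/systemctl disable $svc --root $IMAGE_ROOT"
    else
        # Some units may not exist in every image; ignore failures.
        /bin/systemctl disable "$svc" --root "$IMAGE_ROOT" 2>/dev/null || true
    fi
done
```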
# Tar the Image;&lt;br /&gt;
## mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/&lt;br /&gt;
## cd /exports/proxmox_image&lt;br /&gt;
## tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=var/lib/apt/lists --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .&lt;br /&gt;
# When tarring the image above, other directories that are not required can also be excluded. We suggest keeping the uncompressed image size to 55-60% of the tmpfs volume size allocated (4 GB as below).&lt;br /&gt;
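A sketch, assuming the paths above: the 55-60% sizing guideline can be checked with du before tarring. TMPFS_MB matches the 4 GB tmpfs volume created in step 3.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: check the 55-60% sizing guideline before tarring the image.
# IMAGE_DIR is the build directory from the steps above; TMPFS_MB
# matches the 4 GB tmpfs volume created in step 3.
IMAGE_DIR="/exports/proxmox_image"
TMPFS_MB=4096
LIMIT_MB=$((TMPFS_MB * 60 / 100))  # 60% upper bound

if [ -d "$IMAGE_DIR" ]; then
    used_mb=$(du -sm "$IMAGE_DIR" | cut -f1)  # uncompressed size in MB
    if [ "$used_mb" -gt "$LIMIT_MB" ]; then
        echo "WARNING: image ${used_mb}MB exceeds ${LIMIT_MB}MB (60% of tmpfs)"
    else
        echo "OK: image ${used_mb}MB fits within ${LIMIT_MB}MB"
    fi
else
    echo "Image directory $IMAGE_DIR not found"
fi
```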
&lt;br /&gt;
&lt;br /&gt;
'''3. Configuring openQRM to support the above template:'''&lt;br /&gt;
# Activate the dhcpd plugin, then the tftp plugin&lt;br /&gt;
# Activate NFS Storage (if not already done)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage&lt;br /&gt;
## Add NFS Storage;&lt;br /&gt;
## name &amp;quot;openqrm-nfs&amp;quot;&lt;br /&gt;
## Deployment Type: &amp;quot;nfs-deployment&amp;quot;&lt;br /&gt;
# Add NFS Volume (this triggers tmpfs storage)&lt;br /&gt;
## Under Plugins -&amp;gt; Storage -&amp;gt; NFS-Storage -&amp;gt; Volume Admin -&amp;gt; Edit -&amp;gt; proxmox_image &amp;quot;ADD IMAGE&amp;quot;&lt;br /&gt;
# &amp;lt;s&amp;gt;restart openQRM server/vm in case of duplicate services started from chroot image initialisation&amp;lt;/s&amp;gt;&lt;br /&gt;
# Now create a TmpFs-Storage: Plugins -&amp;gt; Storage -&amp;gt; Tmpfs-storage -&amp;gt; Volume Admin -&amp;gt; New Storage&lt;br /&gt;
## Name: openqrm-tmpfs&lt;br /&gt;
## Deployment Type: tmpfs-storage&lt;br /&gt;
# Now Create an Image: Components -&amp;gt; Image  -&amp;gt; Add new Image -&amp;gt; Tmpfs-root deployment -&amp;gt; click edit on the &amp;quot;openqrm-tmpfs&amp;quot; -&amp;gt; Click &amp;quot;ADD NEW VOLUME&amp;quot;&lt;br /&gt;
## Name: pve7&lt;br /&gt;
## Size: 4 GB&lt;br /&gt;
## Description: proxmox ve 7&lt;br /&gt;
# Now network boot a new node, either a KVM or a physical machine; you will need to link this resource to a server. A resource is a blank system/server, and a server is a configuration applied to a resource. Once a system has booted via DHCP/PXE it will enter an &amp;quot;idle&amp;quot; state and will be selectable as &amp;quot;idle&amp;quot; for the next step.&lt;br /&gt;
## Click &amp;quot;ADD A NEW SERVER&amp;quot;&lt;br /&gt;
## Select the resource&lt;br /&gt;
## then select the image for the server; choose pve7 (the tmpfs-deployment set up previously) and leave the tick on edit image details after selection&lt;br /&gt;
## then click &amp;quot;Install from NAS/NFS&amp;quot;, select the &amp;quot;proxmox_image&amp;quot; created above and click submit&lt;br /&gt;
## then select the kernel pve-5.11.22-6 and click submit&lt;br /&gt;
## Done&lt;br /&gt;
# You will then need to start the server; click &amp;quot;start&amp;quot; and the idle resource will reboot and boot the image created above&lt;br /&gt;
# Once booted you may need to restart sshd and pve-cluster;&lt;br /&gt;
## systemctl restart ssh pve-cluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes/Customisations:'''&lt;br /&gt;
# Postfix may emit a warning on boot; edit /etc/mailname&lt;br /&gt;
#'''&amp;lt;u&amp;gt;Nodes booted without the ATU plugin will lose configuration upon reboot!&amp;lt;/u&amp;gt;'''&lt;br /&gt;
# When changing kernel versions, a stop and start of the server is required&lt;br /&gt;
&lt;br /&gt;
This technology preview demonstrates the tmpfs capabilities of openQRM, supporting Proxmox VE as a memory-resident operating system.&lt;br /&gt;
&lt;br /&gt;
'''About the ATU Plugin:'''&lt;br /&gt;
&lt;br /&gt;
The ATU plugin is a server service-management and configuration tool. It supports generic systems as well as Proxmox VE. It is responsible for boot management and for synchronising server and cluster configuration with the openQRM server, orchestrating service start/stop alongside that synchronisation. It is a vital plugin for tmpfs-based operating systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''About openQRM:''' &lt;br /&gt;
&lt;br /&gt;
openQRM is available in both community and enterprise versions. Both are open source, with the enterprise package offering commercial support and numerous additional plugins. With over 60 plugins available, openQRM manages storage, network, monitoring, cloud, management and virtualisation. It is the toolkit of choice for data centres.&lt;br /&gt;
[[Category:Howto]]&lt;br /&gt;
[[Category:Tutorial]]&lt;br /&gt;
[[Category:Debian]]&lt;/div&gt;</summary>
		<author><name>Stvsyf</name></author>
	</entry>
</feed>