How to build Proxmox tmpfs image

Once you have a successfully installed and running openQRM Server, you can follow the steps below to build a Proxmox VE solution.

Alternatively, a pre-built Proxmox template is available for download.

Requirements:

  • openQRM Community or Enterprise (can be run in a KVM/VM; suggested option)
  • optional: openQRM ATU Plugin for advanced server and cluster provisioning
  • 64-bit CPU: Intel EM64T or AMD64
  • PCI(e) passthrough requires VT-d/AMD-Vi CPU flag support
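
To quickly check a candidate node's CPU from Linux, something like the following works (lm = 64-bit long mode, vmx/svm = Intel/AMD hardware virtualisation; IOMMU support for passthrough is best confirmed in the BIOS and in dmesg):

  grep -c ' lm ' /proc/cpuinfo
  grep -cE 'vmx|svm' /proc/cpuinfo
  dmesg | grep -i -e DMAR -e IOMMU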

Suggested minimum RAM:

  • openQRM Server 4GB
  • Virtual or Hardware Node (booted via tmpfs) 8GB: 4GB for tmpfs and 4GB for OS and services.

These instructions will help you build a Proxmox VE solution as a tmpfs deployment. The clustering requires special initialisation, which is managed by the ATU Plugin: it orchestrates synchronized cluster initialisation on start and backs up the configuration on shutdown.

  1. Download PVE Kernel - http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb
  2. Install Kernel (see the sketch after step 3)
  3. Add Kernel to openQRM
    1. (Replace KERNEL_NAME, KERNEL_VER, OPENQRM_UI_USER, OPENQRM_UI_PASS, SERVER_NAME with the appropriate values) openqrm kernel add -n KERNEL_NAME -v KERNEL_VER -u OPENQRM_UI_USER -p OPENQRM_UI_PASS -l / -i initramfs
    2. For example: openqrm kernel add -n pve-5.11.22-6 -v 5.11.22-3-pve -u OPENQRM_USER -p OPENQRM_PASS -l / -i initramfs
      1. If you are using a self-signed cert you may need to invoke the HTTPS callback manually: https://SERVER_NAME/openqrm/base/server/kernel/kernel-action.php?kernel_command=new_kernel&kernel_name=KERNEL_NAME&kernel_version=KERNEL_VER
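
For steps 1 and 2, a minimal sketch (the filename matches the package linked in step 1; adjust it for newer kernel releases):

  wget http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb
  dpkg -i pve-kernel-5.11.22-3-pve_5.11.22-6_amd64.deb
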
  4. Create Image - To create an image for Proxmox which can be used as a tmpfs image, follow these steps;
    1. apt-get install debootstrap
    2. Create the directory: mkdir -p /exports/proxmox_image/dev/pts
    3. debootstrap --arch amd64 buster /exports/proxmox_image/ https://deb.debian.org/debian/
    4. mount --bind /dev/pts /exports/proxmox_image/dev/pts
    5. mount --bind /dev/ /exports/proxmox_image/dev/
    6. mount --bind /proc /exports/proxmox_image/proc
    7. mount --make-rprivate /exports/proxmox_image/
    8. mount --bind /var/run/dbus /exports/proxmox_image/var/run/dbus
    9. chroot /exports/proxmox_image
    10. apt-get install wget net-tools screen locales collectd
    11. dpkg-reconfigure locales
    12. Follow the steps (start at "Install Proxmox VE") at https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster (a rough sketch follows below)
      1. We do not need to install GRUB, since the image is booted via PXE rather than from disk
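
For reference, at the time of writing the linked guide amounts to roughly the following inside the chroot (the Proxmox wiki page above is authoritative and may have changed; the grub/bootloader steps are skipped as noted):

  echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
  wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
  apt-get update && apt-get full-upgrade
  apt-get install proxmox-ve postfix open-iscsi
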
    13. set root password; passwd
    14. implement noclear for getty/inittab;
      1. mkdir -p /etc/systemd/system/getty@tty1.service.d/
      2. Edit the file /etc/systemd/system/getty@tty1.service.d/noclear.conf and add the contents;

[Service]
TTYVTDisallocate=no

    15. Remember: /etc/hosts needs a valid hostname entry for your IP address (example below)
      1. This is managed with the ATU plugin
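
For example, an /etc/hosts entry might look like this (address and hostname are placeholders for your node's real values):

  192.168.1.101   pve-node1.example.com   pve-node1
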
    16. Exit the chroot; type exit
    17. Unmount the binds;
      1. umount /exports/proxmox_image/dev/pts
      2. umount /exports/proxmox_image/dev
      3. umount /exports/proxmox_image/proc
      4. umount /exports/proxmox_image/var/run/dbus
    18. For reference only; since Proxmox/Debian uses systemd, management of services needs to be done outside of the chroot. To find enabled services;
      1. systemctl list-unit-files --root /exports/proxmox_image/  | grep enabled
    19. These services are managed by the ATU plugin. Since the ATU plugin manages cluster initialisation, these services need to be started in an orderly fashion by the plugin, so remove them from startup. systemd is not chroot-friendly, so we need to point systemctl to the image's root directory as follows;
      1. /bin/systemctl disable pve-cluster.service corosync.service pve-guests.service --root /exports/proxmox_image/
      2. /bin/systemctl disable lvm2-lvmpolld.socket lvm2-monitor.service --root /exports/proxmox_image/
      3. /bin/systemctl disable lxc.service lxc-net.service lxcfs.service lxc-monitord.service --root /exports/proxmox_image/
      4. /bin/systemctl disable portmap.service rpcbind.service nfs-client.target --root /exports/proxmox_image/
      5. /bin/systemctl disable iscsid.service iscsi.service open-iscsi.service --root /exports/proxmox_image/
      6. /bin/systemctl disable pve-firewall.service pvefw-logger.service pvesr.timer pve-daily-update.timer --root /exports/proxmox_image/
      7. /bin/systemctl disable pve-ha-crm.service pve-ha-lrm.service pve-lxc-syscalld.service --root /exports/proxmox_image/
      8. /bin/systemctl disable pvebanner.service pvedaemon.service pvenetcommit.service --root /exports/proxmox_image/
      9. /bin/systemctl disable pveproxy.service pvestatd.service --root /exports/proxmox_image/
      10. /bin/systemctl disable qmeventd.service spiceproxy.service ssh.service --root /exports/proxmox_image/
      11. /bin/systemctl disable rsyslog.service syslog.service --root /exports/proxmox_image/
      12. /bin/systemctl disable smartd.service dm-event.socket rbdmap.service --root /exports/proxmox_image/
      13. /bin/systemctl disable ceph.target ceph-fuse.target frr.service --root /exports/proxmox_image/
      14. /bin/systemctl disable zfs.target zfs-mount.service zfs-share.service  --root /exports/proxmox_image/
      15. /bin/systemctl disable zfs-import.target zfs-import-cache.service zfs-volumes.target zfs-volume-wait.service --root /exports/proxmox_image/
      16. If you have Ceph installed, also disable;
        1. /bin/systemctl disable ceph-crash.service ceph-mds.target ceph-mgr.target ceph-mon.target ceph-osd.target remote-fs.target --root /exports/proxmox_image/
    20. If using the ATU Plugin, then disable these services: pvedaemon, pveproxy, pve-manager, pve-cluster, cman, corosync, ceph, pvestatd, qemu-server, rrdcached, spiceproxy (a verification check follows below).
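
After disabling, you can confirm nothing unwanted remains enabled in the image with a check along these lines (the grep pattern is only an illustration):

  systemctl list-unit-files --root /exports/proxmox_image/ | grep enabled | grep -E 'pve|ceph|lxc|zfs'
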
  5. Activate NFS Storage (if not already done)
    1. Under Plugins -> Storage -> NFS-Storage
    2. Add NFS Storage;
    3. Name: "openqrm-nfs"
    4. Deployment Type: "nfs-deployment"
  6. Add NFS Volume (this triggers tmpfs storage)
    1. Under Plugins -> Storage -> NFS-Storage -> Volume Admin -> Edit -> proxmox_image "ADD IMAGE"
  7. Restart the server/VM in case duplicate services were started from the chroot image initialisation
  8. Now create a TmpFs-Storage: Plugins -> Storage -> Tmpfs-storage -> Volume Admin -> New Storage
    1. Name: openqrm-tmpfs
    2. Deployment Type: tmpfs-storage
  9. Now create an Image: Components -> Image -> Add new Image -> Tmpfs-root deployment -> click edit on the "openqrm-tmpfs" -> Click "ADD NEW VOLUME"
    1. Name: pve6
    2. Size: 4 GB
    3. Description: proxmox ve 6
  10. Now you will need to link a resource to a server. A resource is a blank system/server/chassis and a Server is a configuration applied to a resource/blank system. So you can either manually add a server or, if a system has booted via dhcp/pxe, that system will be selectable and named "idle" for this next step.
    1. Click "ADD A NEW SERVER"
    2. Select the resource or manually set up a server
    3. Then select an image for the server: select the pve6 tmpfs-deployment as previously set up (leave the tick on "edit image details" after selection)
    4. Then select "Install from NAS/NFS", select the "proxmox_image" as above, then click submit
    5. Then select the kernel pve-5.11.22-6, then click submit
    6. Done
  11. Tar the Image
    1. mkdir -p /usr/share/openqrm/web/boot-service/tmpfs/
    2. cd /exports/proxmox_image
    3. tar --exclude=usr/src --exclude=var/lib/apt/lists --exclude=usr/lib/jvm --exclude=usr/share/man --exclude=usr/share/doc --exclude=usr/share/icons --numeric-owner -czf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz .
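
As a quick sanity check that the result will fit the 4 GB tmpfs volume created earlier, inspect the archive (sizes are indicative; the tarball is compressed, so the unpacked image must still fit):

  du -h /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz
  tar -tzf /usr/share/openqrm/web/boot-service/tmpfs/proxmox_image.tgz | head
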
  12. Create an NFS Image link to the TmpFS image
  13. Then boot a KVM or physical server via pxe/network boot
  14. The server will become an idle resource; once in this state, a "Server" can be provisioned.
  15. Go to "Servers" and "ADD A NEW SERVER"; you will need to;
    1. Name the Server
    2. Select the server from the list; you will note an idle resource in the list, select that.
    3. Then you will need to select the image and kernel as created above.
  16. You can then start the server; once started, the idle resource will reboot and boot the image created above
  17. Once booted you may need to restart sshd and pve-cluster
    1. systemctl restart ssh pve-cluster
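
To verify the node has come up correctly, checks along these lines can help (pvecm only reports usefully once the cluster is initialised):

  systemctl status ssh pve-cluster
  pvecm status
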

The ATU plugin manages the cluster and Ceph configuration states and boot order.


Notes/Customisations:

  1. Set the root password (otherwise you will not be able to log in)
  2. Postfix may print a warning on boot; edit /etc/mailname
  3. Create the directories /exports/custom/{fstab|modules|network}
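
A minimal sketch of these customisations (the mailname hostname is a placeholder; the first two commands run inside the image chroot, the mkdir presumably on the openQRM server; note that shell brace expansion uses commas):

  passwd
  echo pve-node1.example.com > /etc/mailname
  mkdir -p /exports/custom/{fstab,modules,network}
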


Optional:

The ATU Plugin is optimised for Proxmox cluster deployments and tmpfs server configuration sync. To use it, initialise the ATU plugin.