Administrators: Rasmus, Erki A, Lauri
4-node blade cluster, storage is backed by nas.k-space.ee.
Default SSH username is ubuntu or debian, depending on the image.
The default IPv6 is currently a single-IP abnormality, fitting for the Zoo network. You are welcome to change it as long as it doesn't conflict with existing IPs (the machines site is broken as of 2024-02, so you can't check; if it works, it's probably fine). See also: prefix delegation
If not already done after confirmation (or after an IP change), claim ownership of your machine on the members site.
Used to:
If you remove qemu-guest-agent:
ISO storage is shared and accessible to all PVE users (nas → storage → ISO Images). Integrity of images is not guaranteed!
Storage comes from nas.k-space.ee. IPv4 193.40.103.100 .. 193.40.103.199 and 2001:bb8:4008:20::100/64 .. 2001:bb8:4008:20::199/64 are reserved for Proxmox VMs with IDs 100..199.
Creating a VM for use: the VM ID determines the address, e.g. VM 155 gets 193.40.103.155.
K-SPACE_VM_USER
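The ID-to-address mapping above can be written down as a tiny helper (the `vm_addrs` function is illustrative, not something that exists on the cluster; the mapping itself comes from the reservation note above):

```shell
# Derive the reserved addresses for a Proxmox VM ID in 100..199
# (per the reservation above; illustrative helper only).
vm_addrs() {
  vmid="$1"
  echo "193.40.103.${vmid}"
  echo "2001:bb8:4008:20::${vmid}"
}
vm_addrs 155
```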
In /etc/apt/sources.list.d/, add deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription; make sure to update bullseye
to whatever release is current.

A failed migration looks like this:
2021-02-21 17:44:11 migration status: completed
2021-02-21 17:44:14 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve1' root@2001:bb8:4008:21:172:21:20:1 qm unlock 156
2021-02-21 17:44:14 ERROR: failed to clear migrate lock: Configuration file 'nodes/pve1/qemu-server/156.conf' does not exist
2021-02-21 17:44:14 ERROR: migration finished with problems (duration 00:00:32)
TASK ERROR: migration problems
Go to the shell of the sender node and run:
qm unlock <id of the stuck VM, e.g. 111>
If the command outputs Configuration file 'nodes/pve2/qemu-server/156.conf' does not exist, run it on the receiving end instead. If that succeeds, you don't have to hibernate.
Hibernate the VM in question on the sender node. (Hibernation rewrites and re-broadcasts various state of the VM; it is the least costly way of doing so.)
Resume the VM.
Re-attempt migration. This attempt will fail (cleanup).
Re-attempt migration once more.
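The recovery steps above, as a dry-run sketch. VM ID 156 and target node pve2 are placeholders taken from the log above; the `run` wrapper only echoes, so nothing executes until you drop it and run the commands on the actual PVE hosts:

```shell
# Dry-run sketch of the stuck-migration recovery (placeholders: VMID, TARGET).
VMID=156
TARGET=pve2
run() { echo "$@"; }              # print instead of executing

run qm unlock "$VMID"             # on the sender; if it says the config
                                  # "does not exist", unlock on the receiver
run qm suspend "$VMID" --todisk 1 # hibernate on the sender node
run qm resume "$VMID"
run qm migrate "$VMID" "$TARGET"  # first re-attempt fails (cleanup)
run qm migrate "$VMID" "$TARGET"  # second re-attempt goes through
```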
First block by Rasmus; a failed attempt at getting Ubuntu 24.04 working nicely.
# Ubuntu: pretty bad with systemd-boot, grub2 doesn't like newer xfs
# VM 9028 has already been created
some=noble-server-cloudimg-amd64-root # changes with release; .tar.xz is appended below
wget https://.../"$some.tar.xz"
modprobe nbd max_part=8
qemu-nbd --connect /dev/nbd0 /mnt/pve/nas/images/9028/vm-9028-disk-0.raw
gdisk /dev/nbd0
# o   (create a new empty GPT)
# n, end: +512M, type ef00   (EFI system partition)
# n   (rest of the disk, default type 8300)
# c: name partition 1: EFI, 2: cloudimg-rootfs
# w
mkfs.fat /dev/nbd0p1
mkfs.xfs /dev/nbd0p2
mkdir /mnt/cr
mount /dev/nbd0p2 /mnt/cr
tar -xf "$some.tar.xz" -C /mnt/cr
mkdir /mnt/cr/boot/efi
mount /dev/nbd0p1 /mnt/cr/boot/efi
# modify /mnt/cr/etc/fstab to use xfs
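# Example entries (assumption: PARTLABEL values match the gdisk names set above; verify with blkid):
#   PARTLABEL=cloudimg-rootfs /         xfs  defaults   0 1
#   PARTLABEL=EFI             /boot/efi vfat umask=0077 0 1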
mount -o bind /proc /mnt/cr/proc
mount -o bind /sys /mnt/cr/sys
mount -o bind /dev /mnt/cr/dev
mv /mnt/cr/etc/resolv.conf /mnt/cr_resolv.conf
cp /etc/resolv.conf /mnt/cr/etc/resolv.conf
chroot /mnt/cr
apt update && apt -y full-upgrade
apt install -y grub-efi linux-generic
grub-install /dev/nbd0
update-grub
echo unattended-upgrades unattended-upgrades/enable_auto_updates boolean true | debconf-set-selections
dpkg-reconfigure -f noninteractive unattended-upgrades
pro config set apt_news=false
apt install -y sshguard
apt install -y qemu-guest-agent
exit
umount /mnt/cr/{proc,sys,dev,boot/efi}
rm /mnt/cr/etc/resolv.conf
mv /mnt/cr_resolv.conf /mnt/cr/etc/resolv.conf
umount /mnt/cr
qemu-nbd --disconnect /dev/nbd0
rmmod nbd
mkfs.xfs /dev/sda
rsync -aHAX --numeric-ids /official/ /blank/ # trailing slashes: copy contents, not the directory itself
mount --bind /dev/ /blank/dev
mount --bind /sys /blank/sys
mount --bind /proc /blank/proc
chroot /blank
grub-install /dev/sda
pro config set apt_news=false # disables Ubuntu Pro ads
apt-get update
apt-get install sshguard unattended-upgrades -y
sshguard -w 172.16.0.0/12 # https://sshguard.net/docs.html
dpkg-reconfigure -plow unattended-upgrades # https://wiki.debian.org/UnattendedUpgrades
apt-get full-upgrade -y
exit
reboot
Reset machine-id:
rm /etc/machine-id # might be enough
systemd-machine-id-setup # do not run for template, only after cloning
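For templates specifically, a common alternative (a sketch, not the wiki's prescribed method; ROOT is a placeholder for the target root, e.g. the mounted image while preparing a template) is to leave /etc/machine-id present but empty — systemd treats an empty file as uninitialized and generates a fresh id on first boot, so every clone gets its own:

```shell
# Sketch: reset machine-id under $ROOT (placeholder; set to the real root).
ROOT="${ROOT:-/tmp/demo-root}"
mkdir -p "$ROOT/etc" "$ROOT/var/lib/dbus"
truncate -s 0 "$ROOT/etc/machine-id"                   # empty => regenerated at boot
ln -sf /etc/machine-id "$ROOT/var/lib/dbus/machine-id" # keep the D-Bus id in sync
```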
/mnt/pve/nas/jc/adding_a_node.md