The virtualization platform Proxmox VE was released in version 6.4 at the end of April. In addition to various bug fixes, new features have been added. Particularly noteworthy among them:
- Live restore, which can be triggered via the GUI or the qmrestore command (see the example after this list)
- Single file restore via the additionally required Proxmox Backup Server
- Support for Ceph Octopus 15.2.11
- Support for Ceph Nautilus 14.2.20
- various improvements for KVM/QEMU
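A hedged sketch of the new live restore from the command line, assuming a VM backup stored on a Proxmox Backup Server storage (the storage name, snapshot timestamp, and VM ID are illustrative):
# qmrestore pbs-storage:backup/vm/105/2021-04-28T10:00:00Z 105 --live-restore 1
The VM is started immediately from the backup while the remaining data is fetched in the background, so the guest is usable before the restore has fully completed.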
The new Proxmox VE version is based on Debian Buster 10.9 but uses a newer long-term support kernel 5.4; optionally, kernel 5.11 can be installed.
Further improvements in Proxmox VE 6.4
- Proxmox VE API Proxy Daemon: pveproxy listens on both IPv4 and IPv6 addresses by default. The listening IP addresses are configurable in /etc/default/pveproxy. This can help to limit the exposure to the outside, e.g., by only binding to an internal IP.
- Container: Appliance templates and support for Alpine Linux 3.13, Devuan 3, Fedora 34, and Ubuntu 21.04. Improved handling of cgroup v2 (control groups).
- External metric server: In Proxmox VE, you can define external metric servers, which provide various statistics about your hosts, virtual guests, and storages. The new version supports the InfluxDB HTTP(s) API and instances of InfluxDB behind a reverse proxy.
- Improved ISO installer: The boot setup for ZFS installations is now better equipped for legacy hardware. Installations on ZFS now install the boot-loader to all selected disks, instead of only to the first mirror vdev, improving the experience with hardware where the boot-device is not easily selectable. Before installation, an NTP synchronization is attempted.
- Storage: Proxmox VE 6.4 now allows for adding backup notes on any CephFS, CIFS, or NFS storage. Users can also configure a namespace for accessing a Ceph pool.
- VMs (KVM/QEMU):
- Support pinning a VM to a specific QEMU machine version.
- Automatically pin VMs with Windows as OS type to the current QEMU machine on VM creation. This improves stability and guarantees that the hardware layout stays the same, even with newer QEMU versions.
- cloud-init: re-add Stateless Address Autoconfiguration (SLAAC) option to IPv6 configuration.
- Enhancements to the GUI
- Show current usage of host memory and CPU resources by each guest in the node search-view.
- Use binary (1 KiB equals 1024 B instead of 1 KB equals 1000 B) as base in the node and guest memory usage graphs, ensuring it is consistent with the current usage gauge.
- Firewall rules: Columns are more responsive and flexible by default.
Source: Proxmox Virtual Environment 6.4 available
Proxmox VE 6.4 Release Notes
Released 28 April 2021
- Based on Debian Buster (10.9)
- Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20
- Kernel 5.4 default
- Kernel 5.11 opt-in
- LXC 4.0
- QEMU 5.2
- ZFS 2.0.4 – new major version
- Virtual Machines (KVM/QEMU):
- Support pinning a VM to a specific QEMU machine version (see the sketch after this list).
- Automatically pin VMs with Windows as OS type to the current QEMU machine on VM creation.
- Address issues with hanging QMP commands, which caused VMs to freeze on disk resize and in other non-deterministic edge cases.
- cloud-init: re-add Stateless Address Autoconfiguration (SLAAC) option to IPv6 configuration.
- Improve output in task log for mirroring drives and VM live-migration.
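A hedged command-line sketch of the two new VM options above, assuming VM ID 100 (the machine type shown is illustrative):
# qm set 100 --machine pc-i440fx-5.1
# qm set 100 --ipconfig0 ip6=auto
The first command pins the VM to a fixed QEMU machine version so its virtual hardware layout stays stable across QEMU upgrades; the second re-enables SLAAC for the cloud-init IPv6 configuration.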
- Container
- Improved cgroup v2 (control group) handling.
- Support and provide appliance templates for Alpine Linux 3.13, Devuan 3, Fedora 34, Ubuntu 21.04.
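The new appliance templates can be fetched with the existing pveam tool; a minimal sketch (the exact template file name varies and is illustrative here):
# pveam update
# pveam available --section system
# pveam download local alpine-3.13-default_20210419_amd64.tar.xz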
- Backup and Restore
- Implement unified single-file restore for virtual machine (VM) and container (CT) backup archives located on a Proxmox Backup Server, via proxmox-file-restore (see the sketch after this list).
- Live restore of VM backup archives located on a Proxmox Backup Server.
- Consistent handling of excludes for container backups across the different backup modes and storage types.
- Container restores now default to the privilege setting from the backup archive.
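A hedged sketch of the new proxmox-file-restore tool, assuming a PBS repository root@pam@192.168.1.20:store1 and an illustrative VM snapshot (authentication, e.g. via PBS_PASSWORD, is assumed, and the archive paths depend on the backed-up disks):
# proxmox-file-restore list vm/105/2021-04-28T10:00:00Z / --repository root@pam@192.168.1.20:store1
# proxmox-file-restore extract vm/105/2021-04-28T10:00:00Z /drive-scsi0.img.fidx/etc/hostname /tmp/hostname --repository root@pam@192.168.1.20:store1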
- Ceph Server
- Improve integration for placement group (PG) auto-scaler status and configuration. Allow configuration of the CRUSH rule, Target Size, and Target Ratio settings, and automatically calculate the optimal number of PGs based on this (see the sketch below).
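At the Ceph level, the settings the GUI now exposes correspond roughly to the following commands; a hedged sketch for an illustrative pool named vm-pool:
# ceph osd pool set vm-pool pg_autoscale_mode on
# ceph osd pool set vm-pool target_size_ratio 0.5
# ceph osd pool set vm-pool crush_rule replicated_rule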
- Storage
- Support editing of backup notes on any CephFS, CIFS or NFS storage.
- Support configuring a namespace for accessing a Ceph pool (see the storage.cfg sketch after this list).
- Improve ZFS pool handling by performing separate checks for imported and mounted pools.
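A hedged /etc/pve/storage.cfg sketch for an RBD storage using a Ceph namespace (storage name, pool, and namespace are illustrative):
rbd: ceph-vms
    pool vm-pool
    namespace pve
    content images
    krbd 0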
- Disk Management
- Return partitions and display them in tree format.
- Improve detection of disk and partition usage.
- Enhancements in the web interface (GUI)
- Show current usage of host memory and CPU resources by each guest in a node’s search-view.
- Use binary (1 KiB equals 1024 B instead of 1 KB equals 1000 B) as base in the node and guest memory usage graphs, providing consistency with the units used in the current usage gauge.
- Make columns in the firewall rule view more responsive and flexible by default.
- Improve Ceph pool view, show auto-scaler related columns.
- Support editing existing Ceph pools, adapting the CRUSH rule, Target Size, and Target Ratio, among other things.
- External metric servers:
- Support the InfluxDB 1.8 and 2.0 HTTP(s) API.
- Allow use of InfluxDB instances placed behind a reverse-proxy.
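A hedged /etc/pve/status.cfg sketch for an InfluxDB 2.0 instance reached via its HTTP(s) API (server address, organization, bucket, and token are illustrative, and the exact option names should be checked against the Proxmox VE documentation):
influxdb: monitoring
    server 192.168.1.10
    port 8086
    influxdbproto https
    organization proxmox
    bucket pve-metrics
    token <your-api-token>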
- Proxmox VE API Proxy Daemon (pveproxy)
- Make the listening IP configurable (in /etc/default/pveproxy). This can help to limit exposure to the outside (e.g. by only binding to an internal IP). pveproxy now listens on both IPv4 and IPv6 by default.
- Installation ISO:
- Installation on ZFS:
- if booted with legacy BIOS (non-UEFI), now also copy the kernel images to the second VFAT partition (ESP), allowing the system to boot from there with grub, making it possible to enable all ZFS features on such systems.
- set up the boot-partition and boot-loader to all selected disks, instead of only to the first mirror vdev, improving the experience with hardware where the boot-device is not easily selectable.
- The installer environment attempts to do an NTP time synchronization before actually starting the installation, avoiding telemetry and cluster issues, if the RTC had a huge time-drift.
- pve-zsync
- Improved snapshot handling allowing for multiple sync intervals for a source and destination pair.
- Better detection of aborted syncs, which previously caused pve-zsync to stop the replication.
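A hedged sketch of two jobs that use the new multi-interval handling for the same source and destination pair (VM ID, target host/dataset, and job names are illustrative):
# pve-zsync create --source 100 --dest 192.168.1.2:tank/backup --name hourly --maxsnap 24
# pve-zsync create --source 100 --dest 192.168.1.2:tank/backup --name daily --maxsnap 7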
- Firewall
- Fixes in the API schema to prevent storing rules with a big IP address list, which get rejected by iptables-restore due to its size limitations. We recommend creating and using IPSets for that use case (see the sketch below).
- Improvements to the command-line parameter handling.
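A hedged sketch of creating such an IPSet at the cluster level with pvesh (the set name and addresses are illustrative):
# pvesh create /cluster/firewall/ipset --name blocklist
# pvesh create /cluster/firewall/ipset/blocklist --cidr 192.0.2.0/24
# pvesh create /cluster/firewall/ipset/blocklist --cidr 198.51.100.7
A firewall rule can then reference +blocklist instead of carrying the full address list.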
Known Issues
- Please avoid using zpool upgrade on the "rpool" (root pool) itself when upgrading to ZFS 2.0 on a system booted by GRUB in legacy mode, as that will break pool import by GRUB. See the documentation for determining the bootloader used, if you are unsure. Setups installed with the Proxmox VE 6.4 ISO are not affected, as there the installer always sets up an easier-to-handle, vfat-formatted ESP to boot from. See the ZFS: Switch Legacy-Boot to Proxmox Boot Tool article about how to switch over to a safer boot variant for legacy GRUB-booted setups with ZFS as root filesystem (a quick boot-mode check is sketched below).
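A quick, hedged way to check whether the system booted via UEFI or legacy BIOS before touching the pool:
# test -d /sys/firmware/efi && echo UEFI || echo "legacy BIOS"
On setups using the new boot tooling, proxmox-boot-tool status additionally shows which ESPs are configured.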
- New default bind address for pveproxy and spiceproxy, unifying the default behavior with Proxmox Backup Server: with the LISTEN_IP now being configurable, the daemon binds to both wildcard addresses (IPv4 0.0.0.0:8006 and IPv6 [::]:8006) by default. Should you wish to prevent it from listening on IPv6, simply configure the IPv4 wildcard as LISTEN_IP in /etc/default/pveproxy:
LISTEN_IP="0.0.0.0"
Additionally, the logged IP address format changed for IPv4 in pveproxy's access log (/var/log/pveproxy/access.log). IPv4 clients are now logged as IPv4-mapped IPv6 addresses. Instead of:
192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854
the line now looks like:
::ffff:192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854
If you want to restore the old logging format, also set LISTEN_IP="0.0.0.0"
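For a changed LISTEN_IP to take effect, the proxy daemons have to be restarted (standard systemd units are assumed):
# systemctl restart pveproxy spiceproxy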
- Resolving the Ceph `insecure global_id reclaim` health warning: with Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20, we released an update to fix a security issue (CVE-2021-20288) where Ceph was not ensuring that reconnecting/renewing clients were presenting an existing ticket when reclaiming their global_id value. Updating from an earlier version will result in the above health warning. See the forum post for more details and instructions to address this warning.
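Once all Ceph daemons and clients are updated, the warning can be cleared by disallowing the insecure reclaim, following the upstream guidance:
# ceph health detail
# ceph config set mon auth_allow_insecure_global_id_reclaim false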
Source: Roadmap – Proxmox VE
To update the kernel from version 6.3 to the PVE kernel 5.11 of version 6.4, run the following:
# apt install pve-kernel-5.11
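After a reboot into the new kernel, the running version can be verified with:
# uname -r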