Table of Contents
- Installing Proxmox on a dedicated server
- Post-installation configuration
- Setting up ZFS full-disk encryption
- Setting up dropbear for unlocking remotely
Proxmox VE is a very good virtualization platform, but it's lacking a feature that's pretty important in my opinion: full-disk encryption.
Although it is not supported out of the box, it can still be achieved since Proxmox is based on Debian, so let's see how to set it up!

Although this article is mainly aimed at Hetzner dedicated servers, the process is similar for other providers or even bare-metal servers you host yourself.
Installing Proxmox on a dedicated server
First of all, let's install Proxmox Virtual Environment on our newly rented dedicated server. But to install a new OS on a server, we need physical access to it or some kind of remote access, right? Well, on Hetzner, one can book an engineer to set up a KVM console for a server for a few hours, but that's not very practical and it is only free the first time we ask for it. So instead, we are going to use the built-in Hetzner rescue system, which contains tools like curl, qemu and zfs, as our kind of remote access.

Once booted into the rescue system shell, we can start by downloading the Proxmox VE ISO image:
curl https://enterprise.proxmox.com/iso/proxmox-ve_8.4-1.iso -o proxmox-ve.iso
And now, we can start QEMU, passing through both our NVMe drives and adding the Proxmox ISO to boot from:
qemu-system-x86_64 -daemonize -enable-kvm -m 4096 -cpu host -smp 8 \
  -drive file=/dev/nvme0n1,format=raw,cache=none,index=0,media=disk \
  -drive file=/dev/nvme1n1,format=raw,cache=none,index=1,media=disk \
  -cdrom proxmox-ve.iso -boot d \
  -vnc :0,password=on -k fr \
  -monitor telnet:127.0.0.1:4444,server,nowait
We can then set a VNC password to make sure that nobody except us can connect to the video console during our installation:
echo "change vnc password <VNC_PASSWORD>" | nc -q 1 127.0.0.1 4444Now, we can connect to our dedicated server public IP on the port 5900, using a VNC software and the password we've just set, and start installing Proxmox as if we were physically in front of the server!

We can follow the Proxmox installation steps as usual, but when asked about the storage configuration, we need to select RAID 1, which is the ZFS mirror setup, with both our NVMe drives as RAID members.

After that, the installation can continue until we are greeted by the login prompt, meaning that it was successful!

We can now shut down Proxmox and go back to the rescue system prompt to start the post-installation configuration.
To forcefully stop the QEMU Virtual Machine without using shutdown, we can send the quit command to the QEMU process: printf "quit\n" | nc 127.0.0.1 4444.
Post-installation configuration
Let's start our Proxmox server back up, but this time without the Proxmox ISO and with a port forward option mapping port 2222 on our host IP to the Proxmox guest's SSH port, so that we can SSH into it:
qemu-system-x86_64 -daemonize -enable-kvm -m 4096 -cpu host -smp 8 \
  -drive file=/dev/nvme0n1,format=raw,cache=none,index=0,media=disk \
  -drive file=/dev/nvme1n1,format=raw,cache=none,index=1,media=disk \
  -vnc :0,password=on -k fr \
  -monitor telnet:127.0.0.1:4444,server,nowait \
  -net user,hostfwd=tcp::2222-:22 -net nic
echo "change vnc password <VNC_PASSWORD>" | nc -q 1 127.0.0.1 4444
Now, let's SSH into the Proxmox host using port 2222 on the public IP of our dedicated server.
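From our own machine, that is just a regular SSH session on the forwarded port, something along the lines of:
ssh -p 2222 root@<the server public IP>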
Configuring basic networking
On the rescue system, we need to take note of the real name of the network interface used by the server to connect to the internet. It can be obtained by running ip a and finding the interface which has the dedicated server's public IP tied to it. In general, it's something like enpXXsX or enpXXsXfX.
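For illustration, one quick (and by no means only) way to narrow it down from the rescue shell:
# Show a brief summary of all interfaces and keep the line holding the server's public IP.
ip -brief address | grep "<the server public IP>"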
With that, we can go back to the QEMU Proxmox install and make a backup of the current /etc/network/interfaces file somewhere safe and accessible, like /etc/network/interfaces.bak, as we will need it again later in this article to get internet working when using the rescue+qemu system.
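The backup itself is nothing more than a copy, for example:
cp /etc/network/interfaces /etc/network/interfaces.bak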
Then we can set the server's public IP as the bridge IP in the /etc/network/interfaces configuration file, so that when the server boots it has networking and we can log in directly through the web interface:
auto lo
iface lo inet loopback

iface <the public network interface> inet manual

auto vmbr0
iface vmbr0 inet static
    address <the server public IP>
    netmask <the server public IP netmask>
    gateway <the server network gateway>
    bridge-ports <the public network interface>
    bridge-stp off
    bridge-fd 0

Information about the dedicated server's public IP and its gateway or netmask can be obtained on the Hetzner administration panel.
Once this is done, we can power off the Proxmox server and reboot out of the rescue system. If the network configuration was correct, we should be able to connect to Proxmox by going to https://<the server public IP>:8006 in a web browser.
On this newly installed Proxmox server, we can update its DNS servers, then enable or disable the enterprise repositories as needed, and do a full upgrade of all the packages to make sure that everything is up-to-date before we proceed with the actual data encryption.
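As a rough sketch of what that can look like on a host without a subscription (file names and the bookworm suite match Proxmox VE 8; adjust them to your version and double-check the repository layout on your own install):
# Comment out the enterprise repositories, which require a subscription.
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list
# Add the no-subscription repository instead.
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
# Refresh the package lists and upgrade everything.
apt update && apt dist-upgrade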

At this point, do not start installing anything VM or LXC-related.
Setting up ZFS full-disk encryption
Once the server is in a clean state, we can reboot back into the rescue system and run the zfs command once to install OpenZFS on the rescue system:
root@rescue ~ # zfs list

The Hetzner Rescue System does not come with preinstalled ZFS support, however, we will attempt to compile and install the latest release for you.
Please read the information below thoroughly before entering any response.

ATTENTION
This script will attempt to install the current OpenZFS release which is available in the OpenZFS git repository to the Rescue System.
If this script fails, do not contact Hetzner Support, as it is provided AS-IS and Hetzner will not support the installation or usage of OpenZFS due to License incompatiblity (see below).
Due to github.com limitations, this script only works via IPv4.

Licenses of OpenZFS and Linux are incompatible
OpenZFS is licensed under the Common Development and Distribution License (CDDL), and the Linux kernel is licensed under the GNU General Public License Version 2 (GPL-2).
While both are free open source licenses they are restrictive licenses. The combination of them causes problems because it prevents using pieces of code exclusively available under one license with pieces of code exclusively available under the other in the same binary.
Please be aware that distributing of the binaries may lead to infringing.

Press y to accept this.

Here, we can press y to accept said terms, and once this is done, we can import the Proxmox ZFS root pool:
zpool import -f rpool
With the pool imported, we can start by setting a few handy options:
zpool set autoexpand=on rpool
zpool set autotrim=on rpool
zpool set failmode=wait rpool
Now starts the game of ZFS musical chairs where we:
- take a recursive ZFS snapshot of a pool
- send a copy of this ZFS snapshot elsewhere
- recursively destroy said pool
- recreate the pool with encryption enabled and configured
- bring every piece of the snapshotted data back to its original location
- set any option like mountpoint if needed
- recursively destroy the pool copy
- export the pool
First, we need to do this on the root pool, before moving on to the child pools.
For the ROOT pool, these commands will do the job:
zfs snapshot -r rpool/ROOT@copy
zfs send -R rpool/ROOT@copy | zfs receive rpool/copyroot
zfs destroy -r rpool/ROOT
zfs create -o acltype=posix -o atime=off -o compression=zstd-3 -o checksum=blake3 -o dnodesize=auto -o encryption=on -o keyformat=passphrase -o overlay=off -o xattr=sa rpool/ROOT
zfs send -R rpool/copyroot/pve-1@copy | zfs receive -o encryption=on rpool/ROOT/pve-1
zfs destroy -r rpool/copyroot
zfs set mountpoint=/ rpool/ROOT/pve-1
Then, while staying in the rescue system, we can start our Proxmox install using the QEMU command we already used earlier:
qemu-system-x86_64 -daemonize -enable-kvm -m 4096 -cpu host -smp 8 \
  -drive file=/dev/nvme0n1,format=raw,cache=none,index=0,media=disk \
  -drive file=/dev/nvme1n1,format=raw,cache=none,index=1,media=disk \
  -vnc :0,password=on -k fr \
  -monitor telnet:127.0.0.1:4444,server,nowait \
  -net user,hostfwd=tcp::2222-:22 -net nic
echo "change vnc password <VNC_PASSWORD>" | nc -q 1 127.0.0.1 4444
As the root ZFS pool, which contains our Proxmox install, is now encrypted, we need to decrypt it before the server can continue booting. So we connect to it via the VNC console as we did earlier and unlock its boot drive using the passphrase we gave to the zfs create command.
Once unlocked and booted, we can log in through the console and swap in our backed-up interfaces file to get internet working (followed by a systemctl restart networking), then connect to the Proxmox "VM" over SSH.
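A minimal sketch of that swap, where interfaces.public is just a name I am making up to stash the public-IP configuration we will need again at the end:
# Keep the public-IP configuration for the final reboot, then restore the QEMU-era file.
cp /etc/network/interfaces /etc/network/interfaces.public
cp /etc/network/interfaces.bak /etc/network/interfaces
systemctl restart networking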
There, we need to create the passphrase file for the data pool, which is responsible for storing the VM disks on a default Proxmox ZFS installation.
tr -dc '[:alnum:]' < /dev/urandom | head -c 64 > /.data.key
chmod 400 /.data.key
chattr +i /.data.key
Then we can start encrypting the data pool:
zfs snapshot -r rpool/data@copy
zfs send -R rpool/data@copy | zfs receive rpool/copydata
zfs destroy -r rpool/data
zfs create -o acltype=posix -o atime=off -o compression=zstd-3 -o checksum=blake3 -o dnodesize=auto -o encryption=on -o keyformat=passphrase -o keylocation=file:///.data.key -o overlay=off -o xattr=sa rpool/data
If we have any VMs configured on Proxmox, we need to transfer them as well:
# Use "zfs list" to list all VM disks and transfer them accordingly. zfs send -R rpool/copydata/vm-100-disk-0@copy | zfs receive -o encryption=on rpool/data/vm-100-disk-0We can delete all @copy pools using the zfs list and zfs destroy commands.
And finally, once all the VM disks have been transferred, we can destroy the copydata copy:
zfs destroy -r rpool/copydata
We can then do a similar operation with the var-lib-vz pool, which by default contains the VM ISOs and LXC images, but this time we copy its data manually:
zfs snapshot -r rpool/var-lib-vz@copy
zfs send -R rpool/var-lib-vz@copy | zfs receive rpool/copy-var
umount /var/lib/vz
zfs destroy -r rpool/var-lib-vz
tr -dc '[:alnum:]' < /dev/urandom | head -c 64 > /.var-lib-vz.key
chmod 400 /.var-lib-vz.key
chattr +i /.var-lib-vz.key
zfs create -o acltype=posix -o atime=off -o compression=zstd-3 -o checksum=blake3 -o dnodesize=auto -o encryption=on -o keyformat=passphrase -o keylocation=file:///.var-lib-vz.key -o overlay=off -o xattr=sa rpool/var-lib-vz
mkdir /mnt/varlibvz
zfs set mountpoint=/mnt/varlibvz rpool/copy-var
mount -a
mv /mnt/varlibvz/images/ /var/lib/vz/
mv /mnt/varlibvz/dump/ /var/lib/vz/
mv /mnt/varlibvz/template/iso/* /var/lib/vz/template/iso/
umount /mnt/varlibvz
rmdir /mnt/varlibvz
zfs destroy -r rpool/copy-var
Once all of this is done, we should have all pools encrypted with their own key, which can be confirmed using the zfs get encryption command:
# zfs get encryption
NAME              PROPERTY    VALUE        SOURCE
rpool             encryption  off          default
rpool/ROOT        encryption  aes-256-gcm  -
rpool/ROOT/pve-1  encryption  aes-256-gcm  -
rpool/data        encryption  aes-256-gcm  -
rpool/var-lib-vz  encryption  aes-256-gcm  -
I would highly suggest backing up the /.data.key and /.var-lib-vz.key files somewhere safe, as these are the only encryption keys which can unlock these pools. If we lose a key, we lose the pool.
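A minimal sketch of such a backup over SSH, where backup-host and the target directory are placeholders for a machine of yours that is not part of this setup:
# Copy both keyfiles to another machine we control (the remote directory must already exist).
scp /.data.key /.var-lib-vz.key user@backup-host:proxmox-keys/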
In order to unlock these two pools automatically with their keyfiles at boot, we can create the /etc/systemd/system/zfs-load-keys.service service file with the following content:
[Unit]
Description=Load encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zfs load-key -a

[Install]
WantedBy=zfs-mount.service

And then we can enable the service to start automatically at boot:
systemctl enable zfs-load-keys
We're nearly there! Now we just need to be able to unlock the root pool remotely!

Setting up dropbear for unlocking remotely
Now that everything is encrypted and set up, we need a way to unlock the root pool remotely, and this is where dropbear comes in clutch.
First, let's install it:
apt install --no-install-recommends dropbear-initramfs
Then, for its configuration, we need to give it our authorized SSH keys in the /etc/dropbear/initramfs/authorized_keys configuration file.
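For example, appending a public key by hand (the key below is obviously a placeholder for your own):
# Authorize our SSH public key for the initramfs dropbear instance.
echo "ssh-ed25519 AAAA...our-public-key... user@workstation" >> /etc/dropbear/initramfs/authorized_keys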
IP=<IP Address>::<Gateway>:<Netmask>:<Hostname>And finally, we can update the initramfs with the following command:
And finally, we can update the initramfs with the following command:
update-initramfs -u
Let's put back the real interfaces file containing the actual server public IP, keep the other file somewhere safe in case we ever need to boot back into QEMU to debug something later on, then power off Proxmox and reboot the server into its real OS.
After a while, we should be able to SSH into a BusyBox environment, in which we can run the zfsunlock command to unlock the root pool.
BusyBox v1.35.0 (Debian 1:1.35.0-4+b4) built-in shell (ash)
Enter 'help' for a list of built-in commands.

~ # zfsunlock
Unlocking encrypted ZFS filesystems...
Enter the password or press Ctrl-C to exit.
🔐 Encrypted ZFS password for rpool/ROOT:
Password for rpool/ROOT accepted.
Unlocking complete. Resuming boot sequence...
Please reconnect in a while.
~ # Connection to <> closed by remote host.

And after a while, the server should have fully booted and be accessible! Congratulations, this server is now fully encrypted at rest, using both ZFS and Proxmox!