This article will show you how simple it is to enable GPU passthrough on your Proxmox VE 6.0 host. My best experiences have been with AMD GPUs, specifically the AMD Radeon Vega 56 and the AMD Radeon RX 580. Here we'll use an integrated Intel GPU, though, in an old Intel NUC. This guide can also be used to pass through other devices, such as NICs.

This article assumes your hardware has the necessary support for virtualization, IOMMU, VFIO, and so on, and that your hardware is running Proxmox VE 6.0.

The process for enabling GPU passthrough on other Debian-based Linux distributions (including Debian itself) should be very similar.

Retrieve GPU device IDs

Run the following command to get the device IDs (note: if lspci is missing on your system, you can install it by running sudo apt install pciutils).

$ sudo lspci -nnk | grep "VGA\|Audio"

It should yield a result like this:

00:02.0 VGA compatible controller [0300]: Intel Corporation Haswell-ULT Integrated Graphics Controller [8086:0a16] (rev 09)
00:03.0 Audio device [0403]: Intel Corporation Haswell-ULT HD Audio Controller [8086:0a0c] (rev 09)
	Subsystem: Intel Corporation Haswell-ULT HD Audio Controller [8086:2054]
00:1b.0 Audio device [0403]: Intel Corporation 8 Series HD Audio Controller [8086:9c20] (rev 04)
	Subsystem: Intel Corporation 8 Series HD Audio Controller [8086:2054]

What we are interested in are the bracketed vendor:device values, in this case 8086:0a16 and 8086:2054. Make sure that the device type is VGA compatible controller. The first value is the device ID for the GPU and the second is the device ID for the audio device. It’s not strictly needed in our example case, but on some systems, for example with the AMD GPUs mentioned above, you’ll have to pass through both the GPU and the associated audio device for it to work properly.
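If you have many devices, the bracketed vendor:device pair can be pulled out of an lspci line with a small sed pattern. A self-contained sketch (the lspci output line is hardcoded here so the snippet runs anywhere):

```shell
# One line of `lspci -nnk` output, hardcoded so the sketch is self-contained.
line='00:02.0 VGA compatible controller [0300]: Intel Corporation Haswell-ULT Integrated Graphics Controller [8086:0a16] (rev 09)'

# Extract the last [vendor:device] pair (four hex digits, colon, four hex digits).
dev_id=$(echo "$line" | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p')
echo "$dev_id"   # 8086:0a16
```

On the host you would feed real lspci output into the same sed expression instead of the hardcoded sample.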

Enable device passthrough

Load the modules vfio, vfio_iommu_type1, vfio_pci, and vfio_virqfd at boot, and enable VFIO by adding the device IDs as options for the VFIO module in modprobe. Note that sudo echo "…" > file does not work as you might expect: the redirection is performed by your unprivileged shell, not by sudo, so we pipe through sudo tee instead.

$ echo "vfio" | sudo tee \
  /etc/modules-load.d/vfio.conf
$ echo "vfio_iommu_type1" | sudo tee -a \
  /etc/modules-load.d/vfio.conf
$ echo "vfio_pci" | sudo tee -a \
  /etc/modules-load.d/vfio.conf
$ echo "vfio_virqfd" | sudo tee -a \
  /etc/modules-load.d/vfio.conf
$ echo "options vfio-pci ids=8086:0a16,8086:2054" | sudo tee \
  /etc/modprobe.d/vfio.conf
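The four module names above can also be written in one go. A small sketch; the printed content is exactly what should end up in /etc/modules-load.d/vfio.conf:

```shell
# The four VFIO modules, emitted one per line by a single printf.
vfio_modules='vfio vfio_iommu_type1 vfio_pci vfio_virqfd'

# On the Proxmox host you would pipe this into:
#   sudo tee /etc/modules-load.d/vfio.conf
content=$(printf '%s\n' $vfio_modules)
echo "$content"
```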

Update the initramfs images using update-initramfs:

$ sudo update-initramfs -u -k all

Edit the GRUB bootloader configuration by running:

$ sudo vi /etc/default/grub

Make sure that the line that starts with GRUB_CMDLINE_LINUX_DEFAULT has intel_iommu=on (if using an Intel CPU) or amd_iommu=on (if using an AMD CPU). It should look like this on a newly installed Proxmox VE 6.0 host:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
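A quick way to sanity-check the line before rebooting is a simple pattern match. A sketch, with the sample line hardcoded; on the host you would read it with grep ^GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub instead:

```shell
# Sample line from /etc/default/grub, hardcoded so the sketch is self-contained.
grub_line='GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"'

# Accept either the Intel or the AMD IOMMU flag.
case "$grub_line" in
  *intel_iommu=on*|*amd_iommu=on*) result="IOMMU flag present" ;;
  *)                               result="IOMMU flag missing" ;;
esac
echo "$result"
```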

Then apply your new GRUB bootloader configuration by running:

$ sudo update-grub

And finally, run sudo reboot. Next, we’ll add the GPU to an existing virtual machine using the Proxmox VE 6.0 web interface.

Add a GPU to a virtual machine

Select your virtual machine in the web interface under your newly configured host and power it down. Then go to “Hardware”, open the “Add” menu, and choose “PCI Device”.

In this example the GPU is called “Haswell-ULT Integrated Graphics Controller” (remember the lspci -nnk command from before? The name is the same!). Select the GPU and check the boxes for “All Functions” and “Primary GPU”, then press the “Add” button.

Also, the Proxmox VE documentation recommends setting q35 as the machine type, using OVMF firmware instead of SeaBIOS, and PCIe passthrough instead of PCI.
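With those recommendations applied, the relevant lines in the VM’s configuration file should end up looking roughly like this. This is an illustrative sketch, assuming VM ID 100 (so the file would be /etc/pve/qemu-server/100.conf) and the GPU address from our example; your IDs and option values will differ:

```
bios: ovmf
machine: q35
hostpci0: 00:02.0,pcie=1,x-vga=1
```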

Last Words

There we are! Not so hard, right? Power up and enjoy your fast GPU-enhanced VM! If it doesn’t work, please see the Troubleshooting section below and the Proxmox VE documentation; if that doesn’t help either, please contact me and I’ll try to help!

Troubleshooting

  1. If you are having issues with stability or booting, try disabling memory ballooning for the VM. I’ve had to do that on certain occasions.

  2. If you’re using ZFS on root and booting using UEFI, systemd-boot is used instead of GRUB. After rebooting, run the following to check whether the kernel picked up the IOMMU kernel parameters:

    $ sudo cat /proc/cmdline

If the IOMMU parameter is missing, edit /etc/kernel/cmdline so that it contains your existing kernel parameters with the IOMMU parameter appended at the end. On Intel:

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on

On AMD:

root=ZFS=rpool/ROOT/pve-1 boot=zfs amd_iommu=on

Finally update systemd-boot:

$ sudo pve-efiboot-tool refresh
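As a footnote to item 1, memory ballooning can be disabled by setting the following in the VM’s configuration file (e.g. /etc/pve/qemu-server/100.conf; the VM ID is illustrative), or via the equivalent “Memory” setting in the web interface:

```
balloon: 0
```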

Revision

2019-10-25 Updated the troubleshooting section on what to do if it doesn’t work when running Proxmox VE with ZFS on root while booting using UEFI. Also added suggestions on VM settings. Thanks to thenickdude for the input!