QubesOS GPU Support: Finally Got It Working
Remember this post? Back then I said there was no solution for using your GPU inside VMs on Qubes. Well, I found a way. Now cracking/rendering on Qubes might be more realistic than ever.
This is actually a huge deal. Hashcat and John benefit A LOT from having a GPU to work with, speeding up hash cracking by up to 500x compared to a CPU.
Also, now you’ll finally be able to use local LLMs on Qubes, which adds bonus points for your privacy!
This post provided all the instructions that allowed me to figure it out, but I still hit a few dead ends along the way, hence this write-up explaining that guide in more detail.
What makes it so much easier than the original method is that VirtualGL lets a program render OpenGL content on a GPU and then sends the output to another machine or virtual environment. In this case, it’s used to offload the rendering to the secondary GPU and stream the frames back into the VM’s normal window in Qubes.
[ App inside GPU VM ]
↓
[ VirtualGL grabs rendered frame via EGL ]
↓
[ Sends it back into the AppVM’s X server ]
↓
[ Xen GUI proxy hands that image to dom0 ]
↓
[ You see it in a window like normal ]
Getting started
I need to say right away that for this to work you need both integrated graphics (provided by your CPU) and a discrete video card (duh!).
You will ask dom0 to free your GPU, and your system will render dom0 and all other VMs using the integrated graphics. Your GPU will then be free to be attached to the VM of your choice, and all you'll have to do afterwards is take care of drivers and VirtualGL, which is easy.
You can only assign your GPU to one VM at a time in Qubes due to how Xen handles PCI passthrough.
If your CPU doesn't have integrated graphics, like Intel's 'F' series, you can give up right here. Sorry, it's not happening.
Asking dom0 to give our GPU back
First of all, to use the GPU in a VM, you need to take it from dom0. dom0 doesn't actually use the discrete GPU - it usually just sits idle - but dom0 won't let us touch it.
Why?
Because Qubes OS, by default, binds all hardware to dom0. If you don’t explicitly tell dom0 to ignore the discrete GPU…
👉 dom0 will grab it and you can’t pass it to any VM.
You need to know your video card's PCI address and make sure the card isn't actively used, because if you hide the graphics device your system uses to render the GUI, you won't have any display - just a black screen and regret.
Run the following in dom0:
lspci -k | grep -A 3 VGA
You will get something like
00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P GT1 [UHD Graphics] (rev 0c)
Subsystem: ...
Kernel driver in use: i915
Kernel modules: i915
--
01:00.0 VGA compatible controller: NVIDIA Corporation ...
Kernel driver in use: nouveau (or nothing)
Kernel modules: nouveau
See nouveau? Here dom0 touches our video card (the nouveau module is loaded), but doesn't actually use it. dom0 renders everything on the integrated graphics, which is the device whose kernel driver is actively in use. That means it's safe to hide our NVIDIA card. But do NOT try to hide the Intel device, as it is currently in use.
You can see that the PCI address of our video card is 01:00.0. Btw, it can also appear in the 0000:01:00.0 format; in that case, only take the 01:00.0 part.
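If you want to grab the addresses programmatically, here's a small dom0 sketch; the grep pattern is an assumption, adjust it to whatever lspci calls your cards:
# In dom0: print only the PCI addresses (the first lspci column) of display devices.
lspci | grep -Ei 'vga|3d controller' | awk '{print $1}'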
From here on, the guide assumes you use Nvidia; if you use AMD Radeon, you can repeat the same steps with different drivers.
Now open the grub config:
sudo vim /etc/default/grub
It can look something like this:
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=false
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-41b60e4e-9f82-4954-95f2-90794eb53c02 rd.lvm.lv=qubes_dom0/root rd.lvm.lv=qubes_dom0/swap plymouth.ignore-serial-consoles 6.6.48-1.qubes.fc37.x86_64 x86_64 rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
GRUB_THEME="/boot/grub2/themes/qubes/theme.txt"
GRUB_CMDLINE_XEN_DEFAULT="console=none dom0_mem=min:1024M dom0_mem=max:4096M ucode=scan smt=off gnttab_max_frames=2048 gnttab_max_maptrack_frames=4096"
GRUB_DISABLE_OS_PROBER="true"
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX usbcore.authorized_default=0" <-- you need this one!
. /etc/default/grub.qubes-kernel-vm-support
Here you can see two GRUB_CMDLINE_LINUX lines; if you have that too, you only need to modify the last one.
You need to append the rd.qubes.hide_pci option, specifying the PCI address you got earlier. Again, it should be the address of the video card that is NOT currently used by Qubes OS. In our case, the modified line would look like this:
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX usbcore.authorized_default=0 rd.qubes.hide_pci=01:00.0"
The whole modified grub config will look like this:
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=false
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-41b60e4e-9f82-4954-95f2-90794eb53c02 rd.lvm.lv=qubes_dom0/root rd.lvm.lv=qubes_dom0/swap plymouth.ignore-serial-consoles 6.6.48-1.qubes.fc37.x86_64 x86_64 rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
GRUB_THEME="/boot/grub2/themes/qubes/theme.txt"
GRUB_CMDLINE_XEN_DEFAULT="console=none dom0_mem=min:1024M dom0_mem=max:4096M ucode=scan smt=off gnttab_max_frames=2048 gnttab_max_maptrack_frames=4096"
GRUB_DISABLE_OS_PROBER="true"
# This hides the NVIDIA GPU from dom0 👇
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX usbcore.authorized_default=0 rd.qubes.hide_pci=01:00.0"
. /etc/default/grub.qubes-kernel-vm-support
After doing that, run the following to apply the config (on UEFI systems, the output path may be /boot/efi/EFI/qubes/grub.cfg instead), and then reboot.
grub2-mkconfig -o /boot/grub2/grub.cfg
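Once rebooted, you can verify the card is really hidden from dom0; assuming it worked, the hidden device should now be bound to the pciback stub driver instead of nouveau:
# In dom0, after the reboot:
lspci -k | grep -A 3 VGA
# The NVIDIA entry should now show "Kernel driver in use: pciback".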
The Drivers
After your system has booted up successfully, the hardest part is over. Now you only need to install a few drivers. Let's create a TemplateVM for the AppVM that is going to get our GPU. I strongly recommend creating a new one just for experimenting, because installing drivers incorrectly here can brick your VM.
Clone a Debian template; the best choice here, I think, is debian-12-minimal, but it's up to you.
Open up the template and install the dependencies for the Nvidia drivers:
sudo apt install gcc make libglvnd-dev pkg-config -y
If you want audio on minimal templates, check out this post
Then use the Nvidia Manual Driver Search to find Linux 64-bit drivers for your GPU, download the .run file, and transfer it to the GPU template you just created.
Also download VirtualGL, it's here. As of 2025-07-19, the latest version is 3.1. Download virtualgl_3.1_amd64.deb and transfer it to the template too.
Then, and this is important: do NOT install the Nvidia drivers with default settings. The default install builds a kernel module, and we don't want to touch the kernel because it's provided by Qubes - installing with default settings will break our template. Instead, we will only install the userland binaries like libcuda.so, without the kernel driver. Also, don't expect nvidia-smi to work, because we are not installing the kernel driver it talks to.
Don't forget to make the installer executable:
chmod +x ./NVIDIA-Linux-x86_64-xxx.xx.run
(Where ./NVIDIA-Linux-x86_64-xxx.xx.run is your driver installer name)
Use the following command to install drivers without touching the kernel:
sudo ./NVIDIA-Linux-x86_64-xxx.xx.run --no-kernel-module --no-x-check --no-nouveau-check
Just click through warnings if there are any.
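As a quick sanity check that the userland libraries landed, you can look for libcuda; the exact path is an assumption, but on Debian the installer usually puts 64-bit libs here:
# Inside the template - should list libcuda.so and friends:
ls /usr/lib/x86_64-linux-gnu/libcuda.so*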
Then install VirtualGL:
sudo apt install ./virtualgl_3.1_amd64.deb
Almost done!
Now run
sudo vglserver_config
Choose 3 (this configures the EGL back end), then answer no.
Now append export VGL_DISPLAY=egl to /etc/profile with:
echo "export VGL_DISPLAY=egl" | sudo tee -a /etc/profile
Template is done!
The AppVM
Shut down your template first.
Then create an AppVM based on that template, then:
- Disable memory balancing
- Use HVM mode
- Add your videocard as a PCI device
You may or may not need to allow 'no strict reset' for the PCI device, depending on your GPU; see the dom0 sketch below.
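If you prefer doing all three steps from the dom0 terminal instead of the GUI, here's a minimal sketch; the VM name gpu-vm is hypothetical, and note that qvm-pci writes the PCI address with an underscore instead of a colon:
# In dom0 - "gpu-vm" is a placeholder, use your AppVM's name:
qvm-prefs gpu-vm virt_mode hvm          # HVM mode
qvm-prefs gpu-vm maxmem 0               # disables memory balancing
# Attach the card; drop the option if your GPU resets cleanly without it:
qvm-pci attach --persistent gpu-vm dom0:01_00.0 -o no-strict-reset=true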
Usage
Run your AppVM.
Use the vglrun wrapper whenever you need to use the GPU, or run vglrun bash to let every command in that shell use it. If an app crashes, make sure it's actually running through vglrun.
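To verify that rendering really happens on the NVIDIA card, you can check the reported renderer; glxinfo comes from the mesa-utils package, which you may need to install first:
# Inside the AppVM:
vglrun glxinfo | grep "OpenGL renderer"
# The output should name your NVIDIA GPU, not llvmpipe or Intel.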
That’s it!
I wish to express my gratitude to bamboodabpen and the QubesOS community for finding a creative way to use VirtualGL, and to Neowutran for going through hell to set up the foundation of using a GPU on Qubes in general.