
Show HN: vGPU and SR-IOV on consumer GPUs

203 points | 2 years ago | arccompute.com
csdvrx 2 years ago

That's very impressive!

You may want to do the same for NVMe: creating several namespaces is not supported on most consumer drives, and laptops rarely have room for more than 1 NVMe drive (same problem as with the GPUs: a passthrough requires having 2 of them).

Being able to split the NVMe drive not by partition but by namespace would let each OS see a "full drive".

ArcVRArthur 2 years ago

That's actually a great idea!

Is there any documentation you would suggest I read to get started understanding NVMe namespaces?

We're also very open to feature additions/pull requests at our repo: https://libvf.io/ I'm very new to this whole thing of building an open source community, but I hope that if people find value in some of what we built they might consider helping us with our code.

If you have any suggestions on how we could improve and build a good open source community I'd love to hear them!

csdvrx 2 years ago

> That's actually a great idea!

Thanks! Besides being technically interesting and similar to what you do, I think it would also be quite useful for your target audience: if they don't want to do a GPU passthrough, maybe it's because they only have 1 GPU. Maybe they also have only 1 physical drive!

> Is there any documentation you would suggest I read to get started understanding NVMe namespaces?

Just check the official NVMe specs. You can also grab a drive that supports namespaces to see how it works in practice.

It's not very complicated: if you have only 1 NVMe drive that supports namespaces, and you create 3 namespaces on it, you will have /dev/nvme0n1, /dev/nvme0n2 and /dev/nvme0n3, which can each be partitioned. If you pass /dev/nvme0n2 to VM1 and /dev/nvme0n3 to another VM, you can be sure that neither will be able to write to the other VM's volume, or to the partitions used by the host. It's that simple!
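
For reference, creating namespaces with nvme-cli looks roughly like this (the sizes and the controller ID below are placeholders, and the drive has to report namespace management support first):

    # check that the controller supports namespace management (OACS field)
    nvme id-ctrl /dev/nvme0 | grep -i oacs

    # carve out two extra namespaces (sizes are in logical blocks, placeholder values here)
    nvme create-ns /dev/nvme0 --nsze=244190646 --ncap=244190646 --flbas=0
    nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=0
    nvme create-ns /dev/nvme0 --nsze=244190646 --ncap=244190646 --flbas=0
    nvme attach-ns /dev/nvme0 --namespace-id=3 --controllers=0

    # rescan and the kernel exposes /dev/nvme0n2, /dev/nvme0n3, ...
    nvme ns-rescan /dev/nvme0
    nvme list-ns /dev/nvme0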

So don't waste too much time on the official specs: the goal for your driver would simply be to recognize partitions of a special type, intercept them, and present them to the kernel as if they were namespaces (so they can be passed to virtual machines as if they were namespaces...) for people who have NVMe drives that don't support namespaces. The same thing could be done for regular SATA spinning drives and SSDs, but their lower IOPS compared to NVMe would make it less useful in practice.

Under the hood, you will also want to implement logical block checks to enforce barriers (just like the IOMMU!) so that VM1 can't write blocks on VM2's virtual drive, and vice versa.

Now that I think about it, it should be quite simple, as partitions have a start point and an end point: just check that each access to the virtual disk, once you add the offset that maps it to the physical disk, stays within the partition boundaries. For example, if /dev/nvme0n2 corresponds to /dev/nvme0n1p6, the offset would simply be the start of that partition.
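
The start and size are already exposed by the kernel, so the mapping is trivial to read off; with the example layout above:

    # partition start and size, in 512-byte sectors
    cat /sys/block/nvme0n1/nvme0n1p6/start
    cat /sys/block/nvme0n1/nvme0n1p6/size

    # or the whole table at once
    sfdisk -l /dev/nvme0n1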

Now, how to do that nicely, in a way that wouldn't confuse the host or cause problems when dual booting?

A simple way to do that could be to use a well-known GUID. A good candidate would be 024DEE41-33E7-11D3-9D69-0008C781F39F which was reserved for "MBR partition scheme" but remains mostly unused: a partition under this GUID is expected to contain a full disk image (with partitions etc) but will be ignored: https://en.wikipedia.org/wiki/GUID_Partition_Table#PROTECTIV...

On the user side, when not using your driver, each 024DEE41-33E7-11D3-9D69-0008C781F39F type partition could be attached with losetup -P to read the partition table inside it: https://stackoverflow.com/questions/37227233/having-losetup-...
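
Concretely, something like this, using the same example partition (sgdisk here is just one way to set the type GUID; any partitioning tool that accepts a full GUID works):

    # tag partition 6 with the "MBR partition scheme" type GUID
    sgdisk --typecode=6:024DEE41-33E7-11D3-9D69-0008C781F39F /dev/nvme0n1

    # attach it as a loop device and scan the partition table it contains
    losetup -f --show -P /dev/nvme0n1p6
    # -> /dev/loop0, with /dev/loop0p1, /dev/loop0p2, ... if it has its own table

    mount /dev/loop0p1 /mnt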

After thinking a little more about how it would work, it should be a quite interesting project! I hope you will do that!

> We're also very open to feature additions/pull requests

Unfortunately, I don't have a lot of time for such projects at the moment but that should be enough to get you started.

> If you have any suggestions on how we could improve and build a good open source community I'd love to hear them!

Release early, release often, even if it's a bare-minimum MVP. Announce it here.

And shoot me an email at my username at outlook.com so I can do the same in other places :)

codetrotter 2 years ago

Quick edit: After writing and posting the text below I went to your profile and saw that you mention ZFS in your bio actually. So probably you already know about this then. Leaving it up anyways in case someone finds it interesting.

---

Relatedly, in FreeBSD you can attach ZFS file systems to jails.

FreeBSD jails are a virtualization mechanism that uses the FreeBSD kernel of the host machine to run processes in isolation. Multiple processes can be isolated together in the same jail, or you can run them in separate jails to isolate them from each other as well. If you are familiar with Linux namespaces and Docker, I'd say that in principle those are similar to FreeBSD jails.
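
The plumbing for handing a dataset to a jail is small; here's a minimal sketch of one way to wire it up, assuming a jail named www (the jail also needs allow.mount.zfs and a suitable enforce_statfs setting in its configuration):

    # on the host: mark the dataset as jailable and delegate it to the jail
    doas zfs set jailed=on zroot/jail-data/www
    doas zfs jail www zroot/jail-data/www

    # inside the jail the dataset now shows up and can be managed there
    doas jexec www zfs list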

And the way that ZFS works is that you have something called ZFS pools, which sit on top of one or more physical storage media. For example you might have a pool sitting on top of a single NVMe drive, or on top of multiple NVMe drives, or on top of one or more spinning disks. And with pools that sit on top of multiple storage media you can mirror or stripe the pool across the underlying devices, depending on your needs (greater capacity vs redundancy).

Inside of a pool you have one or more ZFS file systems. These file systems can be snapshotted, and you can roll back to previous snapshots. You can also replicate ZFS file systems and snapshots between pools as well as between different hosts.
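
For example (the snapshot name and the receiving host/pool here are made up):

    # snapshot a file system and roll back to it later
    doas zfs snapshot zroot/usr/home@before-upgrade
    doas zfs rollback zroot/usr/home@before-upgrade

    # replicate a snapshot to another pool or host
    doas zfs send zroot/usr/home@before-upgrade | ssh otherhost zfs receive tank/home-backup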

On my server I have a ZFS pool named "zroot" sitting on top of a single device. Now due to my server itself being a VPS, the device is actually a virtual device that is provided by the hypervisor that the VPS VM is running on. But in the future when I can afford to, and when requirements grow, I can migrate to a physical host where the FreeBSD installation sits directly on physical hardware and the pool has physical devices.

Anyways, on that server, inside of the ZFS pool I have some ZFS file systems for the FreeBSD host itself, and I have additional ZFS file systems on the same pool which are attached to FreeBSD jails.

In this way, storage is managed from the host while the jails each get isolated access to their own portions of it. Aside from the benefits of being able to snapshot, rollback, send and receive, another great thing about ZFS is that the individual ZFS file systems within a pool can each use however much of the available space in the pool they need at any time, without dedicating any specific amount of storage to any individual file system. And furthermore, you are still able to define quotas for the individual ZFS file systems, to limit the maximum amount of space they are allowed to consume in the pool.

And I think that all of this comes quite close to what you are describing, although it uses ZFS mechanisms to achieve it rather than being based on NVMe namespaces.

So to show what it looks like to use ZFS with FreeBSD jails, here you can see what the FreeBSD host sees:

    $ zpool status

      pool: zroot
     state: ONLINE
    config:

     NAME        STATE     READ WRITE CKSUM
     zroot       ONLINE       0     0     0
       vtbd0p2   ONLINE       0     0     0

    errors: No known data errors

    $ zpool list

    NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    zroot  63.5G  3.56G  59.9G        -         -     1%     5%  1.00x    ONLINE  -

    $ zfs list

    NAME                      USED  AVAIL     REFER  MOUNTPOINT
    zroot                    3.56G  58.0G       96K  /zroot
    zroot/ROOT               1.98G  58.0G       96K  none
    zroot/ROOT/default       1.98G  58.0G     1.78G  /
    zroot/jail-data           528K  58.0G       96K  none
    zroot/jail-data/ifee       96K  58.0G       96K  /data
    zroot/jail-data/www       336K  58.0G      336K  /data
    zroot/jail-opt           19.7M  58.0G       96K  none
    zroot/jail-opt/ifee        96K  58.0G       96K  /opt
    zroot/jail-opt/www       19.5M  58.0G     19.5M  /opt
    zroot/tmp                 208K  58.0G      120K  /tmp
    zroot/usr                1.54G  58.0G       96K  /usr
    zroot/usr/home            529M  58.0G      529M  /usr/home
    zroot/usr/jail           1.02G  58.0G       96K  /usr/jail
    zroot/usr/jail/fullbase   640M  58.0G      639M  /usr/jail/fullbase
    zroot/usr/jail/ifee      3.63M  58.0G      641M  /usr/jail/ifee
    zroot/usr/jail/svcfw     48.4M  58.0G      686M  /usr/jail/svcfw
    zroot/usr/jail/www        354M  58.0G      991M  /usr/jail/www
    zroot/usr/ports            96K  58.0G       96K  /usr/ports
    zroot/usr/src              96K  58.0G       96K  /usr/src
    zroot/var                1.59M  58.0G       96K  /var
    zroot/var/audit            96K  58.0G       96K  /var/audit
    zroot/var/crash            96K  58.0G       96K  /var/crash
    zroot/var/log            1008K  58.0G      896K  /var/log
    zroot/var/mail            168K  58.0G      104K  /var/mail
    zroot/var/tmp             160K  58.0G       96K  /var/tmp
And here is what one of the jails sees:

    $ doas jexec www zfs list

    NAME                  USED  AVAIL     REFER  MOUNTPOINT
    zroot                3.56G  58.0G       96K  /zroot
    zroot/jail-data       528K  58.0G       96K  none
    zroot/jail-data/www   336K  58.0G      336K  /data
    zroot/jail-opt       19.7M  58.0G       96K  none
    zroot/jail-opt/www   19.5M  58.0G     19.5M  /opt

    $ doas jexec www df -h

    Filesystem             Size    Used   Avail Capacity  Mounted on
    zroot/usr/jail/www      59G    991M     58G     2%    /
    devfs                  1.0K    1.0K      0B   100%    /dev
    zroot/jail-data/www     58G    336K     58G     0%    /data
    zroot/jail-opt/www      58G     19M     58G     0%    /opt
Now let's apply some quotas.

    $ doas zfs set quota=4G zroot/usr/jail/www

    $ doas zfs set quota=5G zroot/jail-opt/www

    $ doas zfs set quota=10G zroot/jail-data/www
And we see that these quotas are then reflected inside of the jail.

    $ doas jexec www zfs list

    NAME                  USED  AVAIL     REFER  MOUNTPOINT
    zroot                3.56G  58.0G       96K  /zroot
    zroot/jail-data       528K  58.0G       96K  none
    zroot/jail-data/www   336K  10.0G      336K  /data
    zroot/jail-opt       19.7M  58.0G       96K  none
    zroot/jail-opt/www   19.5M  4.98G     19.5M  /opt

    $ doas jexec www df -h

    Filesystem             Size    Used   Avail Capacity  Mounted on
    zroot/usr/jail/www     4.6G    991M    3.7G    21%    /
    devfs                  1.0K    1.0K      0B   100%    /dev
    zroot/jail-data/www     10G    336K     10G     0%    /data
    zroot/jail-opt/www     5.0G     19M    5.0G     0%    /opt
Pretty neat.
gh123man 2 years ago

This is insanely impressive. Having tried to set up GPU passthrough in Proxmox a few years ago, I can say it was an absolute disaster. I would love to see this kind of approach more widely supported by other hypervisors!

It's a real shame consumer GPUs are arbitrarily locked down when their enterprise counterparts (often with the exact same chip) have much better support for virtualization.

ArcVRArthur 2 years ago

Ya, the lockout is absolutely arbitrary. There is zero physical difference between the consumer and server chips for these features. I actually think there's a lot of benefit to consumers in having these features enabled! I talk about that a bit in our X.Org Developers Conference 2021 talk: https://www.youtube.com/watch?v=8pVrTyLqV_I

We're going to try to add support for more distributions in the coming days.

Right now we've got support in our install script for Ubuntu 20.04 hosts and arbitrary guest operating systems (Windows guests work best so far) but if people on GitHub are posting issues asking for support for other systems I'll try my best to get to those.

I'm going to try to add official support for Arch, PopOS, and Fedora as I know some people who I think would use it on those systems and a few others.

kipchak 2 years ago

Is the process of unlocking these features on Nvidia GPUs similar to what something like the vgpu_unlock tool is doing? [1] No affiliation, just came across it trying to find a replacement for the deprecated RemoteFX vGPU and am out of my depth.

[1] https://github.com/DualCoder/vgpu_unlock

my123 2 years ago

For replacing RemoteFX vGPU, what you might want is https://forum.level1techs.com/t/2-gamers-1-gpu-with-hyper-v-...

(which is the direct successor)

The advantage is that it ships inbox in Windows and doesn't need license hacks or anything. It works cross-vendor too.

However, it needs the host OS to be Windows (with Hyper-V being used).

ArcVRArthur 2 years ago

I think Hyper-V's GPU-P is great! I think Microsoft's hypervisor team is one of the most talented in the world, and honestly I think I could learn a lot from them.

One of the benefits of using our approach instead of Microsoft's is that our tools are free open source software and (in my biased opinion) I think we have an easier user setup. :)

Some day I would love to read Hyper-V's GPU-P code as I think they did a rather good job overall.

ArcVRArthur 2 years ago

vGPU_Unlock's merged driver is an optional package you can include, but if you don't want to use it there's no explicit dependency. We actually enable these features using a vendor-neutral API called VFIO-Mdev:

https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/drive...

Here's a few examples of YAML for use with different GPU vendors:

Intel: https://github.com/Arc-Compute/libvf.io/blob/master/example/...

Nvidia: https://github.com/Arc-Compute/libvf.io/blob/master/example/...

AMD: https://github.com/Arc-Compute/libvf.io/blob/master/example/...

The odd one out is AMD, which uses a different API because the vendor has largely ignored the standard open source interfaces in the kernel. We're still supporting that API, but unfortunately very few AMD cards work, because AMD hasn't released open source code to support their newer cards and has locked out these features at the firmware level on consumer cards. Fortunately Nvidia and Intel GPUs are very well suited to this functionality and we've got support for most recent consumer cards from both!
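
For anyone curious, the Mdev interface itself is just sysfs plus a UUID, so you can poke at it without any of our tooling; roughly like this (the PCI address and type name below are placeholders for whatever your GPU exposes):

    # list the mediated device types a GPU exposes
    ls /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types/

    # create an instance of one of them
    UUID=$(uuidgen)
    echo "$UUID" | sudo tee /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types/nvidia-259/create

    # hand it to QEMU as a VFIO device
    qemu-system-x86_64 -enable-kvm -m 8G \
        -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$UUID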

kfprt 2 years ago

The enterprise versions of these products have a lot of bugs and gotchas because so few people use the feature.

ArcVRArthur 2 years ago

I'm making this because I had the same problem. My hope is that we can move the needle a little bit.

kfprt 2 years ago

As far as I'm concerned, anyone that's working on virtualized GPUs is doing the Lord's work.

rektide 2 years ago

Really hoping AMD eventually does the right thing here. Not that it particularly matters, seeing as how decent AMD video cards have been unpurchasable for 18 months now.

Consumers should have the ability to use their hardware well too. Selling the same thing at 2X the price, differentiated only by virtualization capabilities, is not a moral path.

> We remain hopeful that AMD will recognize forthcoming changes in GPU virtualization with the creation of open standards such as Auxiliary Domains (AUX Domains), Mdev (VFIO-Mdev developed by Nvidia, RedHat, and Intel), and Alternative Routing-ID Interpretation (ARI) especially in light of Intel's market entrance with their ARC line of GPUs supporting Intel Graphics Virtualization Technology (GVT-g).

Really cool to hear there are a bunch of vGPU-related efforts underway! That's so great.

ArcVRArthur 2 years ago

I have to pinch myself from time to time to remind myself I'm not dreaming. I honestly feel like I'm working on the coolest project in the world with some of the smartest people I've ever met. If you know anyone who wants to help us work on open source vGPU stuff please reach out to me at arthur@arccompute.com.

We're hiring!!!

encryptluks2 2 years ago

I am disappointed in AMD. I bought AMD for Linux, but when it comes to vGPU they don't give a shit. Intel was the leader who made this semi-mainstream.

ArcVRArthur 2 years ago

It is very disappointing for sure. I think the best thing we can do right now as users and developers is to make our voices heard and ask AMD to support VFIO-Mdev on consumer GPUs. If enough people ask, I think they will dedicate the energy to supporting these new open APIs, as the other vendors now have. We have to be clear that we want this on consumer gear, not some artificially locked-down server card to which they have relegated these capabilities for zero technical reason.

uberduper 2 years ago

I'm familiar with linux virtualization, gpu passthrough, etc. I've never heard of arcd and they've made no attempt in this doc or on their git to explain what it is or why it exists as, I assume, a replacement (or wrapper?) for qemu.

My past experience with looking-glass is that it falls on its face at anything > 1440p@60Hz. I'm interested in vGPU for my linux VMs (spice is slow and sdl/gtk display is flakey) but for gaming, I don't want looking-glass and prefer to just do the passthrough thing with a KVM switch.

ArcVRArthur 2 years ago

We are still trying to make the documentation better. We're actually hiring right now so if you can recommend someone who might be able to help with that I'd love to talk to them. :)

Also, looking-glass is an option under the introspection: key in the yaml. You can use whatever type of virtual display you like best. We chose looking glass as the default because in our testing it was the most performant.

ArcVRArthur 2 years ago

arcd is actually the part of the program that you call in your shell to use our library's functions. Arc Compute is the name of our company so arcd seemed like a reasonable name. We also considered calling it vfd but so far we've settled on arcd.

xt00 2 years ago

Yea, what/where is arcd? Is that a closed source portion of this stack that this depends upon? It's not super clear what is really open source about this stuff -- this looks like a pile of scripts to run arcd?

Edit: OK thanks, so it's qemu-system-x86_64.

Have you guys tried to run something like Android in the VM on an ARM host? Or what are the limitations there?

ArcVRArthur 2 years ago

All the source code for arcd is here: https://github.com/Arc-Compute/libvf.io/tree/master/src

:)

posix_me_less 2 years ago

> sdl/gtk display is flakey

How does it compare to spice/qxl - does it have some advantages?

uberduper 2 years ago

In my case with 4k monitors and virgl, it's substantially faster and a desktop env feels native. Spice feels quite laggy to me.

The downside is that the sdl or gtk interface is coupled to the running VM, and if it crashes, the VM goes too. And in my experience, they crash a lot. I've been unable to get a two-monitor setup working with either the sdl or gtk display, but it works fine via spice.

posix_me_less 2 years ago

I'd like to try this sdl+virgl combination that is better than spice; how do I force it on qemu? Like this?

https://www.collabora.com/news-and-blog/blog/2019/08/28/virg...
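
I.e. something along these lines? (I'm guessing at the exact flags; they seem to vary between QEMU versions, and the disk image is just a placeholder.)

    qemu-system-x86_64 -enable-kvm -m 4G -cpu host \
        -drive file=disk.qcow2,if=virtio \
        -vga virtio \
        -display sdl,gl=on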

liuliu 2 years ago

Great! This still requires vGPU support though, and the merged driver approach last time I tried wouldn't support CUDA on the host (I was probably the first one to try the merged driver thing with vgpu_unlock?).

Looking forward to someone writing a Vulkan driver on Windows that just shuttles down to the Linux host. virgl used to be a promising project ...

ArcVRArthur 2 years ago

CUDA does work on the host in our testing. :) We also run Vulkan/DirectX/OpenGL at full performance in the guest! It's WAY faster than virgl.

awesnvadsome 2 years ago

This seems awesome! I have a passthrough setup with a very old card and a much newer one for games in a Windows VM; it'll be nice to look into getting this set up so I can reduce the power draw on my system, which was causing some problems...

Is this something that could work with/be integrated with libvirt for easy configuration? It'd be neat to set it up with my current install, although it's not at all a real problem.

ArcVRArthur 2 years ago

We actually are a replacement for Libvirt that tries to simplify a few things. Take a look at LibVF.IO's user API. I tried to make it a bit more human friendly, like Docker: https://github.com/Arc-Compute/libvf.io/blob/master/example/...

awesnvadsome 2 years ago

Neat! Certainly more readable than the massive XML file! Thanks for the hard work!

Edit: The code generating the qemu commands is also quite readable, so I think migrating my install will be quite doable. If need be, I can add args manually.

ArcVRArthur 2 years ago

Ya exactly! Compatibility with QEMU commands right inside the yaml was a great addition by my co-author. I wrote an earlier version that lacked that functionality and I have to say the new version with that feature has proven really handy.

encryptluks2 2 years ago

I wish all programs would consider YAML config files. Imagine being able to configure Chromium, Firefox, Thunderbird, Windows... it would make automation such a breeze. Chromium and Firefox have policy files, but they are all JSON. I'm not even going to talk about the disaster of Windows automation.

DiabloD3 2 years ago

I tried doing this years ago, and never quite got it to work.

Some of the software involved in that article simply didn't exist yet, and GPUs weren't shipping with SR-IOV support yet (instead, I did Intel iGPU for Linux fbcon, real AMD GPU fed directly to the Windows VM with PCI-E Passthrough). In the end, I bailed on that dream and moved the Linux install to its own smaller machine, and ran Windows bare on the big machine.

The problem was that if the GPU locked up hard (and GPUs back then would not respond to PCI device reset), and it wasn't something that merely re-initializing it on VM restart would fix... I had to restart the entire machine, thus defeating the purpose of having Windows in the VM in the first place!

All my long-lived processes now run on the stand-alone Linux machine, and anything that is free to explode runs on my Windows machine. Windows gets wonky? Restart, ssh back into my screen sessions, reopen the browser, restart a bunch of cloud slaved apps, tada.

ArcVRArthur 2 years ago

I got into writing this actually because graphics sharing was such a bad experience that I would usually run a Windows host and then Linux guests in something like VirtualBox or VMWare Workstation. I always wanted a host with fault tolerant guests but GPU sharing features seemed to be locked to "enterprise GPUs" and it wouldn't work on my consumer GPU. This solves some of those problems I was having and lets me run Linux on my host with a full performance Windows guest (at least for me it does).

starfallg 2 years ago

As the Nvidia Linux driver doesn't support DSC (Display Stream Compression), a lot of people are having problems driving 4K/240Hz and 8K/60Hz displays with DP 1.4 and HDMI 2.1.

Frustratingly this works fine using Windows.

Any ideas on how to run Linux guests on a Windows host while sharing the GPU like libvf.io does? This would allow us to work around the DSC issue until Nvidia implements support.

danbmil99 2 years ago

Very cool approach. It's going to be a fight, I suspect; vendors lock things up specifically so they can have price differentials in different markets. It may end up being like the fight between workstations and PCs back in the '90s.

ArcVRArthur 2 years ago

I actually think that Intel GVT-g has opened the vGPU floodgates. With Intel trying to differentiate themselves as a new market participant, they've just decided to leave this functionality on for all the parts. I bought a $700 prebuilt PC from Best Buy with an Intel DG1 GPU and it's got GVT-g. If you look at the kernel API for this stuff you'll notice RedHat, Nvidia, and Intel wrote it together. I think Nvidia is just adapting to what will be the new market norm as the Intel GPUs proliferate.

Things are becoming more open, not less!

That's partly why I wanted to make this free open source software, to help people take advantage of these cool new open source kernel capabilities on consumer gear!

scarygliders 2 years ago

Very, very cool!

I've set up a dual-GPU system in the past using two nvidia GPUs and whilst I found the trek towards PCI passthrough to other virtual machines rewarding when it finally worked, I also found the arrangement to be inconvenient.

What you've achieved here seems the ideal. Well done :)

I will either patiently wait for an Arch Linux version of the install, or I'll eventually end up impatient and see if I can rustle up something - an install script is an install script, it should be just a matter (famous last words) of altering the install script/procedures to suit.

ArtWomb 2 years ago

Whoa! Multi-GPU is so hard. This could be the start of something. Looks like it makes use of Nvidia's capture API?

ArcVRArthur 2 years ago

We've been working on this for a while and ya, I hope it is the start of something! If you want to read the logical origin point of where I started thinking about this project, check this out:

https://arccompute.com/blog/why-computers-suck-and-how-openb...

https://news.ycombinator.com/item?id=21776370

We're not using Nvidia's APIs in our code! libvf.io is entirely based on cross-vendor, standard APIs. :D

The one that is a bit weird is AMD, which is fairly far behind Nvidia and Intel, but I think if they want to modernize support they can definitely do it. My hope is that consumers will start asking for this capability more (especially with Intel's ARC GPUs enabling GVT-g at every price point) and AMD will update their stuff to reflect that demand.

compsciphd 2 years ago

Isn't this natively supported by Nvidia?

I.e. you have a vGPU card (or a consumer card you can map to the equivalent vGPU card), and Nvidia's drivers and tools let you load drivers that essentially partition it into X GB slices (all partitions being the same X), and then you just GPU-passthrough the newly created device that maps to a single partition into the VM?

The whole trick being the ability to trick Nvidia's drivers into thinking that the consumer GPU is really the server model, but otherwise it becomes just normal Nvidia usage?

remless 2 years ago

Yes, the functionality is available on Nvidia Quadro cards, and Nvidia very recently added the same options to some of their more recent consumer cards, specifically for Hyper-V.

I will say though, from a hobbyist point of view this project is just great. If you're trying things and spinning up VMs just to see what you can do, then one of the first bottlenecks is going to be that you can't use dedicated graphics (without passing through the whole card, and then you need a card per VM). This project makes it a lot easier for anyone fiddling rather than doing stuff commercially, especially with current GPU prices.

zamadatix 2 years ago

It's natively supported by all of the GPUs listed, the trick is indeed bringing this to the consumer space for the first time as it has always been model locked and sometimes (e.g. Nvidia) license locked. The exception to these restrictions is with the Intel method.

ArcVRArthur 2 years ago

Nvidia co-wrote the VFIO-Mdev API in partnership with Intel where it is used as a kernel interface for GVT-g mediated GPU devices (as well as Nvidia mediated devices). My (totally baseless) guess would be that Nvidia saw this market change coming with Intel ARC GPUs leaving vGPU functionality enabled across the consumer lineup and rather than getting left behind by a big change they helped bring it about. We're just exposing the capabilities of that API to the user across vendors and making it easy to interface with.

ArcVRArthur 2 years ago

Ya, libvf.io works on nearly all recent Nvidia consumer cards with the exception of Ampere. The same goes for Intel GPUs like the DG1/DG2. There is no need to use a server card.

Edit: There's not really an explicit need to trick the card. Nvidia co-wrote the VFIO-Mdev API with Intel and RedHat which is what we use. If you want to use the optional nv merged driver package that will do some of the things you mention but it's not required.

AaronFriel 2 years ago

This article almost got my hopes up too much! I'm curious whether Intel's "Arc" GPUs will support the same or whether they'll go the same path as Nvidia in locking down virtual function support.

ArcVRArthur 2 years ago

They have made the APIs open on all of their discrete GPUs going forward. I actually have an Intel discrete GPU board (based on Intel's DG1 - Discrete Graphics 1) that supports GVT-g and all the same standard APIs we deal with. I think Intel is going to be a huge disruptor in the GPU market because of this!

The good news is that we also support consumer Nvidia GPUs and some consumer AMD GPUs as well. There are more limitations in AMD land than the other vendors but I'm hopeful that they will make some improvements across time.

COGlory 2 years ago

Very impressive. You should also post this to the Level1Techs forum

ArcVRArthur 2 years ago

Wendell tweeted about LibVF.IO and it made my whole month haha: https://twitter.com/tekwendell/status/1449054328766013440

Also I posted in the Level1Techs forum! https://forum.level1techs.com/t/libvf-io-a-commodity-gpu-mul...

belval 2 years ago

Just to clarify, because I've failed at doing pretty much the setup that you are describing with my 1080Ti: does this still require the vgpu_unlock changes for Nvidia cards, or is this something that bypasses the need for them entirely?

ArcVRArthur 2 years ago

My main development rig actually uses a 1080Ti!

The normal vGPU_Unlock steps aren't a part of the user setup process; rather, the optional merged driver package is built with a prepackaged C version of that code. If you decide to use the vGPU_Unlock merged driver option during setup you won't have to do the (somewhat intense) process of vGPU_Unlock setup, you'll just skip straight to the end result. :)

kobalsky 2 years ago

Why is it optional? Why would someone want the option to use it if it's not required?

Sorry, I don't understand well. A lot of jargon is being thrown around, and I haven't touched this topic much since I got my passthrough setup working.

shmerl 2 years ago

What's the story with SR-IOV for consumer AMD GPUs on Linux? Last time I looked into it, it was impossible to use or more like AMD didn't want to support it.

Kaze404 2 years ago

Once this is running on NixOS I'll be all over it. Great news!

ArcVRArthur 2 years ago

There's a guy in our Matrix and Discord channels that's working on a Nix package. :) I'll introduce you to him if you'd like.

amelius 2 years ago

Does this also work for CUDA (e.g. deep learning jobs)?

ArcVRArthur 2 years ago

CUDA works on the host. Stay tuned for more info about guest CUDA from libvf.io ;)

omreaderhn 2 years ago

This looks really nice. Does this work with Ampere GPUs?

ArcVRArthur 2 years ago

We're supporting Ampere in our own VPS cloud (arccompute.com), but there are some hurdles to getting it working on consumer Ampere gear with the nv merged driver due to changes in the use of SR-IOV APIs - that is to say, Nvidia has now enabled SR-IOV whereas before they were doing something entirely different to achieve similar functionality. We've got SR-IOV APIs working with our use of mdevType: "sriovdev" in our yaml configuration layer, so from our perspective it's already possible to use, but unfortunately until the nv merged drivers work with Ampere we won't be able to do a lot on our end. Right now I think the vGPU_Unlock guys are working on the 3090 and we'll be sure to support it once it's ready. I'll also try to remember to post about it on GitHub once I've tested it out for myself and confirmed our software works with it.

By the way, we're hiring GPU driver engineers and kernel hackers!

Please reach out to me if you know anyone who might be interested. My email is: arthur@arccompute.com

rirze 2 years ago

Super thrilled to hear you guys are working on the 3090! I'll be keeping an eye out for any progress.

kfprt 2 years ago

Note that Intel GVT-g is only available on <=Gen9 GPUs. Gen11 and 12 are not supported.

ArcVRArthur 2 years ago

It also works on Intel's ARC and DG1/DG2 discrete GPUs.

Our software also works on most consumer Nvidia cards and also some consumer AMD cards.

kfprt 2 years ago

Do you have any specific knowledge of Intel's support for graphics virtualization on ARC GPUs?

What are the security implications of LibVF.IO, and would it be compatible with the QubesOS security model?

ArcVRArthur 2 years ago

Ya, they are enabling GVT-g on all their GPUs going forward, including the ARC-branded devices (not just the embedded GPUs with Xe branding). They have no plans to lock out those features on consumer devices. In fact I bought a $700 computer from Best Buy that had an Intel discrete GPU in it for development. :)

Ya, I have been talking with the Qubes guys. I'm trying my best to figure out how I can help them ship something with LibVF.IO, but they are currently based on Xen, which uses libxl rather than VFIO, which may present some challenges.

orangepurple 2 years ago

GVT-g has hard resolution limitations and was never stable with QEMU on Linux.

ArcVRArthur 2 years ago

GVT-g uses KVMgt. Intel has not yet upstreamed their patches.

tfcata 2 years ago

Does this work on the Radeon 580 series? I have a 2070 but I'm reluctant to install nVidia's driver...

ncmncm 2 years ago

They had me until YAML.

anentropic 2 years ago

yes, but interestingly it seems to be https://nimyaml.org/ :)