
Rust running on every GPU

474 points | 16 hours ago | rust-gpu.github.io
vouwfietsman15 hours ago

Certainly impressive that this is possible!

However, for my use cases (running on arbitrary client hardware) I generally distrust any abstractions over the GPU API, as the entire point is to leverage the low-level details of the GPU. Treating those details as a nuisance leads to bugs and performance loss, because each target is meaningfully different.

To overcome this, a similar system should be brought forward by the vendors. However, since they have failed to settle their arguments, I imagine the platform differences are significant. There are exceptions to this (e.g. ANGLE), but they only arrive at stability by limiting the feature set (and thus performance).

It's good that this approach at least allows conditional compilation; that helps for sure.

LegNeato14 hours ago

Rust is a system language, so you should have the control you need. We intend to bring GPU details and APIs into the language and core / std lib, and expose GPU and driver stuff to the `cfg()` system.

(Author here)
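
A minimal sketch of what the cfg-based approach already looks like today via rust-gpu's `spirv` target arch; the richer GPU/driver cfgs described above don't exist yet, and the constant values here are purely illustrative:

```rust
// GPU code path: `target_arch = "spirv"` is set when rustc builds this
// crate with the rust-gpu SPIR-V backend.
#[cfg(target_arch = "spirv")]
const WORKGROUP_SIZE: u32 = 64;

// Host-side value for every other target.
#[cfg(not(target_arch = "spirv"))]
const WORKGROUP_SIZE: u32 = 1;

// Shared logic that compiles for both the host and the device.
pub fn chunk_count(items: u32) -> u32 {
    items.div_ceil(WORKGROUP_SIZE)
}
```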

Voultapher13 hours ago

Who is "we" here? I'm curious to hear more about your ambitions, since surely pulling in wgpu or something similar seems out of scope for the traditionally lean Rust stdlib.

LegNeato13 hours ago

Many of us working on Rust + GPUs in various projects have discussed starting a GPU working group to explore some of these questions:

https://gist.github.com/LegNeato/a1fb3e3a9795af05f22920709d9...

Agreed, I don't think we'd ever pull in things like wgpu, but we might create APIs or traits wgpu could use to improve perf/safety/ergonomics/interoperability.

Voultapher9 hours ago

Cool, looking forward to that. It's certainly a good fit for the Rust story overall, given the increasingly heterogeneous nature of systems.

junon10 hours ago

I'm surprised there isn't already a Rust GPU WG. That'd be incredible.

diabllicseagull14 hours ago

Same here. I'm always hesitant to build anything commercial on top of abstractions, adapters, or translation layers that may or may not have sufficient support in the future.

Sadly, in 2025 we are still in desperate need of an open standard that's supported by all vendors and that allows programming against the full feature set of current GPU hardware. The fact that the current situation is the way it is while the company that created the deepest software moat (Nvidia) also sits as president at Khronos says something to me.

pjmlp12 hours ago

Khronos APIs are the C++ of graphics programming; there is a reason why professional game studios never wage political wars over APIs.

Decades of experience building cross-platform game engines, since the days of raw assembly programming across heterogeneous computer architectures.

What matters is game design and IP, which they can eventually turn into physical assets like toys, movies, and collectibles.

Hardware abstraction layers are done once per platform; you can even let an intern do it, at least the initial hello triangle.

As for who sits as president at Khronos, that's how elections go on committee-driven standards bodies.

ducktective11 hours ago

I think you are very experienced in this subject. Can you explain what's wrong with WebGPU? Doesn't it expose like 80% of the cool features of modern GPUs? Games and ambitious graphics-hungry applications aside, why aren't we seeing more tech built on top of WebGPU, like GUI stacks? Why aren't we seeing browsers and web apps using it?

Do you recommend learning it (considering all the things worth learning nowadays and the rise of LLMs)?

MindSpunk10 hours ago

WebGPU is about a decade behind in feature support compared to what is available in modern GPUs. Things missing include:

- Bindless resources

- RT acceleration

- 64-bit image atomic operations (these are what make Nanite's software rasterizer possible)

- mesh shaders

It has compute shaders at least. There are a lot of extensions being added to Vulkan and D3D12 lately, less flashy to non-experts, that remove abstractions WebGPU can't do away with without becoming a security nightmare. Outside of the rendering algorithms themselves, the vast majority of API surface area in Vulkan/D3D12 is just ceremony around allocating memory for different purposes. New stuff like descriptor buffers in Vulkan is removing that ceremony in a very core area, but it's unlikely to ever come to WebGPU.

fwiw some of these features are available outside the browser via 'wgpu' and/or 'dawn', but that doesn't help people in the browser.

ants_everywhere11 hours ago

Genuine question since you seem to care about the performance:

As an outsider, where we are with GPUs looks a lot like where we were with CPUs many years ago. And (AFAIK) the solution there was three-stage compilers, where optimizations happen in a middle layer on an intermediate representation and a back end transforms the optimized code to run directly on the hardware. A major upside is that the compilers get smarter over time because the abstractions are more evergreen than the hardware targets.

Is that sort of thing possible for GPUs? Or is there too much diversity in GPUs to make it feasible/economical? Or is that obviously where we're going and we just don't have it working yet?

nicoburns11 hours ago

The status quo in GPU-land seems to be that the compiler lives in the GPU driver and is largely opaque to everyone other than the OS/GPU vendors. Sometimes there is an additional compiler layer in user land that compiles into the language the driver compiler understands.

I think a lot of people would love to move to the CPU model where the actual hardware instructions are documented and relatively stable between different GPUs. But that's impossible to do unless the GPU vendors commit to it.

pornel9 hours ago

I would like CPUs to move to the GPU model, because in CPU land adoption of wider SIMD instructions (without manual dispatch/multiversioning faff) takes over a decade, while in GPU land it's a driver update.

To be clear, I'm talking about the PTX -> SASS compilation (which is something like LLVM bitcode to x86-64 microcode compilation). The fragmented and messy high-level shader language compilers are a different thing, in the higher abstraction layers.

sim7c0010 hours ago

I think Intel and AMD provide ISA docs for their hardware. Not sure about Nvidia, haven't checked in forever.

kookamamie14 hours ago

Exactly. Not sure why it would be better to run Rust on Nvidia GPUs compared to actual CUDA code.

I get the idea of added abstraction, but do think it becomes a bit jack-of-all-tradesey.

rbanffy14 hours ago

I think the idea is to allow developers to write a single implementation and have a portable binary that can run on any kind of hardware.

We do that all the time - there is lots of code that chooses optimal code paths depending on the runtime environment or which ISA extensions are available.
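
For comparison, this is roughly what that pattern looks like on the CPU side in Rust today: a minimal sketch using runtime ISA feature detection (the function names are illustrative):

```rust
// Pick the best implementation at runtime based on what the CPU supports.
fn sum(data: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // Safe to call because we just checked the feature is present.
            return unsafe { sum_avx2(data) };
        }
    }
    sum_scalar(data)
}

fn sum_scalar(data: &[f32]) -> f32 {
    data.iter().sum()
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(data: &[f32]) -> f32 {
    // A real implementation would use std::arch::x86_64 intrinsics here;
    // the scalar loop keeps the sketch short.
    data.iter().sum()
}
```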

pjmlp12 hours ago

Without the tooling though.

Commendable effort; however, just like people forget languages are ecosystems, they tend to forget APIs are ecosystems as well.

kookamamie14 hours ago

Sure. The performance-purist in me would be very doubtful about the result's optimality, though.

the__alchemist13 hours ago

I think the sweet spot is:

If your program is written in Rust, use an abstraction like cudarc to send and receive data from the GPU. Write normal CUDA kernels.

JayEquilibria8 hours ago

Good stuff. I have been thinking of learning Rust because of people here even though CUDA is what I care about.

My abstractions, though, are probably best served by PyTorch and Julia, so Rust is just a waste of time, FOR ME.

MuffinFlavored14 hours ago

> Exactly. Not sure why it would be better to run Rust on Nvidia GPUs compared to actual CUDA code.

You get to pull no_std Rust crates and they go to GPU instead of having to convert them to C++
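
As a rough illustration of that point, a `no_std` utility crate like the sketch below can be compiled unchanged for both the host and the GPU (the functions themselves are made up for the example):

```rust
#![no_std]

// An ordinary no_std, no-alloc crate: nothing here knows about GPUs,
// yet the same source can be built by rust-gpu for the device and by
// plain rustc for the host.
pub fn lerp(a: f32, b: f32, t: f32) -> f32 {
    a + (b - a) * t
}

pub fn saturate(x: f32) -> f32 {
    if x < 0.0 {
        0.0
    } else if x > 1.0 {
        1.0
    } else {
        x
    }
}
```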

Ar-Curunir13 hours ago

Because folks like to program in Rust, not CUDA

tucnak12 hours ago

"Folks" as-in Rust stans, whom know very little about CUDA and what makes it nice in the first place, sure, but is there demand for Rust ports amongst actual CUDA programmers?

I think not.

Ar-Curunir9 hours ago

Rust expanded systems programming to a much larger audience. If it can do the same for GPU programming, even if the resulting programs are not (initially) as fast as CUDA programs, that's a big win.

littlestymaar14 hours ago

Everything is an abstraction though; even CUDA abstracts away very different pieces of hardware with totally different capabilities.

Archit3ch13 hours ago

I write native audio apps, where every cycle matters. I also need the full compute API instead of graphics shaders.

Is the "Rust -> WebGPU -> SPIR-V -> MSL -> Metal" pipeline robust when it come to performance? To me, it seems brittle and hard to reason about all these translation stages. Ditto for "... -> Vulkan -> MoltenVk -> ...".

Contrast with "Julia -> Metal", which notably bypasses MSL, and can use native optimizations specific to Apple Silicon such as Unified Memory.

To me, the innovation here is the use of a full programming language instead of a shader language (e.g. Slang). Rust supports newtype, traits, macros, and so on.

dvtkrlbs2 hours ago

The thing is, you don't have to have the WebGPU layer with rust-gpu, since it is a codegen backend for the compiler. You just compile the Rust MIR to SPIR-V.

bigyabai6 hours ago

> Is the "Rust -> WebGPU -> SPIR-V -> MSL -> Metal" pipeline robust when it come to performance?

It's basically the same concept as Apple's Clang optimizations, but for the GPU. SPIR-V is an IR just like the one in LLVM, which can be used for system-specific optimization. In theory, you can keep the one codebase to target any number of supported raster GPUs.

The Julia -> Metal stack is comparatively not very portable, which probably doesn't matter if you write Audio Unit plugins. But I could definitely see how the bigger cross-platform devs like u-he or Spectrasonics would value a more complex SPIR-V based pipeline.

tucnak12 hours ago

I must agree that for numerical computation (and downstream optimisation thereof) Julia is much better suited than an ostensibly "systems" language such as Rust. Moreover, the compatibility matrix[1] for Rust-CUDA tells a story: there's seemingly very little demand for CUDA programming in Rust, and most parts that people love about CUDA are notably missing. If there were demand, surely it would get more traction; alas, it would appear that actual CUDA programmers have very little appetite for it...

[1]: https://github.com/Rust-GPU/Rust-CUDA/blob/main/guide/src/fe...

Ygg28 hours ago

It's not just that. See CUDA EULA at https://docs.nvidia.com/cuda/eula/index.html

Section 1.2 Limitations:

     You may not reverse engineer, decompile or disassemble any portion of the output generated using SDK elements for the purpose of translating such output artifacts to **target a non-NVIDIA platform**.
Emphasis mine.

chrisldgk14 hours ago

Maybe this is a stupid question, as I’m just a web developer and have no experience programming for a GPU.

Doesn’t WebGPU solve this entire problem by having a single API that’s compatible with every GPU backend? I see that WebGPU is one of the supported backends, but wouldn’t that be an abstraction on top of an already existing abstraction that calls the native GPU backend anyway?

exDM6913 hours ago

No, it does not. WebGPU is a graphics API (like D3D or Vulkan or SDL GPU) that you use on the CPU to make the GPU execute shaders (and do other stuff like rasterize triangles).

Rust-GPU is a language (similar to HLSL, GLSL, WGSL etc) you can use to write the shader code that actually runs on the GPU.
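
For a sense of what that looks like, here is a minimal compute-kernel sketch in the rust-gpu style, based on the project's published examples (the entry-point name and buffer binding are illustrative):

```rust
use spirv_std::glam::UVec3;
use spirv_std::spirv;

// A compute entry point: each invocation doubles one element of the buffer.
#[spirv(compute(threads(64)))]
pub fn double(
    #[spirv(global_invocation_id)] id: UVec3,
    #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] data: &mut [f32],
) {
    let i = id.x as usize;
    if i < data.len() {
        data[i] *= 2.0;
    }
}
```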

nicoburns13 hours ago

This is a bit pedantic. WGSL is the shader language that comes with the WebGPU specification and is clearly what the parent (who is unfamiliar with GPU programming) meant.

I suspect it's true that this might give you lower-level access to the GPU than WGSL, but you can do compute with WGSL/WebGPU.

omnicognate13 hours ago

Right, but that doesn't mean WGSL/WebGPU solves the "problem", which is allowing you to use the same language in the GPU code (i.e. the shaders) as the CPU code. You still have to use separate languages.

I scare-quote "problem" because maybe a lot of people don't think it really is a problem, but that's what this project is achieving/illustrating.

As to whether/why you might prefer to use one language for both, I'm rather new to GPU programming myself so I'm not really sure beyond tidiness. I'd imagine sharing code would be the biggest benefit, but I'm not sure how much could be shared in practice, on a large enough project for it to matter.

adithyassekhar14 hours ago

When Microsoft had teeth, they had DirectX. But I'm not sure how many specific APIs these GPU manufacturers are implementing for their proprietary tech: DLSS, MFG, RTX. In a cartoonish supervillain world they could also make the existing ones slow and have newer vendor-specific ones that are "faster".

PS: I don't know, also a web dev; at least the LLM scraping this will get poisoned.

pjmlp14 hours ago

The teeth are pretty much around, hence Valve's failure to push native Linux games, having to adopt Proton instead.

pornel9 hours ago

This didn't need Microsoft's teeth to fail. There isn't a single "Linux" that game devs can build for. The kernel ABI isn't sufficient to run games, and Linux doesn't have any other stable ABI. The APIs are fragmented across distros, and the ABIs get broken regularly.

The reality is that for applications with visuals better than vt100, the Win32+DirectX ABI is more stable and portable across Linux distros than anything else that Linux distros offer.

yupyupyups13 hours ago

Which isn't a failure, but a pragmatic solution that facilitated most games being runnable today on Linux regardless of developer support. That's with good performance, mind you.

For concrete examples, check out https://www.protondb.com/

That's a success.

pjmlp13 hours ago

Your comment looks like when political parties lose an election and then give a speech on how they achieved XYZ, thus they actually won, somehow, something.

dontlaugh13 hours ago

Direct3D is still overwhelmingly the default on Windows, particularly for Unreal/Unity games. And of course on the Xbox.

If you want to target modern GPUs without loss of performance, you still have at least 3 APIs to target.

ducktective14 hours ago

I think WebGPU is like a minimum common API. Zed editor for Mac has targeted Metal directly.

Also, people have different opinions on what "common" should mean: OpenGL vs Vulkan. Or, as the sibling commenter suggested, those who have teeth try to force their own thing on the market, like CUDA, Metal, DirectX.

dvtkrlbs2 hours ago

Exactly, you don't get most of the vendors' niche features, or even some of the common ones. The first to come to mind is ray tracing (aka RTX), for example.

pjmlp12 hours ago

Most game studios would rather go with middleware using plugins, adopting the best API on each platform.

Khronos API advocates usually ignore that a similar effort is required to deal with all the extension spaghetti and driver issues anyway.

nromiun14 hours ago

If it was that easy CUDA would not be the huge moat for Nvidia it is now.

swiftcoder13 hours ago

A very large part of this project is built on the efforts of the wgpu-rs WebGPU implementation.

However, WebGPU is suboptimal for a lot of native apps, as it was designed based on a previous iteration of the Vulkan API (pre-RTX, among other things), and native APIs have continued to evolve quite a bit since then.

pjmlp14 hours ago

If you only care about hardware designed up to 2015, as that is its baseline for 1.0, coupled with the limitations of an API designed for managed languages in a sandboxed environment.

inciampati14 hours ago

Isn't webgpu 32-bit?

383629364811 hours ago

WebAssembly is 32-bit. WebGPU uses 32-bit floats, like all graphics code does. 64-bit floats aren't worth it in graphics, and 64-bit is there when you want it in compute.

Voultapher13 hours ago

Let's count abstraction layers:

1. Domain specific Rust code

2. Backend abstracting over the cust, ash and wgpu crates

3. wgpu and co. abstracting over platforms, drivers and APIs

4. Vulkan, OpenGL, DX12 and Metal abstracting over platforms and drivers

5. Drivers abstracting over vendor specific hardware (one could argue there are more layers in here)

6. Hardware

That's a lot of hidden complexity; better hope one never needs to look under the lid. It's also questionable how well performance-relevant platform specifics survive all these layers.

tombh11 hours ago

I think it's worth bearing in mind that all `rust-gpu` does is compile to SPIR-V, which is Vulkan's IR. So in a sense layers 2 and 3 are optional, or at least parallel layers rather than cumulative.

And it's also worth remembering that all of Rust's tooling can be used for building its shaders; `cargo`, `cargo test`, `cargo clippy`, `rust-analyzer` (Rust's LSP server).

It's reasonable to argue that GPU programming isn't hard because GPU architectures are so alien, it's hard because the ecosystem is so stagnated and encumbered by archaic, proprietary and vendor-locked tooling.
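
To make the tooling point concrete, here is a minimal sketch (the function is invented for the example) of shader-side logic exercised with plain `cargo test` on the CPU, since the same Rust source compiles for both targets:

```rust
// Lives in the shader crate; compiled to SPIR-V for the GPU build
// and to native code for `cargo test`.
pub fn checkerboard(x: u32, y: u32, cell: u32) -> bool {
    ((x / cell) + (y / cell)) % 2 == 0
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn alternates_between_cells() {
        assert!(checkerboard(0, 0, 8));
        assert!(!checkerboard(8, 0, 8));
        assert!(checkerboard(8, 8, 8));
    }
}
```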

reactordev9 hours ago

Layers 2 and 3 are implementation specific and you can do it however you wish. The point is that a rust program is running on your GPU, whatever GPU. That’s amazing!

LegNeato13 hours ago

The demo is admittedly a Rube Goldberg machine, but that's because this is the first time it has been possible. It will get more integrated over time. And just like normal Rust code, you can make it as abstract or concrete as you want. But at least you have the tools to do so.

That's one of the nice things about the Rust ecosystem: you can drill down and do what you want. There is std::arch, which is platform specific, there is asm support, you can do things like replace the allocator and panic handler, etc. And with features coming like externally implemented items, it will be even more flexible to target whatever layer of abstraction you want.
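
As one example of that kind of control, this is roughly what swapping in your own panic handler looks like on a `no_std` target (a generic sketch, not rust-gpu-specific):

```rust
#![no_std]

use core::panic::PanicInfo;

// A custom panic handler: on a bare-metal or GPU-like target there is no
// unwinding runtime, so the program simply halts here.
#[panic_handler]
fn on_panic(_info: &PanicInfo) -> ! {
    loop {}
}
```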

flohofwoe10 hours ago

> but that's because this was the first time it is possible

Using SPIR-V as an abstraction layer for GPU code across all 3D APIs is hardly a new thing (via SPIRV-Cross, Naga, or Tint), and the LLVM SPIR-V backend is also well established by now.

LegNeato9 hours ago

Those don't include CUDA and don't include the CPU host side AFAIK.

SPIR-V isn't the main abstraction layer here, Rust is. This is the first time it is possible for Rust host + device across all these platforms and OSes and device apis.

You could make an argument that CubeCL enabled something similar first, but it is more a DSL that looks like Rust rather than the Rust language proper (but still cool).

winocm7 hours ago

LLVM's SPIR-V backend is a bit... questionable when it comes to code generation.

90s_dev12 hours ago

"It's only complex because it's new, it will get less complex over time."

They said the same thing about browser tech. Still not simpler under the hood.

a99c43f2d56550412 hours ago

As far as I understand, there was a similar mess with CPUs some 50 years ago: All computers were different and there was no such thing as portable code. Then problem solvers came up with abstractions like the C programming language, allowing developers to write more or less the same code for different platforms. I suppose GPUs are slowly going through a similar process now that they're useful in many more domains than just graphics. I'm just spitballing.

jcranmer11 hours ago

The first portable programming language was, uh, Fortran. Indeed, by the time the Unix developers are thinking about porting to different platforms, there are already open source Fortran libraries for math routines (the antecedents of LAPACK). And not long afterwards, the developers of those libraries are going to get together and work out the necessary low-level kernel routines to get good performance on the most powerful hardware of the day--i.e., the BLAS interface that is still the foundation of modern HPC software almost 50 years later.

(One of the problems of C is that people have effectively erased pre-C programming languages from history.)

luxuryballs12 hours ago

now that is a relevant username

lukan11 hours ago

Who said that?

Yoric9 hours ago

Who ever said that?

turnsout11 hours ago

Complexity is not inherently bad. Browsers are more or less exactly as complex as they need to be in order to allow users to browse the web with modern features while remaining competitive with other browsers.

This is Tesler's Law [0] at work. If you want to fully abstract away GPU compilation, it probably won't get dramatically simpler than this project.

  [0]: https://en.wikipedia.org/wiki/Law_of_conservation_of_complexity
jpc08 hours ago

> Complexity is not inherently bad. Browsers are more or less exactly as complex as they need to be in order to allow users to browse the web with modern features while remaining competitive with other browsers.

What a sad world we live in.

Your statement is technically true, the best kind of true…

If work went into standardising a better API than the DOM we might live in a world without hunger, where all our dreams could become reality. But this is what we have, a steaming pile of crap. But hey, at least it’s a standard steaming pile of crap that we can all rally around.

I hate it, but I hate it the least of all the options presented.

bromantic10 hours ago

[dead]

dahart11 hours ago

Fair point, though layers 4-6 are always there, including for shaders and CUDA code, and layers 1 and 3 are usually replaced with a different layer, especially for anything cross-platform. So this Rust project might be adding a layer of abstraction, but probably only one-ish.

I work on layers 4-6 and I can confirm there’s a lot of hidden complexity in there. I’d say there are more than 3 layers there too. :P

thrtythreeforty13 hours ago

Realistically though, a user can only hope to operate at (3) or maybe (4). So not as much of an add. (Abstraction layers do not stop at 6, by the way, they keep going with firmware and microarchitecture implementing what you think of as the instruction set.)

ivanjermakov12 hours ago

Don't know about you, but I consider 3 levels of abstraction a lot, especially when it comes to such black-boxy tech like GPUs.

I suspect debugging this Rust code is impossible.

yjftsjthsd-h10 hours ago

You posted this comment in a browser on an operating system running on at least one CPU using microcode. There are more layers inside those (the OS alone contains a laundry list of abstractions). Three levels of abstractions can be fine.

coolsunglasses7 hours ago

Debugging the Rust is the easy part. I write vanilla CUDA code that integrates with Rust and that one is the hard part. Abstracting over the GPU backend w/ more Rust isn't a big deal, most of it's SPIR-V anyway. I'm planning to stick with vanilla CUDA integrating with Rust via FFI for now but I'm eyeing this project as it could give me some options for a more maintainable and testable stack.

wiz21c9 hours ago

shader code is not exactly easy to debug for a start...

dontlaugh9 hours ago

It's not all that much worse than a compiler and runtime targeting multiple CPU architectures, with different calling conventions, endianness, etc., and at the hardware level different firmware and microcode.

ben-schaaf10 hours ago

That looks like the graphics stack of a modern game engine. Most have some kind of shader language that compiles to SPIR-V and an abstraction over the graphics APIs, and the rest of your list is just the graphics stack.

rhaps0dy11 hours ago

Though if the Rust compiles to NVVM, it's exactly as bad as C++ CUDA, no?

flohofwoe10 hours ago

Tbf, Proton on Linux is about the same number of abstraction layers, and that sometimes has better performance than Windows games running on Windows.

kelnos6 hours ago

> It's also questionable how well performance relevant platform specifics survive all these layers.

Fair point, but one of Rust's strengths is the many zero-cost abstractions it provides. And the article talks about how the code compiles to GPU-specific machine code or IR. Ultimately the efficiency and optimization abilities of that compiler are going to determine how well your code runs, just like any other compilation process.

This project doesn't even add that much. In "traditional" GPU code, you're still going to have:

1. Domain specific GPU code in whatever high-level language you've chosen to work in for the target you want to support. (Or more than one, if you need it, which isn't fun.)

...

3. Compiler that compiles your GPU code into whatever machine code or IR the GPU expects.

4. Vulkan, OpenGL, DX12 and Metal...

5. Drivers...

6. Hardware...

So yes, there's an extra layer here. But I think many developers will gladly take on that trade off for the ability to target so many software and hardware combinations in one codebase/binary. And hopefully as they polish the project, debugging issues will become more straightforward.

ajross11 hours ago

There is absolutely an xkcd 927 feel to this.

But that's not the fault of the new abstraction layers, it's the fault of the GPU industry and its outrageous refusal to coordinate on anything, at all, ever. Every generation of GPU from every vendor has its own toolchain, its own ideas about architecture, its own entirely hidden and undocumented set of quirks, its own secret sauce interfaces available only in its own incompatible development environment...

CPUs weren't like this. People figured out a basic model for programming them back in the 60's and everyone agreed that open docs and collabora-competing toolchains and environments were a good thing. But GPUs never got the memo, and things are a huge mess and remain so.

All the folks up here in the open source community can do is add abstraction layers, which is why we have thirty seven "shading languages" now.

jcranmer9 hours ago

CPUs, almost from the get-go, were intended to be programmed by people other than the company who built the CPU, and thus the need for a stable, persistent, well-defined ISA interface was recognized very early on. But for pretty much every other computer peripheral, the responsibility for the code running on those embedded processors has been with the hardware vendor, their responsibility ending at providing a system library interface. With literal decades of experience in an environment where they're freed from the burden of maintaining stable low-level details, all of these development groups have quite jealously guarded access to that low level and actively resist any attempts to push the interface layers lower.

As frustrating as it is, GPUs are actually the most open of the accelerator classes, since they've been forced to accept a layer like PTX or SPIR-V; trying to do that with other kinds of accelerators is really pulling teeth.

yjftsjthsd-h10 hours ago

In fairness, the ability to restructure at will probably does make it easier to improve things.

ajross9 hours ago

The fact that the upper parts of the stack are so commoditized (i.e. CUDA and WGSL do not in fact represent particularly different modes of computation, and of course the linked article shows that you can drive everything pretty well with scalar rust code) argues strongly against that. Things aren't incompatible because of innovation, they're incompatible because of expedience and paranoia.

Ygg210 hours ago

Improve things for who?

slashdev11 hours ago

This is a little crude still, but the fact that this is even possible is mind blowing. This has the potential, if progress continues, to break the vendor-locked nightmare that is GPU software and open up the space to real competition between hardware vendors.

Imagine a world where machine learning models are written in Rust and can run on both Nvidia and AMD.

To get max performance you likely have to break the abstraction and write some vendor-specific code for each, but that's an optimization problem. You still have a portable kernel that runs cross platform.

willglynn5 hours ago

You might be interested in https://burn.dev, a Rust machine learning framework. It has CUDA and ROCm backends among others.

slashdev4 hours ago

I am interested, thanks for sharing!

bwfan12310 hours ago

> Imagine a world where machine learning models are written in Rust and can run on both Nvidia and AMD

Not likely in the next decade, if ever. Unfortunately, the entire ecosystems of JAX and Torch are Python-based. Imagine retraining all those devs to use Rust tooling.

piker15 hours ago

> Existing no_std + no alloc crates written for other purposes can generally run on the GPU without modification.

Wow. That at first glance seems to unlock A LOT of interesting ideas.

boredatoms5 hours ago

I guess performance would be very different if things were initially assumed to run on a CPU.

dvtkrlbs2 hours ago

I think it could be improved a lot by niche optimization passes on the codegen backend. Kinda like the autovectorization and similar optimizations on the CPU backends.

tormeh3 hours ago

If you're like me you want to write code that runs on the GPU but you don't, because everything about programming GPUs is pain. That's the use-case I see for rust-gpu. Make CPU devs into GPU devs with an acceptable performance penalty. If you're already a GPU programmer and know cuda/vulkan/metal/dx inside out then something like this will not be interesting to you.

hardwaresofton13 hours ago

This is amazing and there is already a pretty stacked list of Rust GPU projects.

This seems to be at an even lower level of abstraction than burn[0] which is lower than candle[1].

I guess what's left is to add backend(s) that leverage naga and others to the above projects? Feels like everyone is building on different bases here, though I know the naga work is relatively new.

[EDIT] Just to note, burn is the one that focuses most on platform support but it looks like the only backend that uses naga is wgpu... So just use wgpu and it's fine?

Yeah basically wgpu/ash (vulkan, metal) or cuda

[EDIT2] Another crate closer to this effort:

https://github.com/tracel-ai/cubecl

[0]: https://github.com/tracel-ai/burn

[1]: https://github.com/huggingface/candle/

LegNeato12 hours ago

You can check out https://rust-gpu.github.io/ecosystem/ as well, which mentions CubeCL.

hardwaresofton3 hours ago

Thanks for the pointer, and thanks for the huge amount of contributions around rust-gpu. Outstanding work.

ivanjermakov12 hours ago

Is it really "Rust" on GPU? Skimming through the code, it looks like shader language within proc macro heavy Rust syntax.

I think GPU programming is different enough to require special care. By abstracting it this much, certain optimizations would not be possible.

dvtkrlbs12 hours ago

It is normal Rust code compiled to SPIR-V bytecode.

LegNeato12 hours ago

And it uses 3rd party deps from crates.io that are completely GPU unaware.

gedw9912 hours ago

I am overjoyed to see this.

They are doing a huge service for developers that just want to build stuff and not get into the platform wars.

https://github.com/cogentcore/webgpu is a great example. I code in Go and just need stuff to work on everything, and this gets it done, so I can use the GPU on everything.

Thank you, Rust!!

omnicognate15 hours ago

Zig can also compile to SPIR-V. Not sure about the others.

(And I haven't tried the SPIR-V compilation yet, just came across it yesterday.)

arc61914 hours ago

Nim too, as it can use Zig as a compiler.

There's also https://github.com/treeform/shady to compile Nim to GLSL.

Also, more generally, there's an LLVM-IR->SPIR-V compiler that you can use for any language that has an LLVM back end (Nim has nlvm, for example): https://github.com/KhronosGroup/SPIRV-LLVM-Translator

That's not to say this project isn't cool, though. As usual with Rust projects, it's a bit breathy with hype (eg "sophisticated conditional compilation patterns" for cfg(feature)), but it seems well developed, focused, and most importantly, well documented.

It also shows some positive signs of being dog-fooded, and the author(s) clearly intend to use it.

Unifying GPU back ends is a noble goal, and I wish the author(s) luck.

revskill14 hours ago

I do not get u.

omnicognate14 hours ago

What don't you get?

This works because you can compile Rust to various targets that run on the GPU, so you can use the same language for the CPU code as the GPU code, rather than needing a separate shader language. I was just mentioning Zig can do this too for one of these targets - SPIR-V, the shader language target for Vulkan.

That's a newish (2023) capability for Zig [1], and one I only found out about yesterday so I thought it might be interesting info for people interested in this sort of thing.

For some reason it's getting downvoted by some people, though. Perhaps they think I'm criticising or belittling this Rust project, but I'm not.

[1] https://github.com/ziglang/zig/issues/2683#issuecomment-1501...

rbanffy14 hours ago

> Though this demo doesn't do so, multiple backends could be compiled into a single binary and platform-specific code paths could then be selected at runtime.

That’s kind of the goal, I’d assume: writing generic code and having it run on anything.

maratc11 hours ago

> writing generic code and having it run on anything.

That has been already done successfully by Java applets in 1995.

Wait, Java applets were dead by 2005, which leads me to assume that the goal is different.

DarmokJalad17016 hours ago

> That has been already done successfully by Java applets in 1995.

The first video card with a programmable pixel shader was the Nvidia GeForce 3, released in 2001. How would Java applets be running on GPUs in 1995?

Besides, Java cannot even be compiled for GPUs as far as I know.

bobajeff12 hours ago

I applaud the attempt this project and the GPU Working Group are making here. I can't overstate how much work any effort to make the developer experience for heterogeneous compute (CUDA, ROCm, SYCL, OpenCL), or even just GPUs (Vulkan, Metal, DirectX, WebGPU), nicer, more cohesive, and less fragmented has ahead of it.

reactordev8 hours ago

This is amazing. With time, we should be able to write GPU programs semantically identical to user land programs.

The implications of this for inference are going to be huge.

5pl1n73r4 hours ago

CUDA programming consists of writing a kernel parameterized by its thread ID, which is used to slightly alter its behavior while it executes on many hundreds of GPU cores; it's very different from general-purpose programming. Memory and branching behave differently there. I'd say at best it will be like traditional programs and libraries with multiple incompatible event loops.

melodyogonna7 hours ago

Very interesting. I wonder about the model of storing the GPU IR in binary for a real-world project; it seems like that could bloat the binary size a lot.

I also wonder about the performance of just compiling for a target GPU AOT. These GPUs can be very different even if they come from the same vendor. This seems like it would compile to the lowest common denominator for each vendor, leaving performance on the table. For example, Nvidia H-100s and Nvidia Blackwell GPUs are different beasts, with specialised intrinsics that are not shared, and to generate a PTX that would work on both would require not using specialised features in one or both of these GPUs.

Mojo solves these problems by JIT compiling GPU kernels at the point where they're launched.

LegNeato6 hours ago

The underlying projects support loading at runtime, so you could have as many AOT-compiled kernel variants as you want and load the right one. Disk is cheap. You could even ship rustc if you really wanted for "JIT" (lol), which is maybe not super crazy, as Mojo is LLVM-based anyway. There is of course the warmup/stuttering problem, but that is a separate issue and (sometimes) not an issue for compute, versus graphics where it is a bigger issue. I have some thoughts on how to improve the status quo with things unique to Rust, but it's too early to know.

One of the issues with GPUs as a platform is that runtime probing of capabilities is... rudimentary, to say the least. Rust has to deal with similar stuff with CPUs + SIMD, FWIW. AOT vs JIT is not a new problem domain and there are no silver bullets, only tradeoffs. Mojo hasn't solved anything in particular; their position in the solution space (JIT) has the same tradeoffs as anyone else doing JIT.
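
A rough sketch of the "ship several variants, pick one at runtime" idea; the enum, the empty placeholder blobs, and the selection logic are all invented for illustration:

```rust
// Which device API the probed hardware wants.
enum Backend {
    Vulkan,
    Cuda,
}

// In a real build these would be AOT-compiled artifacts embedded in the
// binary or loaded from disk; empty slices stand in for them here.
static SPIRV_KERNEL: &[u8] = &[];
static PTX_KERNEL: &[u8] = &[];

// Hand the loaded blob to whichever runtime matches the probed backend.
fn kernel_for(backend: Backend) -> &'static [u8] {
    match backend {
        Backend::Vulkan => SPIRV_KERNEL,
        Backend::Cuda => PTX_KERNEL,
    }
}
```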

melodyogonna5 hours ago

> The underlying projects support loading at runtime, so you could have as many AOT compiled kernel variants and load the one you want.

I'm not sure I understand. What underlying projects? The only reference to loading at runtime I see in the post is loading the AOT-compiled IR.

> There is of course the warmup / stuttering problem, but that is a separate issue and (sometimes) not an issue for compute vs graphics where it is a bigger issue.

It should be worth noting that Mojo itself is not JIT compiled; Mojo has a GPU infrastructure that can JIT compile Mojo code at runtime [1].

> One of the issues with GPUs as a platform is runtime probing of capabilities is... rudimentary to say the least.

Also not an issue in Mojo when you combine the selective JIT compilation I mentioned with powerful compile-time programming [2].

1. https://docs.modular.com/mojo/manual/gpu/intro-tutorial#4-co... - here the kernel "print_threads" will be jit compiled

2. https://docs.modular.com/mojo/manual/parameters/

KingLancelot5 hours ago

[dead]

evrennetwork15 hours ago

[dead]

jdbohrman12 hours ago

Why though

max-privatevoid12 hours ago

It would be great if Rust people learned how to properly load GPU libraries first.

zbentley11 hours ago

Say more?

max-privatevoid9 hours ago

Rust GPU libraries such as wgpu and ash rely on external libraries such as vulkan-loader to load the actual ICDs, but for some reason Rust people really love dlopening them instead of linking to them normally. Then it's up to the consumer to configure their linker flags so RPATH gets set correctly when needed, but because most people don't know how to use their linker, they usually end up with dumb hacks like these instead:

https://github.com/Rust-GPU/rust-gpu/blob/87ea628070561f576a...

https://github.com/gfx-rs/wgpu/blob/bf86ac3489614ed2b212ea2f...

viraptor6 hours ago

Isn't that typically done with Vulkan because apps don't know which implementation they'll want to use later? On a multi-GPU system you may want to switch between (for example) the Intel and NVIDIA implementations at runtime rather than linking directly.

LegNeato9 hours ago

Can you file a bug on rust-gpu? I'd love to look into it (I am unfamiliar with this area).

guipsp8 hours ago

Where's the dumb hack?