
CUDA.jl v3.3: union types, debug info, graph APIs

156 points · 3 years ago · juliagpu.org
xvilka · 3 years ago

I wish more attention were paid to open-source alternatives to CUDA, such as AMD's ROCm[1][2] and the Julia framework that uses it, AMDGPU.jl[3]. It's sad to see so many people praise NVIDIA, which is an enemy of open source, openly hostile to anything except its oversized proprietary binary blobs.

[1] https://rocmdocs.amd.com/en/latest/index.html

[2] https://github.com/RadeonOpenCompute/ROCm

[3] https://github.com/JuliaGPU/AMDGPU.jl

jpsamaroo · 3 years ago

AMD has done great work in a very short amount of time, but let's not forget that they're still very new to the GPU compute game. The ROCm stack is overall still pretty buggy, and definitely hard to build in ways other than what AMD deems officially supported.

As AMDGPU.jl's maintainer, I do certainly appreciate more users using AMDGPU.jl if they have the ability to, but I don't want people to think that it's anywhere close in terms of maturity, overall performance, and feature-richness compared to CUDA.jl. If you already have access to an NVIDIA GPU, it's painless to setup and should work really well for basically anything you want to with it. I can't say the same about AMDGPU.jl right now (although we are definitely getting there).

zamalek · 3 years ago

Genuine question: why can't Vulkan Compute be used instead of CUDA and ROCm?

Julia has all the tools required to "magically" transform code into SPIR-V kernels. Couldn't `:()` syntax be used to create kernels?
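For what it's worth, CUDA.jl doesn't need `:()` quoting at all: kernels are ordinary Julia functions that the GPUCompiler-based stack compiles from the same typed IR the CPU path uses. A minimal sketch (assumes an NVIDIA GPU plus the CUDA.jl package; not representative of SPIR-V/Vulkan, which is the part that doesn't exist yet):

```julia
using CUDA

# An ordinary Julia function serving as a GPU kernel: y .+= a .* x
function axpy!(y, a, x)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(y)
        @inbounds y[i] += a * x[i]
    end
    return nothing
end

x = CUDA.ones(1024)          # Float32 arrays on the device
y = CUDA.zeros(1024)
@cuda threads=256 blocks=4 axpy!(y, 2f0, x)  # JIT-compiles and launches
@assert Array(y) == fill(2f0, 1024)
```

The same approach could in principle emit SPIR-V instead of PTX, which is what oneAPI.jl does for Intel hardware.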

eigenspace · 3 years ago
snicker7 · 3 years ago

oneAPI.jl has a Julia-to-SPIR-V compiler (SPIR-V being the same IR Vulkan consumes, though oneAPI targets it via Level Zero rather than Vulkan Compute).

pjmlp · 3 years ago

This is not a charity.

If the competition to CUDA wants to be taken seriously, then it should provide the tooling and polyglot support to do so.

OpenCL was stuck in its "C only with printf debugging" mentality for too long; now it is too late.

AMD ROCm still isn't available on Windows.

If I learned anything from wasting my time with Khronos stuff, it's that I should have switched to DirectX much earlier.

krapht · 3 years ago

Most people hack on this stuff for work, and time is money. OpenCL is just a lot less productive than CUDA for most tasks, and the NVIDIA price premium isn't big enough to make people switch over.

andi999 · 3 years ago

Agreed. I've been doing CUDA as part of my job since 2009, and it works very smoothly; you can write non-trivial programs after less than a week of learning (if you already know C). When OpenCL got a little hype, I had a look, couldn't get a simple example running, and asked myself: do I really want to deal with all this boilerplate code? Also, at least a while back, there was no good FFT outside of CUDA, though maybe that has changed.

mixedCase · 3 years ago

Maybe if AMD starts caring about ROCm, users might. To this day Navi and newer cards are unsupported.

kwertzzz · 3 years ago

This is really the main problem. As far as I know, ROCm requires quite expensive data-center GPUs (if you want a current one), which makes it quite difficult to build a community around ROCm.

dvfjsdhgfv · 3 years ago

I would love to, but for practically all the tasks I need, only the CUDA backend is available, not ROCm, and that is of course bad not just for open source but also for owners of AMD GPUs. The popularity of Nvidia's solution is so overwhelming that AMD would need to do something revolutionary in order to catch up (supporting Windows and macOS wouldn't hurt either).

sebow · 3 years ago

AMD now has the capacity to make something equivalent and at least on par. (Remember that they're much more open to open projects than Nvidia, at least on Linux.)

They just have to fix their documentation, release plans, and roadmap. ROCm looks decent enough on paper, but when you try to install it and get started, spending hundreds of dollars more on locked-down hardware and proprietary libraries suddenly looks more appealing.

up6w6 · 3 years ago

A fun fact: GPUCompiler, which compiles the code that runs on GPUs, is also the current way to generate standalone binaries without bundling the whole ~200 MB Julia runtime into the binary.

https://github.com/JuliaGPU/GPUCompiler.jl/

https://github.com/tshort/StaticCompiler.jl/

snicker7 · 3 years ago

AKA Julia does GPU better than CPU.

aardvarkr · 3 years ago

Who knew that Julia could do CUDA work too? Every day I grow more and more impressed with the language.

eigenspace · 3 years ago

It also happens to be one of the easiest and most reliable ways I know of to install CUDA on your machine. Everything is handled through the artifact system, so you don't have to mess with downloading it yourself and making sure you have the right versions and such.

(Before someone complains, you can also opt out of this and direct the library to a version you installed yourself)
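A sketch of that opt-out, assuming the mechanism CUDA.jl 3.x documented at the time (check the docs for the version you actually run):

```shell
# Tell CUDA.jl 3.x to skip the artifact download and discover a locally
# installed CUDA toolkit instead (must be set before `using CUDA`).
export JULIA_CUDA_USE_BINARYBUILDER=false

# Verify which toolkit and libraries were picked up.
julia -e 'using CUDA; CUDA.versioninfo()'
```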

krastanov · 3 years ago

Could you elaborate a bit on this? I know a gigantic bunch of libraries is involved that is usually terrible to install, and Julia does it for you in the equivalent of a Python virtualenv. However, aren't there also Linux kernel components that are necessary? How are those installed?

KenoFischer · 3 years ago

Yes, there's a kernel component which needs to be installed, but that's usually pretty easy these days, because it's usually one of

1) You're using a container-ish environment where the host kernel has the CUDA drivers installed anyway (but your base container image probably doesn't have the userspace libraries)

2) The kernel driver comes with your OS distribution, but the userspace libraries are outdated (userspace libraries here includes things like JIT compilers, which have lots of bugs and need frequent updates) or don't have some of the optional components that have restrictive redistribution clauses

3) Your sysadmin installed everything, but then helpfully moved the CUDA libraries into some obscure system specific directory where no software can find it.

4) You need to install the kernel driver yourself, so you find it on the NVIDIA website, but don't realize there are another five separate installers you need for all the optional libraries.

5) Maybe you have the NVIDIA-provided libraries, but then you need to figure out how to get the third-party libraries that depend on them installed. Given the variety of ways to install CUDA, this is a pretty hard problem to solve for other ecosystems.

In Julia, as long as you have the kernel driver, everything else will get automatically set up and installed for you. As a result, people are usually up and running with GPUs in a few minutes in Julia.
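A quick way to sanity-check that setup (a sketch assuming CUDA.jl has been added with `] add CUDA`; both calls are part of CUDA.jl's public API):

```julia
using CUDA  # first use triggers the artifact download of a matching toolkit

CUDA.functional()   # true if the kernel driver and downloaded toolkit line up
CUDA.versioninfo()  # prints driver, toolkit, and library versions in use
```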

ChrisRackauckas · 3 years ago

CUDA.jl installs the whole CUDA toolkit and associated libraries like cuDNN for you if you don't have them (only the kernel driver has to be present already). Those are all vendored via the Yggdrasil system so that users don't have to deal with it.

krastanov · 3 years ago

+1

version_five · 3 years ago

I last experimented with CUDA.jl a year ago, and it was very usable then. This is a good reminder to re-evaluate the Julia deep learning ecosystem. If I were working for myself I would definitely try to do more with Julia for machine learning. Realistically, Python has such an established base that it will take some time to get orgs that are already all-in on Python to come over.

dnautics · 3 years ago

I think it's not dumb to target greenfield users: just installing Python GPU wheels is often difficult enough that several companies exist (indirectly) because it's so difficult to do right (e.g. selling a GPU PC with that stuff preinstalled).

queuebert · 3 years ago

I just finished setting up a new machine to run some Kaggle stuff. Both TensorFlow and PyTorch had issues with CUDA versions and dependencies that weren't immediately fixed by a clean virtualenv, while both Knet.jl and Flux.jl installed flawlessly.

wdroz · 3 years ago

For PyTorch and TensorFlow, you can use conda to install them with the right CUDA and cuDNN versions.
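Roughly what that looked like at the time (the version pins and channel names are illustrative, not authoritative; the PyTorch "get started" selector generates the current incantation):

```shell
# conda resolves a cudatoolkit build matching the PyTorch binaries
conda install pytorch torchvision cudatoolkit=11.1 -c pytorch -c nvidia

# conda's tensorflow-gpu package pulls in its own CUDA/cuDNN dependencies
conda install tensorflow-gpu
```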

kwertzzz · 3 years ago

+1

pabs3 · 3 years ago

Are there similar things for other types of GPUs?

Edit: the site has one project per GPU type, shame there isn't one interface that works with every GPU type instead.

eigenspace · 3 years ago

https://github.com/JuliaGPU/AMDGPU.jl

https://github.com/JuliaGPU/oneAPI.jl

These are both less mature than CUDA.jl, but are in active development.

> Edit: the site has one project per GPU type, shame there isn't one interface that works with every GPU type instead.

That would be https://juliagpu.github.io/KernelAbstractions.jl

jpsamaroo · 3 years ago

For kernel programming, https://github.com/JuliaGPU/KernelAbstractions.jl (shortened to KA) is what the JuliaGPU team has been developing as a unified programming interface for GPUs of any flavor. It's not significantly different from the (basically identical) interfaces exposed by CUDA.jl and AMDGPU.jl, so it's easy to transition to. I think the event system in KA is also far superior to CUDA's native synchronization system, since it allows one to easily express graphs of dependencies between kernels and data transfers.
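A sketch of what a KA kernel and its event-based synchronization looked like under the API of that era (assumes the KernelAbstractions.jl package; the CPU backend is shown, and swapping `CPU()` for the CUDA device type is what retargets the same kernel to an NVIDIA GPU):

```julia
using KernelAbstractions

# Vendor-neutral kernel: doubles every element of A
@kernel function mul2!(A)
    I = @index(Global)
    @inbounds A[I] *= 2
end

A = ones(1024)
kernel! = mul2!(CPU(), 64)             # instantiate for a backend, workgroup 64
event = kernel!(A, ndrange=length(A))  # launch returns an event, non-blocking
wait(event)                            # explicit synchronization point
```

The returned events are what jpsamaroo describes: passing them as dependencies of later launches expresses a graph of kernels and data transfers rather than a single implicit stream.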

krastanov · 3 years ago

These libraries provide the same API, so from a user perspective, as long as you do not need low-level access, it does not matter what your GPU is.

However, the low-level library for AMD GPUs in Julia is still more alpha-quality.

Karrot_Kream · 3 years ago

Really looking forward to Turing.jl gaining CUDA support