
Nvidia releases NVLM 1.0 72B open weight model

167 points · 5 months ago · huggingface.co
imjonse · 5 months ago

It is a family of multimodal models based on the pretrained Qwen2-72B-Instruct LLM and the InternViT vision encoder. There are three variants, differentiated by how the vision tokens are used: decoder-only (like the majority of existing VLMs), cross-attention, and a hybrid. Only the first seems to be on Hugging Face at the moment.
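
To make the variants concrete, here is a toy sketch of the two pure token-routing styles (illustrative only; the dimensions and modules are made up, and this is not NVLM's actual code):

    import torch
    import torch.nn as nn

    d_model, n_text, n_img = 64, 10, 4               # toy sizes
    text_tokens = torch.randn(1, n_text, d_model)
    image_tokens = torch.randn(1, n_img, d_model)    # stand-in for vision-encoder output

    # Decoder-only: vision tokens are concatenated into the text sequence and
    # flow through the same self-attention stack as the text tokens.
    decoder_input = torch.cat([image_tokens, text_tokens], dim=1)  # (1, 14, 64)

    # Cross-attention: text tokens attend to vision tokens through dedicated
    # cross-attention layers; vision tokens never join the decoder sequence.
    xattn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
    fused, _ = xattn(query=text_tokens, key=image_tokens, value=image_tokens)

The hybrid variant mixes the two approaches.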

Also, they seem to train only on publicly available data, concluding that quality is more important than scale.

keyboardsamurai · 5 months ago

It has a non-commercial cc-by-nc-4.0 license. I would guess the only way to use this in production is to use Nvidia's data centers to host it? Or are there other ways?

orlp · 5 months ago

Not a lawyer, not legal advice, but... the legal status quo is that neural network outputs are not copyrightable. They are currently considered neither made by humans nor a derivative work of the training material / network weights (assuming the network isn't regurgitating copyrighted material verbatim).

The cc-by-nc-4.0 license applies to the network weights. The only thing non-commercial about the license is that it restricts how you may reproduce the licensed material:

> reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and

As long as you are not selling the network weights themselves, nothing in the license prevents you from evaluating the neural network for commercial purposes and selling the outputs. In 'production' you will have to download the weights directly from Nvidia themselves (or another third party distributing the network weights non-commercially in good faith), though; you can't copy the network weights onto your commercial inference server from another one of your commercial deployment servers. Or at least, it gets more dicey there and may be considered commercial reproduction, so better to avoid it.

For similar reasons you may 3D print a CC-BY-NC model of a tool and use that tool in your commercial workshop, you may use a CC-BY-NC compiler of a language to compile commercial programs, etc.

SonOfLilit · 5 months ago

Not a lawyer, but work with lawyers a lot, and this type of rules-lawyering doesn't tend to work in the legal profession. Consult a lawyer before trying any of this.

Majromax · 5 months ago

> The cc-by-nc-4.0 license applies to the network weights.

I'm not even sure if network weights are copyrightable independently of the code and data used to generate them. In my personal (not a lawyer) view, the weights of a neural network are the product of a mechanical transformation process much like a compiler or assembler, and we don't consider a compiled binary to have a copyright independent of its source code.

I still wouldn't openly try to violate a purported weights license, mind you, both because it's rude to ignore the authors' wishes and because it would not be fun being sued by Nvidia or any other deep-pocketed AI company.

dindresto · 5 months ago

This is the first time I've read this interpretation regarding CC-BY-NC model weights. Are there any sources to back it up?

Tepix · 5 months ago

It's an interesting question indeed!

Creative Commons themselves write at https://creativecommons.org/faq/#can-i-apply-a-creative-comm... :

"Can I apply a Creative Commons license to software? We recommend against using Creative Commons licenses for software. Instead, we strongly encourage you to use one of the very good software licenses which are already available."

Of course, LLM weights aren't traditional software...

impossiblefork · 5 months ago

Even selling the network weights shouldn't matter, since there's no copyright.

The problem is if you happen to sign any agreement with NVIDIA in order to get the weights; the real constraint is whatever contracts you may be bound by.

resource_waste · 5 months ago

> the legal status quo is that neural network outputs are not copyrightable.

Can't this flip on a dime and a billion dollar company lose billions?

rd42 · 5 months ago

I think the only relevant part to note here is that this model showed improved text-only performance after multimodal training. I wonder if this translates to Llama models as well? Is it possible to extend Llama 3.1 405B with multimodal training to create another SOTA large model?

reissbaker · 5 months ago

I think the answer here is "it depends." The Llama-3.2 series is an extended version of the Llama-3.1 series with multimodal (image) training, but they kept the language model weights frozen and only updated the new image weights. So in the end, the 3.2 series benchmarks identically to 3.1 on text-only tasks; the image weights provided no value to the language model weights.

Allowing the language model weights to be updated during training could potentially result in better performance on both tasks, though, if Nvidia's result replicates. I could believe that it might: after all, more diverse data is more diverse data, and the model will be forced during training to generalize more.
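
For the curious, the freeze-vs-update distinction is a one-liner in most frameworks. A minimal PyTorch sketch (the module layout here is hypothetical, not Llama's or NVLM's actual code):

    import torch.nn as nn

    class ToyVLM(nn.Module):
        def __init__(self, language_model: nn.Module, image_adapter: nn.Module):
            super().__init__()
            self.language_model = language_model  # pretrained LLM weights
            self.image_adapter = image_adapter    # new multimodal weights

    model = ToyVLM(nn.Linear(512, 512), nn.Linear(512, 512))

    # Llama-3.2-style: freeze the language model, so text-only behavior
    # (and benchmarks) cannot change during multimodal training.
    for p in model.language_model.parameters():
        p.requires_grad = False

    # NVLM-style joint training would instead leave every parameter trainable,
    # letting the multimodal data update the language weights too.
    trainable = [p for p in model.parameters() if p.requires_grad]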

imjonse · 5 months ago

Llama-3-V models do that, but are not published.

optimalsolver · 5 months ago

Reminder that Nvidia is still the only company making any money out of the "AI revolution".

danpalmer · 5 months ago

That's natural given that they mostly produce hardware several layers of abstraction away from the end-user value: companies need to buy the hardware before they can start delivering their own value. AI model training is not value by itself if there's no use case for the model that can be charged for.

I see it playing out one of two ways. Either Nvidia are selling shovels in a gold rush, the rush will end, and the business will dry up (after they have made a lot of money!). Or AI sticks/takes off, and Nvidia are selling a commodity too far from the value, like most electronic component manufacturers, and they'll maintain significant market share but have their margins reduced to a fraction of what they were before (after they made a lot of money!).

The human value doesn't come from ML training or inference; it comes from taking a better photo. The business value comes from drafting a better email. Those companies closer to that value will likely do better in the long run, as they always have done.

Der_Einzige · 5 months ago

Wrong

Midjourney is profitable. And all the acquired startups (e.g. Streamlit or MosaicML) that made millions per employee "made money" for the people who cared.

dartos · 5 months ago

Midjourney is one, but the others are not. Plenty of people “made money” at Twitter, but the company is a money pit.

OP was likely talking about profitability.

FWIW I wouldn't really count Streamlit as an AI company.

saagarjha · 5 months ago

Twitter was (mildly) profitable.

Bloedcoins · 5 months ago

I'm pretty sure https://www.topazlabs.com/ is also making money with the AI revolution.

Also, Klarna threw out 700 people; they probably make money with AI.

And i found this article: https://www.ft.com/content/a9a192e3-bfbc-461e-a4f3-112e63d0b...

Bloedcoins · 5 months ago

It's a revolution. Don't undersell this.

There has never been any technology like LLMs; nothing has come close to what ChatGPT and co. can do in terms of understanding random human input.

My startup doesn't need to make money with it directly, but for us it increased our data quality on text and images.

I'm also quite happy to pay $10-20 per month for the random things LLMs do quite well across different use cases, like creating some scripts, etc.

GaggiX · 5 months ago

That's not true; there are plenty of companies that make a profit. Midjourney, for example, is an obvious one.

dartos · 5 months ago

Are there others?

GaggiX · 5 months ago

I use NovelAI and that's also profitable. I would be surprised if Elevenlabs wasn't profitable right now.

a2128 · 5 months ago

"When there is a gold rush, sell shovels"

amelius · 5 months ago

They started the gold rush.

jiggawatts · 5 months ago

I'm pretty sure OpenAI started it; they just used NVIDIA shovels to dig the first mines.

Refusing23 · 5 months ago

I have yet to hear of anyone actually using AI for something properly.

The only exception I'm excited about is the non-main characters in video games, where a lot of the random NPCs can now actually bring some more fun to the game.

lynx23 · 5 months ago

Vision models are a godsend for blind users. I use a vision model to sort my laundry, for instance...

And translation and grammar/spell checking is also at a level that was unthinkable before LLMs hit.

But that's it, really. The "talking machine" aspect of it is more and more being exposed as totally useless.

riffraff · 5 months ago

> I use a vision model to sort my laundry

you built a robot that sorts laundry? Tell us more!

PeterStuer · 5 months ago

I run in production a system that does LLM translation and summarization of hundreds of sources in dozens of languages. Users are extremely satisfied with the results, which are far cheaper and far higher quality than what was available before.

Filligree · 5 months ago

Which system is this?

PeterStuer · 5 months ago

It is an in-house system in a niche market, not available for sale. I use the OpenAI API for now with very good results, though long term I would prefer an on-premise solution if quality and scope (in terms of supported languages) can be maintained. Code-wise, as you can imagine, the AI is a very small part of the codebase, but without it the system would be pretty useless.
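
For readers unfamiliar with this kind of pipeline, the core call is small. A minimal sketch using the OpenAI Python client (the model name and prompt are my assumptions, not details from the system described above):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def translate_and_summarize(text: str, target_language: str = "English") -> str:
        """Translate an article into the target language, then summarize it."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; pick whatever fits
            messages=[
                {"role": "system",
                 "content": f"Translate the user's text into {target_language}, "
                            "then summarize it in three sentences."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content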

I think many underestimate the true usefulness the current generation of AI has already achieved, because a lot of it lives in traditional, boring, bespoke, or in-house LoB systems, whereas the press always focuses on public B2C.

Bloedcoins · 5 months ago

I have seen plenty of very good internal AI demos which we are adding to our products: GenAI stuff, image analysis, lightweight agents that answer proper questions.

I used ChatGPT 3 days ago to generate a script for me. It saved me probably an hour, too.

We also use it in my startup for tasks we wouldn't even have tried without ML models, because the quality of the old libraries was too bad: things like PDF catalog to text, image classification, and segmentation.

tourmalinetaco · 5 months ago

Claiming no one is using ML models “properly” despite the various scientific and industrial use cases (vision systems, robots, protein folding, drug simulation, etc.), while being “excited” about something as pathetically trivial as a text generator with text-to-speech tacked on for your mass-produced open-world games. Truly peak HN.

jftuga · 5 months ago

How much GPU RAM would be needed to run this with just one GPU?

reissbaker · 5 months ago

144GB VRAM to load the weights at FP16, 72GB quantized to FP8. To figure out the KV cache size you'll need for an LLM, you can use the following formula: https://x.com/AlpinDale/status/1841305040545329535

Simplified for posterity:

    kv_bytes = kv_bits / 8                          # bytes per cached value (2 for FP16)
    head_dim = hidden_size // num_attention_heads   # width of each attention head
    kv_dim = head_dim * num_key_value_heads         # total K (or V) width per layer
    kv_bytes_per_token = 2 * kv_bytes * num_hidden_layers * kv_dim  # 2x: one K, one V
(Edit: I accidentally swapped in some of the vision config values in my original calculation; these are the corrected numbers.) So for NVLM 1.0 72B, that works out to 640KB per token, assuming an FP16 KV cache. If you use the entire 32k context length, that's an extra ~20GB of overhead for the KV cache. Then, depending on how you're running the LLM, there might be extra overhead, e.g. compiled CUDA graphs.
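
As a sanity check, here's the same formula as a runnable helper; the example numbers are a roughly Llama-3-8B-shaped config used purely for illustration, not NVLM's actual values:

    def kv_cache_bytes_per_token(kv_bits, hidden_size, num_attention_heads,
                                 num_key_value_heads, num_hidden_layers):
        """Bytes of KV cache per token for a transformer with GQA."""
        kv_bytes = kv_bits // 8
        head_dim = hidden_size // num_attention_heads
        kv_dim = head_dim * num_key_value_heads
        return 2 * kv_bytes * num_hidden_layers * kv_dim  # 2x for K and V

    # Illustrative config (FP16 cache):
    per_token = kv_cache_bytes_per_token(kv_bits=16, hidden_size=4096,
                                         num_attention_heads=32,
                                         num_key_value_heads=8,
                                         num_hidden_layers=32)
    print(per_token)                  # 131072 bytes = 128KB per token
    print(per_token * 8192 / 2**30)  # ~1.0 GB of KV cache at an 8K context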

You can cut this down further by using grouped-query attention, as described here: https://medium.com/@plienhar/llm-inference-series-4-kv-cachi... That lets you divide the number above by the number of grouped heads, although it trades off some accuracy for the VRAM savings.

But TLDR, a minimum of around 164GB of VRAM at full accuracy. To me that seems fairly low, and I think vLLM would OOM without significantly more than that, but that's about as low as you could go in theory if you're running everything at FP16. Half that, of course, for FP8.

You'll typically need a copy of the KV cache per GPU if you're using multiple GPUs, so multiply the KV cache overhead by the number of GPUs. How many GPUs you need depends on their specs: for example, you'll need 3 H100s (really four, since vLLM wants the number of heads to be evenly divisible by the number of GPUs); if you're using L40Ses, you'll need eight of them; but most likely only a single AMD MI300x.

paulluuk · 5 months ago

I haven't tested it, but likely around 170GB, regardless of whether you're using only one GPU or spreading it out over several.

cjtrowbridge · 5 months ago

I love how they include a helpful chart that shows this model scores worse than everything else.

kibibu · 5 months ago

Am I looking at the wrong table? It dominates everything on visual interpretation benchmarks.

Edit: specifically OCRBench and VQAv2

Der_Einzige · 5 months ago

It's not that bad, and I'd much rather that they be honest instead of lying like everyone else does.

butterfly42069 · 5 months ago

All jokes aside (and that did make me laugh), at least they're not training just to hit the benchmarks, which seem to become more meaningless as a quality indicator with each passing day.

miffy900 · 5 months ago

I see a few models (3 in MMMU) that score lower than Nvidia's. But putting that aside, they at least get points for apparent objectivity. At least they probably aren't fudging the numbers.

GaggiX · 5 months ago

Well, it actually doesn't, unless you're looking only at MMMU.

dr_kiszonka · 5 months ago

Exactly. On some benchmarks it is close to, or better than, GPT-4o.

I wonder if one of the reasons they released it was to respond to OpenAI's plans to enter the chipmaking market.