
Implementation of Imagen, Google's text-to-image neural network, in PyTorch

387 points · 2 years ago · github.com
tbalsam · 2 years ago

Good to see LucidRains get the love he (rightly) deserves. He's a beast!

As a thank you to him -- he also does work for commission/etc., check his GitHub page for more info. I'm not fiscally or otherwise directly linked to him, I've just hung around a while and think he deserves far more credit than he gets. This is literally the smallest piece of the pie of what the man does across several subdisciplines, so send him a thank-you, please, if possible!

emadm · 2 years ago

Second this, one of the biggest contributors to open source AI and someone everyone should sponsor.

TekMol · 2 years ago

What is the reason Google published their research details about Imagen?

Why don't they just keep their findings to themselves and build products on top of them?

Public companies can't do stuff just for the fun of it, right? So there must be some commercial reasoning behind it?

nmfisher · 2 years ago

Two reasons:

1) Even though it's all technically very impressive, so far there's not a huge amount of commercialization potential here. OpenAI is charging for its GPT-3 model, but its revenue is probably negligible next to the hardware costs (sunk + ongoing) of training it in the first place, let alone the researcher salaries they're paying.

2) Most of the stunning examples are cherry-picked. These things fail much more often than the companies are willing to admit, and they're probably assuming (correctly) that not enough people are willing to pay for something that only sorta-kinda works 1/3 of the time, when you're holding it the right way.

adamhowell · 2 years ago

I'm currently working full-time on the AI-powered design suite Accomplice (https://accomplice.ai), and if you ask me on a good day I would tell you I do think there's already huge commercial potential. On a bad day, though ;)

My current approach is a "model marketplace" (https://accomplice.ai/models) where the most popular open source text-to-image models (VQGAN+CLIP, Disco Diffusion, DALL-E Mega coming soon…) sit alongside the most popular open source style transfer models, and then finally I have the ability for a user to finetune their own models using a simple drag-and-drop tool (https://accomplice.ai/no-code-model-training).

Using this approach a user has enough models to try or train that they can have a higher hit rate. For example, Accomplice currently has finetuned models for photorealistic people (https://accomplice.ai/models/f58bfa91-bb18-406f-a0e1-db00fcf...), watercolor backgrounds (https://accomplice.ai/models/91b8a080-faca-4ff4-8b11-64b0789...), etc…

So theoretically, if there were a searchable marketplace of hundreds of different finetuned models people could choose from, they would use it much like iStockPhoto and be able to create the kind of images they want instead of just downloading them.

But it's of course a constant work in progress. Slowly growing though and lots of promising stuff ahead!

nmfisher · 2 years ago

I meant commercialization potential for companies like Google, where anything less than a hundred million is probably a failure :) Hopefully for the non-Googles of the world (i.e. you), there's a good pathway forward!

TekMol · 2 years ago

I wanted to try your site, but after clicking on the link in the email, it tries to send me through some redirectingat.com link which is blocked by my ad blocker.

What does that link do?

1024core · 2 years ago

Your headshots model's outputs look creepy. The eyes are off; e.g., the young African girl. You may want to tune your loss functions to fix that.

dragonwriter · 2 years ago

Most of the blue eyed ones have the Eyes of Ibad from Dune. (And several of the ones that aren't blue eyed also have weird things going on in the sclerae.)

The two “african american” ones look South or maybe Southeast Asian (and the one of those that is a “young...girl” looks like a, maybe young, adult.)

All the ones without a racial/ethnic prompt are white, and disproportionately blue eyed (again, including sclerae.)

(It indicates “diverse”, and yet all of the examples read white or Asian, though the unlabeled darker-skinned male figure in the group of six at the top is ambiguous enough to plausibly be something else.)

The “beautiful woman with curly red hair” has rather radical facial asymmetry, and straight to slightly wavy hair.

gbasin · 2 years ago

Cool project! Will follow along

kingcharles · 2 years ago

*throws money at his screen*

nautilus12 · 2 years ago

Why would people do this when aspiring artists are practically giving away their real photos and paintings for free on places like DeviantArt? Why further commoditize something that's already been commoditized to practically free?

pmoriarty · 2 years ago

I'm an artist, and I'd absolutely love to use something like this to inspire me or to give me something to continue working with on my own.

In one sense it's kind of like a much "smarter" Photoshop filter, where it can make your own art/photos look more like what you want (e.g. Van Gogh, Dali, Picasso, or combinations of those, or something completely weird/new/different).

You could also train the models on your own work and have it generate art in your own style that could inspire you or could be useful to you either as a base to work from or that you could take interesting elements from to create new art.

Similar things can be done in music, by the way, and that would be really useful to musicians too.

Poets could use something like this to create poetry, novelists to write novels, etc.

This is really an improvement on the collaboration potential between humans and computers -- which is probably why it's called "Accomplice".

adamhowell · 2 years ago

Well, I'm focusing on business use cases – stock images and how Accomplice could be useful for marketing and content creation.

E.g., the ability to easily take your logo and stylize it: https://accomplice.ai/@adam/iterations/2bcc90ad-3237-486a-8d...

Create a photorealistic avatar whenever you need it: https://accomplice.ai/@adam/iterations/988b7d54-dc39-43b1-b5...

Easily remove the background of a photo: https://accomplice.ai/models/97746c4b-c6f0-49cb-ae1b-859716b...

Upscale a photo: https://accomplice.ai/models/bd4619ee-8202-4cf0-a04e-291820f...

Etc etc. AI can make all this stuff easier. And you have a sense of ownership over what you create. All in one place where you can collaborate on all of it with your team. I feel like that's valuable. It's certainly a tool I've always wanted.

But, also, as a bit of an aside – if the Googles and OpenAIs of the world are just going to bite every artist's style anyway with a mostly black box service and training set… it feels like the option for an artist to train/finetune their own model, promote it and possibly make money off of that is worth trying.

urthor · 2 years ago

It's also because these companies hire away staff all the time.

Rule 1 of working in this field: recruit a mid-level member of the other company's research group biannually to get the latest gossip.

It's impossible to keep a 1 page or shorter "algorithm" secret, when the creators are geniuses and they hop jobs every year or so.

Fellas with more IQ than games in a baseball season, they just don't forget.

gfodor · 2 years ago

Seems exactly false. DALL-E 2 seems set to end much of the illustration industry, and if the endless array of Twitter posts from early adopters is any indication, it works great.

minimaxir · 2 years ago

Publishing high impact research gives credibility to the ML teams, which helps recruiting and prestige.

It's less cynical, more incentive alignment.

hackernewds · 2 years ago

Also good for society. Less evil, more nice people. Respect Google for these traits.

benoketamad · 2 years ago

Oh, they don't even want to create/release a usable image search... Respect, of course.

dotnet00 · 2 years ago

I think there are three main reasons why:

- being open is kind of just how things in ML generally work right now; it's in stark contrast to fields like chemistry or physics, where paywalls are pretty common

- it's a matter of clout: ML is moving ridiculously quickly, with work from just 5 years ago considered outdated in terms of capability. If you don't publish, someone else will, and they'll get the credit. This likely also matters for the researchers, since they get credit too. In a sense this is just publish-or-perish culture from academia.

- it's also somewhat about hiring, which is related to the clout. By putting out this kind of research, they're attracting talented engineers to consider working for them. This of course is pretty relevant to the rest of their business, especially given how heavily Google leans on AI to handle moderation.

nl · 2 years ago

> Public companies can't do stuff just for the fun of it, right? So there must be some commercial reasoning behind it?

Yes they can actually.

If you are a shareholder you can either sue (unlikely to succeed) or vote against the board. That's pretty much the only recourse.

pmoriarty · 2 years ago

"If you are a shareholder you can either sue (unlikely to succeed) or vote against the board. That's pretty much the only recourse."

Or you could sell or just threaten to sell your shares.

Buying more of the company's shares to take it over is another option.

Good luck doing that with Google.

blueblob · 2 years ago

I think it is because the proprietary part for them is the data, not the particular algorithm. They benefit more from other people making advances on their technology because they have the data to extract more value from those advances than anyone else. If they kept it to themselves, they would get no "free" advancement. So they trade away the secret of the technique in the hope that others will advance it, making their data more valuable.

curiousgal · 2 years ago

What would the product be? My experience is that most ML papers are very brittle; for every cool result/example you see, there's a plethora of nonsense spat out by the models.

pyinstallwoes · 2 years ago

Absolute power corrupts absolutely. Perhaps it’s a game-theoretic approach to leveling the playing field.

vintermann · 2 years ago

This is an important question.

You're saying it's Google who have done this research. In a way that's true. But really it is Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet and Mohammad Norouzi who did it, with material support from Google.

It's likely that some or all of these people would have refused to do the work they do if Google kept it all as their secret sauce.

And moreover, there are excellent reasons why they wouldn't want to. It's not just the obvious point that if it were all secret, they wouldn't be able to use it for career advancement outside Google. It's also that research without the freedom to talk is far more difficult and frustrating.

On paper, scientific papers are supposed to document the whole of the discovery/innovation. So you might think that an insider, who got to read all the secret Google research papers AND all the public ones, would have an advantage. But the problem is, even the best written papers with full code and comments inevitably leave things out, especially of the "why this and not that" type.

If you're a researcher in the free world, you can just ask. Especially if you have a public track record of great papers yourself, they will WANT to talk to you. You can learn so much more from the interactive process of back and forth questions than you can from a static piece of information like a scientific paper.

If you work for a secretive and command-driven organization, you need to be careful about what you reveal of your own research when you ask. You can't talk freely. The thought of having to justify your communication to some old-school IBM lawyer type is going to chill even the most enthusiastic researcher. It's easier to just stay in your own corporate bubble and focus on the things your corporation does well, since at least you can talk freely to your colleagues (although in really paranoid organizations like the NSA or old IBM, even that may not be true). But then at best you specialize; at worst you fall behind.

dekhn · 2 years ago

Google started publishing for several reasons but the primary one was recruitment (showing off was a secondary goal). The mapreduce, GFS, and bigtable papers played an important role in attracting an early generation of distributed computing/high performance computing people from around the valley and CMU/MIT, who helped build the second really successful versions of the web search engine (retrieval and ranking), ads serving (the auction, the logs joining pipeline), etc.

The other reason is that the leaders at Google at the time believed that we would achieve the singularity faster if Jeff Dean periodically sent ideas back 10 years in time to Doug Cutting.

frozenport · 2 years ago

1. This finding is hard to monetize, especially with the ROI that Google typically expects (for example, an app that makes $500 a month isn't worth it)

2. Deploying models in a cost effective way is hard

3. Lessons learned from building this model can indeed be monetized and many of them may be kept secret.

toxik · 2 years ago

It’s simply flag-planting. If they don’t do it, some other player will.

yaroslavvb · 2 years ago

Researchers like to talk about and show off their work outside the company. If you don't let them, they get unhappy and leave.

quickthrower2 · 2 years ago

It could be classed as a way to attract talent. It is basically then a very expensive foosball table.

29athrowaway · 2 years ago

Because the competitive advantage is also in the training datasets and ML infrastructure.

mzs · 2 years ago

You realize this doesn't include the trained weights, right?

DeathArrow · 2 years ago

They did it out of the goodness of their hearts. :)

dang · 2 years ago

Recent and related:

DALL-E 2 open source implementation - https://news.ycombinator.com/item?id=31228710 - May 2022 (152 comments)

Also:

X-Transformers: A fully-featured transformer with experimental features - https://news.ycombinator.com/item?id=27089208 - May 2021 (37 comments)

Text to Image Generation - https://news.ycombinator.com/item?id=26615791 - March 2021 (88 comments)

bufferoverflow · 2 years ago

How much would it cost to train something like this? Is there even a good dataset for it?

Yenrabbit · 2 years ago

There is a dataset of 5 billion image-text pairs (LAION-5B) scraped by various parties. This can then be filtered and used to train these models. Cost is a bit of an issue, but there are orgs that have provided compute for open model training. And Imagen is nice because the text encoder part is already available and doesn't need more training, so it would just be the diffusion model components being trained. I'd guess we'll see a biggish training run starting in a few weeks.
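To illustrate that split, here's a minimal sketch of the frozen text tower, using the Hugging Face T5 encoder as a stand-in ("t5-large" below is just for illustration; the paper used the much larger T5-XXL):

    import torch
    from transformers import T5Tokenizer, T5EncoderModel

    # Imagen conditions on a frozen, pretrained text encoder, so only the
    # diffusion U-Nets need training. "t5-large" is a stand-in checkpoint.
    tokenizer = T5Tokenizer.from_pretrained("t5-large")
    encoder = T5EncoderModel.from_pretrained("t5-large")
    encoder.requires_grad_(False)  # frozen: no gradients flow into the text tower
    encoder.eval()

    tokens = tokenizer(["a corgi riding a skateboard"], return_tensors="pt")
    with torch.no_grad():
        text_embeds = encoder(**tokens).last_hidden_state  # (batch, seq_len, dim)

    # text_embeds are then fed as conditioning to the trainable diffusion U-Nets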

visarga · 2 years ago

I hope so. It's a bit cruel to show off and then lock it away.

dharma1 · 2 years ago

When you say a bit of an issue, how much are we talking?

Yenrabbit · 2 years ago

Four or five figures, I'd guess? I'm not clued up enough on costs/performance for TPU stuff to give a better estimate, but guessing at a week on a 256-TPU pod, call it $30k?

webmaven · 2 years ago

Are there prefiltered derivatives of LAION-5B available? I can imagine various contraindicated categories you might want to avoid entirely, as well as biases you might want to adjust for by balancing classes in the data (5 billion images gives you a lot of room to balance the dataset).

srcreigh · 2 years ago

Who will do the training? Will the training results be made available publicly? Why won't it start until a few weeks from now?

dirtyid · 2 years ago

I feel like the fact that big porn hasn't poached talent and jumped all over this suggests a cost of at least tens of millions. That said, some for-profit, no-rules deepfake service for disinformation and illegal content has to be in the works.

tomatowurst · 2 years ago

There's a company in Montreal that makes that much in a month and also has access to copious amounts of said datasets on their servers. It may or may not be that they already have engineers on it. We have no way of knowing, since it's a private company.

jonathankoren · 2 years ago

Of course the implementation isn’t the issue. It’s the training data and the compute machines.

Open source is pretty meaningless here.

jfoster · 2 years ago

In fact, this kind of reverses things, doesn't it?

Open source is built on the assumption that you can do more with source code than with binaries. In the case of AI models, the computed weights of models are what's valuable, and the source code used to achieve them is less useful.

aero-glide2 · 2 years ago

How much would it cost in training to match DALL-E 2?

abrichr · 2 years ago

https://twitter.com/alexjc/status/1347458546636619778?s=21&t...

> The blog post says 256 GPUs for 2 weeks, so:

> DALL·E would cost $131,604 to train on AWS, assuming a p3.16x-large at market rates. Could be as low as $40k if you already paid for reserved instances.
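Back-of-envelope on those numbers (only the GPU count and duration come from the tweet; the per-instance rate below is just what the quoted total implies):

    # 256 GPUs for 2 weeks; a p3.16xlarge bundles 8 V100s
    instances = 256 // 8          # 32 instances
    hours = 14 * 24               # 336 hours in two weeks
    quoted_total = 131_604        # USD, from the tweet

    implied_rate = quoted_total / (instances * hours)
    print(f"${implied_rate:.2f} per instance-hour")  # ~$12.24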

emadm · 2 years ago

Imagen actually used our open LAION-400M dataset plus 400M of their own.

On compute, there is more than enough compute available to open source now via LAION and EleutherAI to train these models; it will just take a bit of time.

29athrowaway · 2 years ago

I guess the stock photo industry will be disrupted by this.

quickthrower2 · 2 years ago

r/wtfstockphotos will be full of this

And Medium posts about React hooks and Go generics are gonna be full of “Animated Drake meme but with Rick Astley” kind of fun.

karmasimida · 2 years ago

How big is Getty? I think this is a billion-dollar business.

benstrumental · 2 years ago

Definitely. With the right training set, video game assets too. Imagine being able to generate N variations of any asset in any video game style...

seydor · 2 years ago

They can make more profit if they don't need to pay artists

srvmshr · 2 years ago

Jesus! Most of us have GitHub contribution maps that look like Midwest cornfields. LucidRains' looks like downtown Manhattan.

https://skyline.github.com/lucidrains/2021

throwaway12245 · 2 years ago

I did the pip installs and installed CUDA. I changed the prompt and ran the sample code. It ran to completion. How do I save the image from trainer.sample?

visarga · 2 years ago

You'd have to train it first.
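Once you have trained (or loaded) weights, something like this should save the output. A sketch only: it assumes trainer.sample returns a (batch, channels, height, width) tensor of images in [0, 1], so check the repo's README for the exact signature:

    from torchvision.utils import save_image

    # Assumed: trainer.sample returns a batch of image tensors in [0, 1];
    # cond_scale is the classifier-free guidance strength in lucidrains' repos
    images = trainer.sample(texts=["a puppy sitting in a field"], cond_scale=3.0)

    # save_image tiles the batch into a grid and writes it to disk
    save_image(images, "sample.png")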

eminence32 · 2 years ago

I wonder, if this implementation were trained on the same dataset that trained DALL-E 2, would it produce results of comparable quality?

tomatowurst · 2 years ago

I have to clarify I am envisioning DALL-E-2-PORN.

Couldn't you just scrape porn to get a copious amount of data? The scope would be narrower and thus require less classification, and in general it has common themes.

I'm more concerned about how expensive it will be to train it on GPU instances. We are looking at A6000s, right? That's like $15/hr.

ghotli · 2 years ago

"just"

All the complexity wrapped up into one word.

tomatowurst · 2 years ago

You do realize web scraping has gotten very cheap at scale, and easy now too. It's an afterthought for me; I'm more concerned with the economics of training, which can't be cheap.

visarga · 2 years ago

You don't realize how hard it is to make an inclusive, cleaned-up dataset. Take a look at this Notion page from BigScience detailing their workgroups. Three of them are related to preparing the dataset.

https://bigscience.notion.site/10743770aae24ff3bdc1b938cf454...

And this is just for a text-only scrape.

Karawebnetwork · 2 years ago

Sure, you could use images you do not have the legal rights to if you do not release anything at the end / just leak the final result. But this is a lot of effort and resources invested for something that would effectively never be shared on the clear web.

tomatowurst · 2 years ago

What's it going to take to train this on porn? This is something that can be crowdfunded.

Look, DeepNude is a thing and somebody is making money off it: https://app.deepnude.cc/upload

chirau · 2 years ago

Lol. Not the same thing obviously but your question just reminded me of @robotpornaddict, a neural net that watches porn and tries to describe what it sees.

https://twitter.com/robotpornaddict

hackernewds · 2 years ago

They specifically mention in the Imagen documentation that one reason they haven't publicized this yet is the risk of porn or deepfakes. I assume this means it already works very well.

DeathArrow · 2 years ago

And what is it going to take to generate videos, not pictures?

tomatowurst · 2 years ago

To run we must learn to walk first. To walk we must learn to be erect. To be erect we must master crawling.

I don't think the leap is too crazy if we are talking short moving pictures without sound. However, when sound gets involved, that's where it would become very tricky.

pmoriarty · 2 years ago

"when sound gets involved, this is where it would become very tricky"

I don't think a neural net would have much trouble generating moans in sync to the motion.

Why pay humans for all that fake moaning when an AI could do it?

tomatowurst · 2 years ago

Actually you are right, sound generation here wouldn't be as hard as I originally thought.

a2800276 · 2 years ago

Maybe if you study hard enough, one day you'll have what it takes to make a real girl.

https://youtu.be/9qd04u2Yj44

londons_explore · 2 years ago

This tech is actually pretty bad at human faces - it all appears unnatural and distorted. You get things like the occasional extra earlobe...

Al-Khwarizmi · 2 years ago

Human faces were excluded on purpose from the DALL-E 2 training set in order to prevent misuse. I suppose the same will be the case here (or at least, in public-facing versions of the models).

Given how it renders dog faces, I don't see why it wouldn't be good at human faces too, if trained for it.

jstummbillig · 2 years ago

While the "it will never do as well as humans at this one thing I feel strongly about" bias is still a common one, even on hn, at this point I am fairly certain that we will soon all have to live with the fact that we are no longer all that special in regards to, well, absolutely everything.

One of my more interesting realisations over this development is that being human is apparently a religion to a lot of otherwise secular humans.

pmoriarty · 2 years ago

"we are no longer all that special in regards to, well, absolutely everything"

If/when general AI comes about, maybe so.

Until then all we've got are a bunch of highly specialized tools/helpers/slaves that may be good (in some sense) at one thing and awful at pretty much everything else.

You could argue that humans themselves are an ensemble of such highly specialized parts that are more than the sum of their parts in some ineffable way that's more of a "we know it when we see it" than something that's formalizable. Machines/computers lack that... so far.

29athrowaway · 2 years ago

How can this be the most upvoted comment?

What you describe is already illegal in many jurisdictions.

lajamerr · 2 years ago

Which part is illegal?

Are you talking about porn? Or using potential copyright material in an infringing manner? Or perhaps the possibility of "deepfake" porn?

29athrowaway · 2 years ago

In some jurisdictions, deepfakes of any kind are treated as a form of defamation. In others, adult deepfakes specifically are illegal.
