Good to see LucidRains get the love he (rightly) deserves. He's a beast!
As a thank-you to him: he also does work for commission etc.; check his GitHub page for more info. I'm not financially or otherwise directly linked to him, I've just hung around a while and think he deserves far more credit than he gets. This is literally the smallest piece of the pie of what the man does across several subdisciplines, so send him a thank-you please, if possible!
What is the reason Google published their research details about Imagen?
Why don't they just keep their findings to themselves and build products on top of them?
Public companies can't do stuff just for the fun of it, right? So there must be some commercial reasoning behind it?
Two reasons: 1) Even though it's all technically very impressive, so far there isn't a huge amount of commercialization potential here. OpenAI is charging for its GPT-3 model, but its revenue is probably negligible next to the hardware costs (sunk + ongoing) of training it in the first place, let alone the researcher salaries they're paying. 2) Most of the stunning examples are cherry-picked. These things fail much more often than their creators are willing to admit, and they're probably assuming (correctly) that not enough people are willing to pay for something that only sorta-kinda works a third of the time, when you're holding it the right way.
I'm currently working fulltime on AI-powered design suite Accomplice (https://accomplice.ai) and if you ask me on a good day I would tell you I do think there's already huge commercial potential. On a bad day, though ;)
My current approach is a "model marketplace" (https://accomplice.ai/models) where the most popular open source text-to-image models (VQGAN+CLIP, Disco Diffusion, DALL-E Mega coming soon…) sit alongside the most popular open source style transfer models, and finally users have the ability to finetune their own models using a simple drag-and-drop tool (https://accomplice.ai/no-code-model-training).
Using this approach a user has enough models to try or train that they can have a higher hit rate. For example, Accomplice currently has finetuned models for photo realistic people (https://accomplice.ai/models/f58bfa91-bb18-406f-a0e1-db00fcf...), watercolor backgrounds (https://accomplice.ai/models/91b8a080-faca-4ff4-8b11-64b0789...), etc…
So theoretically if there were a searchable marketplace of 100s of different finetuned models people could choose from, they would use it much like an iStockPhoto and be able to create the kind of images they want instead of just downloading them.
But it's of course a constant work in progress. Slowly growing though and lots of promising stuff ahead!
I meant commercialization potential for companies like Google, where anything less than a hundred million is probably a failure :) Hopefully for the non-Googles of the world (i.e. you), there's a good pathway forward!
I wanted to try your site, but after clicking on the link in the email, it tries to send me through some redirectingat.com link which is blocked by my ad blocker.
What does that link do?
That will get turned back on automatically in an "update".
Same thing happened to me multiple times across multiple platforms: SendGrid, Mandrill, MailJet, MailGun. I always turn off the tracking (enabled by default on all of them), but magically it's back on a few weeks/months later. I've given up finding a solution and just revisit my settings every few months to check on it.
Your headshots model's outputs look creepy. The eyes are off; e.g. the young African girl. You may want to tune your loss functions to fix that.
Most of the blue eyed ones have the Eyes of Ibad from Dune. (And several of the ones that aren't blue eyed also have weird things going on in the sclerae.)
The two “african american” ones look South or maybe Southeast Asian (and the one of those that is a “young...girl” looks like an adult, maybe a young one).
All the ones without a racial/ethnic prompt are white, and disproportionately blue eyed (again, including sclerae.)
(The page says “diverse”, and yet all of the examples read as white or Asian, though the unlabeled darker-skinned male figure in the group of six at the top is ambiguous enough to plausibly be something else.)
The “beautiful woman with curly red hair” has rather radical facial asymmetry, and straight to slightly wavy hair.
Cool project! Will follow along
*throws money at his screen*
Why would people do this when aspiring artists are practically giving away their real photos and paintings for free on places like DeviantArt? Why further commoditize something that's already been commoditized to practically free?
I'm an artist, and I'd absolutely love to use something like this to inspire me or to give me something to continue working with on my own.
In one sense it's kind of like a much "smarter" photoshop filter, where it can make your own art/photos look more like what you want (ex: Van Gogh, Dali, Picasso, or combinations of those, or something completely weird/new/different).
You could also train the models on your own work and have it generate art in your own style that could inspire you or could be useful to you either as a base to work from or that you could take interesting elements from to create new art.
Similar things can be done in music, by the way, and that would be really useful to musicians too.
Poets could use something like this to create poetry, novelists to write novels, etc.
This is really an improvement on the collaboration potential between humans and computers -- which is probably why it's called "Accomplice".
Well, I'm focusing on business use cases – stock images and how Accomplice could be useful for marketing and content creation.
i.e. The ability to easily take your logo and stylize it: https://accomplice.ai/@adam/iterations/2bcc90ad-3237-486a-8d...
Create a photorealistic avatar whenever you need it: https://accomplice.ai/@adam/iterations/988b7d54-dc39-43b1-b5...
Easily remove the background of a photo: https://accomplice.ai/models/97746c4b-c6f0-49cb-ae1b-859716b...
Upscale a photo: https://accomplice.ai/models/bd4619ee-8202-4cf0-a04e-291820f...
Etc etc. AI can make all this stuff easier. And you have a sense of ownership over what you create. All in one place where you can collaborate on all of it with your team. I feel like that's valuable. It's certainly a tool I've always wanted.
But, also, as a bit of an aside – if the Googles and OpenAIs of the world are just going to bite every artist's style anyway with a mostly black box service and training set… it feels like the option for an artist to train/finetune their own model, promote it and possibly make money off of that is worth trying.
It's also because these companies hire away staff all the time.
Rule 1 of working in this field, recruit a mid level member of the other company's research group biannually to get the latest gossip.
It's impossible to keep a 1 page or shorter "algorithm" secret, when the creators are geniuses and they hop jobs every year or so.
Fellas with more IQ than games in a baseball season, they just don't forget.
Seems exactly false. DALL-E 2 looks poised to end much of the illustration industry, and if the endless array of Twitter posts from early adopters is any indication, it works great.
Publishing high impact research gives credibility to the ML teams, which helps recruiting and prestige.
It's less cynical, more incentive alignment.
Also good for society. Less evil, more nice people. Respect Google for these traits.
Oh, they don't even want to create/release a usable image search... Respect, of course.
They could have won that fight, but they don't care enough. I use Yandex for images now, it's very good.
Care to link?
I think there are three main reasons why:
- being open is kind of just how things in ML generally work right now; it's in stark contrast to fields like chemistry or physics, where paywalls are pretty common
- it's a matter of clout: ML is moving ridiculously quickly, with work from just 5 years ago considered outdated in terms of capability. If you don't publish, someone else will, and they'll get the credit. This likely also matters for the researchers, since they get credit too. In a sense this is just publish-or-perish culture from academia.
- it's also somewhat about hiring, which is related to the clout. By putting out this kind of research, they're attracting talented engineers to consider working for them. This of course is pretty relevant to the rest of their business, especially given how heavily Google leans on AI to handle moderation.
> Public companies can't do stuff just for the fun of it, right? So there must be some commercial reasoning behind it?
Yes they can actually.
If you are a shareholder you can either sue (unlikely to succeed) or vote against the board. That's pretty much the only recourse.
"If you are a shareholder you can either sue (unlikely to succeed) or vote against the board. That's pretty much the only recourse."
Or you could sell or just threaten to sell your shares.
Buying more of the company's shares to take it over is another option.
Good luck doing that with Google.
I think it is because the proprietary part for them is the data, not the particular algorithm. They benefit more from other people making advances on their technology because they have the data to get more benefit than anyone else. If they kept it to themselves, they would get no "free" advancement. So they trade-off the secret of the technique in the hope that others will advance the technique, making their data more valuable.
What would the product be? My experience is that most ML papers are very brittle, for every cool result/example you see there's a plethora of nonsense spit by the models.
Absolute power corrupts absolutely. Perhaps it’s a game-theoretic approach to leveling the playing field.
This is an important question.
You're saying it's Google who have done this research. In a way that's true. But really it is Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet and Mohammad Norouzi who did it, with material support from Google.
It's likely that some or all of these people would have refused to do the work they do if Google kept it all as their secret sauce.
And moreover, there are excellent reasons why they wouldn't want to. It's not just the obvious that if it all were secret, they wouldn't be able to use it in their non-Google career advancement. It's also that research without the freedom to talk is far more difficult and frustrating.
On paper, scientific papers are supposed to document the whole of the discovery/innovation. So you might think that an insider, who got to read all the secret Google research papers AND all the public ones, would have an advantage. But the problem is that even the best-written papers, with full code and comments, inevitably leave things out, especially of the "why this and not that" type.
If you're a researcher in the free world, you can just ask. Especially if you have a public track record of great papers yourself, they will WANT to talk to you. You can learn so much more from the interactive process of back and forth questions than you can from a static piece of information like a scientific paper.
If you work for a secretive and command-driven organization, you need to be careful about what you reveal of your own research when you ask. You can't talk freely. The thought of having to justify your communication to some old-school IBM lawyer type is going to chill even the most enthusiastic researcher. It's easier to just stay in your own corporate bubble and focus on the things your corporation does well, since at least you can talk freely to your colleagues (although in really paranoid organizations like the NSA or old IBM, even that may not be true). But then at best you specialize, at worst you fall behind.
Google started publishing for several reasons but the primary one was recruitment (showing off was a secondary goal). The mapreduce, GFS, and bigtable papers played an important role in attracting an early generation of distributed computing/high performance computing people from around the valley and CMU/MIT, who helped build the second really successful versions of the web search engine (retrieval and ranking), ads serving (the auction, the logs joining pipeline), etc.
The other reason is that the leaders at Google at the time believed that we would achieve the singularity faster if Jeff Dean periodically sent ideas back 10 years in time to Doug Cutting.
1. This finding is hard to monetize, especially at the ROI that Google typically expects (for example, an app that makes $500 a month isn't worth it)
2. Deploying models in a cost effective way is hard
3. Lessons learned from building this model can indeed be monetized and many of them may be kept secret.
It’s simply flag planting. If they don’t do it, some other player will.
Researchers like to talk about and show off their work outside the company. If you don't let them, they get unhappy and leave.
It could be classed as a way to attract talent. It's basically a very expensive foosball table, then.
Because the competitive advantage is also in the training datasets and ML infrastructure.
You realize this doesn't include the trained data, right?
They did it out of goodness of their hearts. :)
Recent and related:
DALL-E 2 open source implementation - https://news.ycombinator.com/item?id=31228710 - May 2022 (152 comments)
Also:
X-Transformers: A fully-featured transformer with experimental features - https://news.ycombinator.com/item?id=27089208 - May 2021 (37 comments)
Text to Image Generation - https://news.ycombinator.com/item?id=26615791 - March 2021 (88 comments)
How much would it cost to train something like this? Is there even a good dataset for it?
There is a dataset of 5 billion image-text pairs (laion-5b) scraped by various parties. This can then be filtered and used to train these models. Cost is a bit of an issue but there are orgs that have provided compute for open model training. And Imagen is nice because the text encoder part is already available and doesn't need more training, so it would just be the diffusion model components being trained. I'd guess we'll see a biggish training run starting in a few weeks.
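The "filtered and used to train" step can be illustrated with a toy sketch. LAION ships per-sample metadata alongside the URLs (captions plus scores such as CLIP image-text similarity and NSFW predictions), and training subsets are typically built by thresholding those scores before any images are downloaded. The field names and threshold values below are made up for illustration:

```python
# Hypothetical LAION-style metadata rows; real releases store these in
# parquet files, but the filtering logic is the same.
samples = [
    {"url": "a.jpg", "caption": "a red bicycle",  "clip_sim": 0.34, "nsfw": 0.01},
    {"url": "b.jpg", "caption": "asdf1234",       "clip_sim": 0.12, "nsfw": 0.02},
    {"url": "c.jpg", "caption": "a dog on grass", "clip_sim": 0.41, "nsfw": 0.88},
]

def keep(s, min_sim=0.28, max_nsfw=0.5):
    # Keep only pairs whose caption plausibly matches the image
    # (CLIP similarity above a floor) and that aren't flagged as NSFW.
    return s["clip_sim"] >= min_sim and s["nsfw"] <= max_nsfw

subset = [s for s in samples if keep(s)]
print([s["url"] for s in subset])  # → ['a.jpg']
```

The filtered subset is what actually feeds the diffusion-model training run; the frozen text encoder just embeds the surviving captions.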
I hope so. It's a bit cruel to show off and then lock it away.
When you say a bit of an issue, how much are we talking?
Four or five figures I'd guess? I'm not clued up on costs/performance for TPU stuff to give a better estimate, but guessing at a week on a 256 TPU pod, call it $30k?
While that is what they did, they also used a batch size of 2048 while training. This is just to speed training up, not a hard requirement. It's easy for Google to justify more money on compute to save engineer iteration loops.
I'll have to read the paper for more details, but it would almost certainly cost less (and take longer) to train a model like this in a more resource-constrained situation than Google faces.
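The batch-size point is worth unpacking: with gradient accumulation you can emulate a large batch on far less hardware, at the cost of wall-clock time. A minimal PyTorch sketch (toy linear model, made-up shapes) showing that summing scaled micro-batch gradients reproduces the full-batch gradient:

```python
import torch

# Toy model; the point is only to show that splitting a batch of 8 into
# 4 micro-batches of 2 and summing the (scaled) gradients reproduces
# the full-batch gradient exactly.
torch.manual_seed(0)
model = torch.nn.Linear(16, 1)
x, y = torch.randn(8, 16), torch.randn(8, 1)
loss_fn = torch.nn.MSELoss()

# Full-batch gradient
model.zero_grad()
loss_fn(model(x), y).backward()
full_grad = model.weight.grad.clone()

# Gradient accumulation over micro-batches
model.zero_grad()
accum_steps = 4
for xb, yb in zip(x.chunk(accum_steps), y.chunk(accum_steps)):
    # Scale each micro-batch loss so the summed gradients match the
    # full-batch mean loss (valid because the chunks are equal-sized).
    (loss_fn(model(xb), yb) / accum_steps).backward()

assert torch.allclose(full_grad, model.weight.grad, atol=1e-6)
```

In practice you'd call `optimizer.step()` only after the inner loop finishes, which is exactly how a 2048 batch can be simulated on a handful of GPUs.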
Couldn't a bunch of us shell out $5000~$50,000 and do this ourselves? Create a non-profit shell corporation outside US jurisdiction, issue shares, raise funds and open source the result?
The shares would simply be votes towards future training dataset endeavors as no profit would be booked here. Say you buy 5000 out of 500,000 shares, that would give you 1% voting power in what dataset to train.
Are there prefiltered derivatives of Laion-5B available? I can imagine various contraindicated categories you might want to avoid entirely, as well as biases you might want to adjust for by balancing classes in the data (5 billion images gives you a lot of room to balance the dataset).
Who will do the training? Will the training results be made available publicly? Why won't it start until a few weeks from now?
I feel like the fact that big porn hasn't poached talent and jumped all over this suggests it costs at least tens of millions. That said, some for-profit, no-rules deepfake service for disinformation and illegal content has to be in the works.
There's a company in Montreal that could make that in a month and also has access to copious amounts of said datasets on their servers. It may or may not be that they already have engineers on it. We have no way of knowing, since it's a private company.
Of course the implementation isn’t the issue. It’s the training data and the compute machines.
Open source is pretty meaningless here
In fact, this kind of reverses things, doesn't it?
Open source is built on the assumption that you can do more with source code than with binaries. In the case of AI models, the computed weights of models are what's valuable, and the source code used to achieve them is less useful.
How much would it cost in training to match dall-e 2?
https://twitter.com/alexjc/status/1347458546636619778?s=21&t...
> The blog post says 256 GPUs for 2 weeks, so:
> DALL·E would cost $131,604 to train on AWS, assuming a p3.16x-large at market rates. Could be as low as $40k if you already paid for reserved instances.
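For what it's worth, the quoted figure can be reproduced with back-of-the-envelope arithmetic. The per-instance hourly rate below is an assumption (roughly half the ~$24.48/hr on-demand p3.16xlarge price, i.e. a spot/discounted rate) chosen to match the tweet's number:

```python
# 256 V100 GPUs at 8 GPUs per p3.16xlarge instance, running for 2 weeks.
gpus = 256
gpus_per_instance = 8
instances = gpus // gpus_per_instance     # 32 instances
hours = 14 * 24                           # 2 weeks = 336 hours
rate_per_instance_hour = 12.24            # assumed discounted USD rate

total = instances * hours * rate_per_instance_hour
print(round(total))  # → 131604
```

At full on-demand rates the same run would be roughly double that, which is consistent with the tweet's "could be as low as $40k with reserved instances" caveat pointing the other direction.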
Imagen actually used our open LAION-400M dataset plus 400M of their own.
On compute, there is more than enough available to open source now via LAION and EleutherAI to train these models; it will just take a bit of time.
I guess the stock photo industry will be disrupted by this.
r/wtfstockphotos will be full of this
And medium posts about React hooks and Go generics are gonna be full of “Animated Drake meme but with Rick Astley” kind of fun.
How big is Getty? I think this is a billion-dollar business.
Definitely. With the right training set, video game assets too. Imagine being able to generate N variations of any asset in any video game style...
They can make more profit if they don't need to pay artists
Jesus! Most of us have GitHub contribution maps that look like Midwest cornfields. LucidRains' looks like downtown Manhattan.
I did the pip installs and installed CUDA. I changed the prompt and ran the sample code. It ran to completion. How do I save the image from trainer.sample?
You'd have to train it first.
I wonder whether, if this implementation were trained on the same dataset that trained DALL-E 2, it would produce results of comparable quality.
I have to clarify: I am envisioning DALL-E-2-PORN.
Couldn't you just scrape porn to get a copious amount of data? The scope would be narrower and thus require less classification, and in general it has common themes.
I'm more concerned about how expensive it will be to train on GPU instances. We are looking at A6000s, right? That's like $15/hr.
"just"
All the complexity wrapped up into one word.
You do realize web scraping has gotten very cheap and easy at scale now. It's an afterthought for me; I'm more concerned with the economics of training, which can't be cheap.
You don't realize how hard it is to make an inclusive, cleaned up dataset. Take a look at this Notion from BigScience detailing their workgroups. Three of them are related to preparing the dataset.
https://bigscience.notion.site/10743770aae24ff3bdc1b938cf454...
And this is just for a text-only scrape.
Which is why porn is such a great dataset for crowdsource:
- lots of people are stimulated by it
- lots of people want DALL-E-2 for porn
- and lots of people are willing to work towards that common goal
The beauty of this is that people are just going to keep coming and coming to it.
Like I'm trying to be mature and serious about this. What's it going to take?
- Community responsible for scraping dataset, generating image dataset from moving pictures, upscaling said dataset.
- Crowdsourced labeling, cleaning, normalizing dataset
- Crowdfunding to train, host, and publish.
All of the above are not easy by any means but its much more achievable than trying to build a generic DALL-E that can create anything.
Really my point is that the scope of the dataset has a narrower range in terms of desired output, whereas DALL-E 2 casts a far wider net.
Specialization is key here.
There's a section of the internet where you can easily find billions of images, or generate them from moving pictures. Even upscaled. I really didn't expect to have to spell it out.
hint: they are all about one thing, and there are a lot of eager volunteers on those websites to help. It would be easy to "normalize/clean/classify" because the pictures would have a consistent theme, thus reducing the number of parameters.
We are not trying to generate elephants getting railed on a SpaceX rocket flying in oil painting style (although I'm sure there are people into that and it's not my place to judge); we are just trying to remove the human cost out of this necessary evil.
I can't believe nobody is investing in "DALL-E-2-4-PORN". This sounds like an X amount of money thrown at a hugely sticky product that can be iterated (with the current trend in hardware) to the point where it literally generates billions of dollars in revenues for ages to come (no pun intended).
Already does. It works by magic, we put images on one side and text on the other side and then power it up. Let it simmer for a few months at megawatt power levels.
Reddit has plenty of nsfw categories.
And tags.
Insta has tags.
Plenty of others have nsfw and tags
If you have to ask, the answer would be irrelevant.
Sure, you could use images you do not have the legal rights to if you do not release anything at the end / just leak the final result. But this is a lot of effort and resources invested for something that would effectively never be shared on the clear web.
What's it going to take to train this on porn? This is something that can be crowdfunded.
Look, deepnude is a thing and somebody is making money off it: https://app.deepnude.cc/upload
Lol. Not the same thing obviously but your question just reminded me of @robotpornaddict, a neural net that watches porn and tries to describe what it sees.
They specifically mention in the Imagen documentation that one reason they haven't released this publicly yet is the risk of porn or deepfakes. Which suggests this already works very well.
And what is it going to take to generate videos, not pictures?
To run we must learn to walk first. To walk we must learn to be erect. To be erect we must master crawling.
I don't think the leap is too crazy if we are talking short moving pictures without sound. However, when sound gets involved, this is where it would become very tricky.
"when sound gets involved, this is where it would become very tricky"
I don't think a neural net would have much trouble generating moans in sync to the motion.
Why pay humans for all that fake moaning when an AI could do it?
Actually you are right; sound generation here wouldn't be as hard as I originally thought.
Maybe if you study hard enough, one day you'll have what it takes to make a real girl.
This tech is actually pretty bad at human faces - it all appears unnatural and distorted. You get things like the occasional extra earlobe...
Human faces were excluded on purpose from the DALL-E 2 training set in order to prevent misuse. I suppose the same will be the case here (or at least, in public-facing versions of the models).
Given how it renders dog faces, I don't see why it wouldn't be good at human faces too, if trained for it.
While the "it will never do as well as humans at this one thing I feel strongly about" bias is still a common one, even on hn, at this point I am fairly certain that we will soon all have to live with the fact that we are no longer all that special in regards to, well, absolutely everything.
One of my more interesting realisations over this development is that being human is apparently a religion to a lot of otherwise secular humans.
"we are no longer all that special in regards to, well, absolutely everything"
If/when general AI comes about, maybe so.
Until then all we've got are a bunch of highly specialized tools/helpers/slaves that may be good (in some sense) at one thing and awful at pretty much everything else.
You could argue that humans themselves are an ensemble of such highly specialized parts that are more than the sum of their parts in some ineffable way that's more of a "we know it when we see it" than something that's formalizable. Machines/computers lack that... so far.
How can this be the most upvoted comment?
What you describe is already illegal in many jurisdictions.
Which part is illegal?
Are you talking about porn? Or using potential copyright material in an infringing manner? Or perhaps the possibility of "deepfake" porn?
In some jurisdictions, deepfakes of any kind are treated as a form of defamation. In others, adult deepfakes specifically are illegal.
Korea. Hardcore pornography is also illegal there.
Keep in mind this is a country where, if you leave a bad review after getting scammed, even with evidence, it can count as defamation. So not quite the leadership the world needs in this industry.
Really sad to see people on HN flagging all of my comments in this thread. I mean, it's not like you can't find celebrity deepfakes, including Kpop.
The cat is out of the bag, and it's only going to get better and faster from here, whether some cultures/jurisdictions take offense or not.
Second this, one of the biggest contributors to open source AI and someone everyone should sponsor.