Is training with user-generated content a way to launder copyrighted images? That is, if I upload an image of Ironman or whatever to my Facebook or Instagram page as a public post and Meta trains their model on that data, is there wording in my user agreement saying that I declare I own the content? That would give Meta plausible deniability when it comes to training on copyrighted material.
(apologies for the run-on sentence - it is early still)
Before anyone tries it out from the EU, be warned that it will push you to make a Meta account and merge your Facebook/Instagram profiles together, and once you've finally bitten that bullet, it will tell you that it isn't available in your region.
Strangely, it appears to decide this based on IP geolocation. My account is listed as US-based, but the site does not work when using a non-US VPN.
I wouldn't be surprised if they have several layers of location checks, and if any of them fail they bail. Typically with geolocation on projects I've done, we rely on the best available info: location permission, IP geolocation, or the user telling us where they are via a form.
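A minimal sketch of those two approaches, the fallback chain versus the strict all-must-agree check (function and parameter names are hypothetical, not from any Meta API):

```python
def resolve_location(permission_loc, ip_loc, form_loc):
    """Best-available fallback: prefer the explicit location permission,
    then IP geolocation, then whatever the user typed into a form."""
    for loc in (permission_loc, ip_loc, form_loc):
        if loc is not None:
            return loc
    return None

def all_checks_pass(signals, required="US"):
    """Stricter variant (what Meta may be doing): every signal that is
    available must agree with the required region, or we bail."""
    known = [s for s in signals if s is not None]
    return bool(known) and all(s == required for s in known)

print(resolve_location(None, "US", "DE"))        # "US"
print(all_checks_pass(["US", None, "US"]))       # True
print(all_checks_pass(["US", "CH"]))             # False -- one layer failed
```

Under the strict variant, a US-listed account behind a non-US VPN fails the IP check even though the account check passes, which would explain the behavior described above.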
It took me about 10 minutes to do what should have been a single click. They even wanted me to generate a new password, even though I log in with Facebook. I'm in the US though, and had some fun. It isn't as good as ChatGPT, but very impressive.
Same in Canada
Link to tool: https://imagine.meta.com/
One of many AI updates from Meta yesterday: https://about.fb.com/news/2023/12/meta-ai-updates/#:~:text=E...
I clicked the link to try generating an image.
Several beautiful modal dialogs later, my Meta account has been linked to my Facebook account, my Oculus profile is now my Horizon profile, and I have chosen a publicly viewable(!?) display name for my Horizon profile (a profile for a game I have never played and never intend to play). I have been informed that my Oculus friends are now Horizon followers, given the chance to select "how social [I] want to be," asked to invite my Facebook friends to join Horizon -- and I still haven't generated an image. I almost feel like this image generator is somehow a long con to get people to update their Meta accounts.
I want to find the group of product managers responsible for this user journey and just... shake them out of it! The design you shipped is really dumb! None of this makes sense outside of Meta! There's a whole world out here! Nobody cares about Horizon Worlds!
This is also called "shipping the org chart".
People say that a lot around me. What does it mean?
“Shipping the org chart” refers to a phenomenon in product development where the structure of an organization is reflected in its products. This concept suggests that the design and functionality of a product can inadvertently mirror the internal structure of the company that created it.
Bet Horizon Worlds will get great engagement metrics for the next earnings release.
It's almost beginning to resemble the UX of an enterprise application!
Note that you need to "Log On" to Facebook/Meta/WhateverTheyCallThemselvesNow to try it. Kind of curious, but not curious enough to create yet another burner Facebook account.
[edit: still learning to spell]
Thanks, I should've read your post before opening the link and promptly having to close it.
How are you guys even making burner facebook accounts? Every time I try (though granted, I haven't tried since 2016), I get stymied by a hard phone number requirement.
You can get a phone number in almost every country relatively cheaply. I used one such service (I think it was Numero or something) to order from CVS, because I live in Europe and they need an American phone number (and an American bank card, but that's another story).
I tried it now.
My experience:
Took 4 minutes to log in and do one generation. (Login to FB, then it took me through a process to merge accounts with Meta, which didn't sound good, so I restarted with 'sign in via email' which ended up doing the same thing anyway, I think. Then I was logged in, did the generation.)
My at-a-glance take:
For image quality:
1. Midjourney
2. DALL-E 3
3. SDXL and this
For overall ease of use and convenience:
1. DALL-E 3
2. Midjourney
Of course, this is all biased personal opinion, and YMMV.
Depends what you want, really. Midjourney and DALL-E 3 have specific looks to them which kind of look cheap/tacky now that it's everywhere.
SDXL is reconfigurable and completely flexible, so really it's the only tool in the game for pure creativity.
For Hacker News users, definitely ComfyUI. Good place to start playing around with checkpoints, loras, controlnets and ipadapters.
What is the best tool wrapping SDXL?
There is no best, it depends on your usecase. Auto1111 is popular, ComfyUI extremely flexible but complex, and there is a myriad of other wrappers, some with a focus on simplicity, some not so much.
Depends what you mean by "best", but Fooocus is very accessible for getting started with Stable Diffusion.
I find Automatic1111 better for point and click simplicity. ComfyUI has been good for custom flows.
Also Automatic1111 is more centralized, so you have to wait for something to make its way in (or a pull request for it anyhow), whereas people put up their ComfyUI custom JSON workflows. So I am doing Stable Diffusion video via ComfyUI right now, whereas it has not made its way into Automatic1111.
On Windows, StabilityMatrix (https://github.com/LykosAI/StabilityMatrix) is a very easy way to get any (or all) of those wrappers installed without conflicts.
Like it or not, 4chan's technology board is one of the forefronts of local image gen/voice/LLM model discussions, which is why the most prominent SD interface is written by a 4chan poster.
Considering the author of ComfyUI is called ComfyAnonymous and has an anime avatar, it's very likely they're a 4chan poster too, so maybe don't use that either if it actually bothers you.
Both are very good tools for using this stuff on the cutting edge so I couldn't really care less what the authors do in their free time.
Fooocus is fantastic for working out of the box.
"Not available in your location yet" (Switzerland)
> "Not available in your location yet" (Switzerland)
Have the GDPR questions around data provenance been resolved? I thought EU/EEA is currently off limits for publicly- or user-data-trained AI.
ChatGPT (free and paid) are available in the EU, so I don't think there is a blanket ban.
Different companies might have very different interpretations of the legality of what they're doing, of course. I don't think there's any precedent, and no explicit regulations; there's an "AI Act" currently being discussed in parliament, though.
Not available in my region (New Zealand), darn.
1.1B is tiny.
Given that FB & IG combined have ~0.5B photos uploaded daily, this effectively translates to training data from just a few days of user generated content.
https://www.brandwatch.com/blog/facebook-statistics/#:~:text....
https://www.zippia.com/advice/instagram-statistics/#:~:text=....
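A quick back-of-the-envelope check on the "just a few days" claim, using the rough figures cited above:

```python
training_photos = 1.1e9  # photos reportedly used for training
daily_uploads = 0.5e9    # rough combined FB + IG daily photo uploads

days_of_content = training_photos / daily_uploads
print(days_of_content)   # 2.2 -- about two days of uploads
```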
They are using only the publicly available photos. Not the ones you share only with friends.
But is it tiny with respect to the volume of data required to create a good model and the compute costs associated with the training and operation of that model?
And once they notice no one gives a sh* they’ll throw 100b at it, then more…..
If you ask it to generate an image of Taylor Swift, it refuses. But if you ask it to generate an image of a popular celebrity singer performing the song "Blank Space", it generates an image that looks exactly like Taylor Swift some fraction of the time.
I wonder if celebrity doppelgangers can't find modeling work. Like, without EVER referencing your celebrity twin, how closely can your work implicitly approach Swifthood before your free expression gets violated? To dramatize for effect:
Can you act in films? Or model a company's products, like a guitar/microphone? Or genuinely start a band? Can your credits/band name reference you, if your given name is coincidentally also "Taylor Swift"? Can Facebook AIs train on your Facebook images and produce "celebrity female singer" images (with/without a "Blank Space" reference)? What if your LLM's purpose is strictly "parody, caricature, and images whose likeness is purely coincidental"? Can generative AIs have intention? Let alone intention to break copyright?
The consequences are endless in both kind/degree when pretending that "likeness" is some unique fingerprint. Ditto for thought-policing what (artificial or human) neural networks can learn from without paying royalties or whatever. It's all absurd.
What's more, our society must face these issues. We can't dismiss them as all hyperbolic catastrophizing about slippery slopes. Our system is already subjective, inadequate, and incapable of sorting itself out. The situation becomes more dire each day. Given our trend of sacrificing public interest for private greed (e.g. Disney's hatchet job on copyright), I'm worried about our future.
Because it’s trained on “real” people, will it be easier to generate ugly people? I have a hard time convincing DALL-E to give me ugly DnD character portraits.
In order for a model to understand what ugly is, someone or something has to tag training data as "ugly". I find this to be a complete can of worms.
>In order for a model to understand what ugly is, someone or something has to tag training data as “ugly”,
That is a very dated (2008) concept.
The model "understands" that 50% of people are below/above the median.
Consequently, those that are not "OMG girl ur BEAUTIFUL"-tagged are horse-faced.
It understands that the girl with the profile picture with 200 likes and 2k friends is better looking than the girl with 4 likes and 500 friends.
> It understands that the girl with the profile picture with 200 likes and 2k friends is better looking than the girl with 4 likes and 500 friends.
I'm not very familiar with model training. How does it understand this? Is such information part of the training data?
I fine-tuned some checkpoints this year (2023), and that's exactly how it worked.
Unless your model is single-focus for humans and faces, I find it hard to believe there is specific business logic in the training process around inferring beauty from social engagement. Meta's model is general purpose.
Put beautiful/pretty in the negative prompt; you should get a similar result without the need for tagging ugly in the training set.
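Negative prompts work through classifier-free guidance: at each denoising step, the sampler extrapolates away from the negative prompt's noise prediction toward the positive one. A toy sketch of just that arithmetic, with made-up two-component "predictions":

```python
def guided_prediction(cond, neg, scale=7.5):
    """Classifier-free guidance step: start from the negative-prompt
    prediction and push `scale` times toward the positive-prompt
    prediction -- i.e. away from whatever the negative prompt describes."""
    return [n + scale * (c - n) for c, n in zip(cond, neg)]

# cond: toy noise prediction for "portrait of an orc"
# neg:  toy noise prediction for "beautiful, pretty"
print(guided_prediction([1.0, 0.0], [0.0, 1.0], scale=2.0))  # [2.0, -1.0]
```

In real pipelines the negative prompt typically replaces the empty/unconditional prompt in this same formula, which is why "beautiful, pretty" as a negative steers the sampler toward less flattering faces without any "ugly" tags in the training data.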
Aren't Insta images heavily edited?
Yes, with filters supplied by Instagram, so they would still have the original camera images.
> Because it’s trained on “real” people, will it be easier to generate ugly people?
In the literature, testing concepts in image generation means asking human graders "which image do you prefer for this caption?", so the answer is probably no. You could speculate on all the approaches that would help this system learn the concept "ugly", and they would probably work, but it would be hard to measure.
Try asking for asymmetry. The more images of faces you average, the better they look.
“Not available in your location
Imagine with Meta AI isn't available in your location yet. You can learn more about AI at Meta in the meantime and try again soon.”
I wonder why it's region-locked?
Meta's AI stickers also only seem to be available in the US for now (or at least not in WhatsApp in the EU).
AI stickers are in my region (not USA) but imagine is not.
Which region is locked? That might give a clue.
Canada seems to be locked out as well.
Is there anyone outside the US that isn't locked out, or was this a US-only release? Could this possibly have to do with the sanctions on China?
New Zealand is locked out. (Normally we get first dibs on things being a small test market)
Me to Meta: Please let me in(dia), we are locked out too
I got the same from the Netherlands
Norway is blocked, so probably some GDPR issues.
Brazil. So it's unlikely to be GDPR-related, unless they're also treating our LGPD as a special case.
Doesn't do "hard prompts" better than other systems I've tried. Looks pretty similar to them too.
eg: "horse riding an astronaut", "upside-down mini cooper", "kanji alphabet soup".
Ooh, so that's what they are called! I tried to do "bicycle in a jar" and they could do it, but when I did "car in a jar" or "toy car in a jar" all of them failed.
I don't know if that's the right kind of term, it's just a list of prompts I've noticed usually don't work in image generation models - specifically they ignore what you said and just make an image with some of the words in it.
This looks like a SD1.5-like latent diffusion model though. The giveaway is that it can't spell.
Sometimes you need to massage the prompt a bit to avoid it getting distracted. e.g., it took me a few tries to get a family having dinner inside a home aquarium. I'd specified it being a large feature wall aquarium, and it got hung up on showing the dining table in front of the feature wall, rather than literally inside the aquarium.
> It can handle complex prompts better than Stable Diffusion XL, but perhaps not as well as DALL-E 3
This is an interesting statement, as Stable Diffusion XL implementations vary from "worse than SD 1.5" to "competitive with DALL-E 3."
It depends what you want to gen and what prompting style you prefer. I have found SD 1.5/6 to be far more flexible than SDXL. SDXL seems more "neutered" and biased towards a specific style (like DALL-E/Midjourney); but this may change as people train more diverse checkpoints and loras for SDXL.
See, this is totally my opposite experience. SDXL handles styles incredibly well... With style prompting.
Hence my point. SDXL implementations vary wildly. For reference I am using Fooocus.
Interesting. Unlike some other popular image generation training, is there a chance that Meta technically got copyright permission for many/most of the images that were posted to its properties?
I'm thinking: When the user who uploaded the image was also the copyright holder, that might've been covered by an agreement that technically permitted this use by Meta.
(Copyright isn't the only legal issue, though. For example, a person in a photo that someone else uploaded doesn't necessarily lose right to their likeness being used for every purpose to which a generative AI service might be put.)
They probably added a clause to their terms of service retroactively granting them permission to use your images for this purpose.
I was thinking Meta might be on more solid copyright ground here than most large generative AI models have been.
(Not that legal ground is going to stop anyone, in a generative AI race worth trillions of dollars.)
Meta is asking me to log in with my Facebook account. Then, after authenticating with my FB account, Meta says I don't have a Meta account.
Is this all some sort of scam to get me to click accept on whatever godforsaken ToS comes with a meta account? If the FB account is good enough to freakin AUTHENTICATE me then just use that ffs.
I wonder what other purposes FB have used those 1.1B+ publicly visible photos to train models for?
Anything that classifies and/or recommends images will likely be a deep learning model these days.
The images of ourselves have now been absorbed into an AI.
An intelligence that knows a shit ton about a very very large number of people.
I'm not sure if they cut me off for generating too many images or because of the content of my images. Everything is now giving the response "This image can't be generated. Please try something else."
This only started after I put in the prompt "manbearpig did 9/11". It was OK with some really weird stuff though.
Wow, another reason to delete my accounts.
If nothing else they've done so far has convinced you to delete your accounts, then why would this? They've done worse before.
Really struggles with fingers, probably worse than any AI image generator I've seen so far. Maybe there aren't a lot of finger-showing images on IG and FB!
To me these innovations seem akin to concept cars in the motor industry; there's some utility, until some executive takes it center stage and pisses off most of the core users.
The biggest value in these networks is real user-generated content; you can't beat billions of real users capturing real content and sharing habitually.
Even if wording in the terms permits certain research/usage, you've got market and political climates to consider.
I tried this and was floored by how good it was.
And weirdly, every image it generates is sort of a combination of your grandma and an influencer on a beach on a tropical island.
All I can say is it’s really fast
This is almost certainly going to be used to generate actual pictures of real people in the nude etc.
For that to work, you need to have a dataset of nudes to start with.
Given that Instagram is pretty anti-nudity (well, women's nipples at least), I'd be surprised if there is enough data for it to work properly.
It's not impossible, but I'd be surprised.
Doesn't seem to be possible. I tried a variety of real people (Tom Hanks, George Bush, George Washington) and each time got the error "This image can't be generated. Please try something else." It did work with some fictional characters though, namely Santa and Mickey Mouse. I'd rather not try asking for nudes while at work, so I can't attest to that part either way. Though "Sherlock Holmes dancing" looked pretty clearly like Benedict Cumberbatch (though the face was pretty mangled looking).
That has been a thing since 2019's DeepNude, and the world hasn't ended. If anything it has been relegated to obscurity.
Is it obscure or just not in the news you follow? There have been many reports about significant impacts on school students:
https://www.technologyreview.com/2023/12/01/1084164/deepfake...
https://www.cbsnews.com/news/deepfake-nude-images-teen-girls...
https://www.theverge.com/2023/6/8/23753605/ai-deepfake-sexto...
It’s also showing up in elections:
https://www.telegraph.co.uk/news/2023/05/14/turkey-deepfake-...
https://www.wired.com/story/deepfakes-cheapfakes-and-twitter...
I encountered the same stories of people's faces being photoshopped onto nude models when I was a kid back in the 2000s. Deepfakes are nothing new.
Not really; the skill required to do a photoshop is just copy-paste and a bit of the healing brush tool. This is considerably easier than a deepfake. I also disagree with the idea that the quality is superior. Resolution is lower, and the seams are often visible. Though ultimately this is subjective: is a high-resolution still "better" than a low-resolution video?
The core complaint about deepfakes is, word for word, the exact same complaint about Photoshop: someone might use a computer to produce an image with someone's face pasted onto another person's body (presumably naked and doing a sexual act). People could - and no doubt some did - Photoshop classmates' faces onto nude women's bodies.
> If anything it has been relegated to obscurity.
oh man, if /b/ could read this they would be very upset right now
It's not obscure. There are a bunch of paid apps that allow you to "virtually undress" any image you upload.
Which is already causing pain for a bunch of people.
There are paid apps or websites for lots of obscure things, that's not really a high threshold to clear in today's world.
Yeah, the key takeaway from that sentence was the harm caused, not the obscurity.
At this point, who cares, honestly. The more "fake" generated nudes out there, the less of a novelty they are. And if everyone has the ability to generate an image of everyone naked, the value of "real" nudes will go up, but it will also be good cover for people who get their nudes leaked.
I’ve wondered about a similar thing. If there was something automatically constantly generating nudes of everyone, surely the noise would desensitize people to the signal.
How's that any different from the gazillions of more or less good "how would you look older/younger", "how would your kids look", "how would you look as Barbie" and whatnot tools? One click to generate a thousand waifus. It's not real; who cares.
Fake celebrity nudes pre-date the internet.
Barriers to entry were a lot higher, and distribution capacity was a lot lower. Surely you can see how the change in that combination could make for a significantly different reality now.
I honestly don't see the problem. Especially since any solution to the non-problem is censorship and big tech monopoly since a FOSS model can't be censored.
An LLM won't be able to estimate the size of my wiener. I can always claim it's the wrong size in the picture.
I really really doubt that. If anything, it'll be nerfed into complete uselessness.
So it's just faces?
Artists...leave.
Canada. It asked me to create Meta account only to tell me that it is "not available in your region".
Fuck you Meta, and fuck you Zuckerberg.
Yet another reason to steer far clear of anything Meta.
This is extremely shitty to a lot of users.
[flagged]
The title is misleading. It uses publicly available photos, which means it uses the same images as other AI models like GPT, Midjourney...
Who is gonna use these heavily moderated generators? You can't even generate a nipple or a famous person. There is almost no control or finetuning. There are a zillion checkpoints, loras, controlnets and ipadapters out there to get almost anything with SD. No filters. You can literally generate whatever you like.
They don't own the copyright, but they do have a "non-exclusive, royalty-free, transferable, sub-licensable, worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works". https://www.facebook.com/help/instagram/478745558852511
If they didn't have that (or something similar) they couldn't serve the image to other users. Well, they could, but without something like that someone will sue them for showing a picture they uploaded to someone they didn't want to see it (or any number of other gotchas).
They store the image or video (host/copy), distribute it over their network and to users (use/run), they resize it and change the image format (modify/translate), their site then shows it to the user (display/derivative work), and they can't control the setting in which a user might choose to pull up an image they have access to (the "publicly" caveat).
It sounds like a lot, but AFAIK that's what that clause covers and why it's necessary for any site like them.
It certainly does cover the needs of hosting and display to other users, but it doesn't permit just that. It's expansive enough to let them do just about anything they could imagine with the pictures.
Only insofar as legal precedent has established it to mean that. If someone sues you for a use that hasn't been found in court to fall under this clause it will be more difficult to win that case.
IANAL, and my jargon may be off, but I think that in the scenario where you get sued for something that's been litigated to fall under this clause in the past, you can basically say "even if we assume the evidence and claims are accurate, it's obviously in the clear based on prior cases", if the judge agrees, you win without going to trial, which is a "summary judgement" I think.
On the flip side, if someone is trying to apply the clause in a novel, not previously litigated way, you're way less likely to get that summary judgement and it will have to be argued in court.
It works the other way too, if I wrote a eula that used different phrasing than what's been established prior, say to make it more obviously cover just the normal stuff for user uploaded images, summary judgement is less likely to succeed because no court had ever weighed in on my novel phrasing as covering those actions in that way.
There's also the risk that if you make the phrasing too narrow (specifying resizing of the image) then when a new tech comes along that's reasonable to apply (e.g. some ML process to derive a 3d scene from images, or make them) exactly zero user uploaded images you store at that point could benefit from that until you go back and ask the user to agree to that too. The question then becomes how worth is narrowing the wording when you can accidentally paint yourself into a corner.
Or what if the phrasing "display on a monitor" had been used years back, in the pre-smartphone era? You could be sued for making user-uploaded media available to view on phones, since that wasn't in the license granted to you by your users!
When you cover all the little edge cases, you end up with the seemingly overbroad clause most companies use.
An important thing to remember is that the legal interpretation of a text can differ almost arbitrarily from the plain English meaning of the text as written.
Training generative ML tools is qualitatively different from showing on website, even if both are technically “derivative works”, so this is a massive bait-and-switch. Is it the first time something is acceptable by the letter of pre-existing law but not the spirit?
> Is it the first time something is acceptable by the letter of pre-existing law but not the spirit?
Well... no. It happens every time Google et al. find a new way to use your data. It's what all we German "privacy nuts" have warned people about for years, and the reason that the older German data protection laws and now EU regulations require you to state exactly what you are doing with data ("purpose limitation"). If companies can just write "oh well, we will use it for something", how can anyone evaluate whether they should accept, without knowing the future? Right. They can't.
So, this could be another case of the EU kicking Facebook in the face. We'll see.
You're just stating an agreement between Meta and cowboyscott. The copyright holder of the Ironman image never agreed to it.
The problem here is that cowboyscott doesn't own the copyright to the Ironman image. But his uploading of the image may fall under fair use in the US, or a similar copyright exemption in his country's copyright law. It effectively works as copyright laundering.
You don't even really need the middleman - Disney has surely uploaded pictures of Ironman to these sites so it would have them either way.
But I don't know if it's really laundered anything. If you say "Hey Meta AI, make me a poster for my cookie company that has Ironman eating my cookies", I'm pretty sure Disney could still sue you. It could still sue you if you instructed a human to draw a picture that had Ironman in it, so I don't even know if you need a new legal framework.
Do we even do Fair Use in the US anymore?
DMCA takedowns seem to suggest that this is not a thing any longer.
The user might upload something that they don't have rights to.
Technically the user is the one misbehaving, but we, Facebook, and any reasonable court know that users are doing that.
That's why there is a safe harbor provision in DMCA.
Copyright law as it exists today allows one to create transformative works. There is little to suggest that an AI trained on copyrighted works is in any way violating that copyright when inference is run.
This is why you don't also download the music from Stories when you download Stories: no such agreement with Spotify.
You forgot the "in perpetuity" /s
Another method of copyright laundering is doing ML training in a country where it isn't restricted under copyright law.
Personally, I'm on the side that using copyrighted data as machine-learning input doesn't violate copyright. Statistically, a learned model for generative AI doesn't retain even 1 bit of input, so it's hard to say the NN model data infringes any copyright of the input source. Copyright applies to the expression, not the process. If the generative AI produces an image that's clearly a copy of a specific Ironman image which existed before the generation, that's copyright infringement.
"learned model for generative Ai doesn't retain even 1 bit of input". If that was true, it shouldn't be possible to trick the models into regurgitating their source material, but cleary that is possible [0].
[0] https://stackdiary.com/chatgpts-training-data-can-be-exposed...
LLMs are quite different from diffusion models, though. The model size vs. training set size is skewed the other way.
Copyright doesn't require a single bit of input to be shared. You can't avoid copyright by using a paintbrush for example, you're simply creating a derived work. You might still be in violation even if you create an entirely new context around the copied elements or substitute for the original in the market, as was the case with Warhol vs. Goldsmith.
Obviously not every generative output is a copyright violation, but it seems equally clear that there are outputs that would be if they were produced by humans.
I agree with you, but I think the argument is flawed. If you think about it like this, h265 also just steals 10% (or whatever the compression ratio is) of an artifact.
[dead]
>learned model for generative Ai doesn't retain even 1 bit of input.
It does. The data is just obfuscated.
When an image is uploaded, is it re-licensed?
So if you delete your image the entire trained data set is invalid because they no longer have license to the copyright?
If having copyright were a prerequisite of training data this would be true.
But in the US this hasn't been tested in the courts yet, and there's reason to think from precedent this legal argument might not hold (https://www.youtube.com/watch?v=G08hY8dSrUY - sorry don't have a written version of this).
And the lawsuits so far aren't faring well for those who think training should require having copyright (https://www.hollywoodreporter.com/business/business-news/sar...)
Most people in the fanfiction community recognize that it's probably not strictly allowed under copyright. However, the community response has generally been to do it anyway and try to respect the wishes of the author. Hence why you won't find Interview with the Vampire fanfiction on the major sites.
If anything, I think that severely hinders the pro-AI argument if fanfiction made by human authors are also bound by copyright.
ETA: I just tested it out, and you can totally create Interview with the Vampire fanfiction with Bing Compose. That presumably is subject to at least as strong copyright as human authors, and is thus a copyright violation.
> if we use a very strict interpretation of copyright, then things like satire ... would be in jeopardy.
Satire, criticism, reviews and journalism are explicitly permitted under fair use.
If I wish to publicly express my disdain or praise for your art, it is necessary that I can show samples / pictures/ photos when I express whatever my deal is.
The difference is that when writing satire it's not strictly necessary to possess the work to do so. You can merely hear of something and make a joke or a fake story. Training data, on the other hand, uses the actual material, not some derivative you gleaned from a thousand overheard conversations.
> So if you delete your image the entire trained data set is invalid because they no longer have license to the copyright?
The portion of the training set might. The actual trained result -- the outcome of a use under the license -- would, at least arguably, not.
Of course, that's also before the whole "training is fair use and doesn't require a license" issue is considered, which if it is correct renders the entire issue moot -- in that case, using anything you have access to for training, irrespective of license, is fine.
Let's say you post an image, and I learn something by viewing it, then you delete the image. Is my memory of your now deleted image wiped along with everything I learned from viewing it?
I have seen plenty of images on the internet where I would gladly accept this as a thing. Unfortunately, what's been seen can't be unseen.
Unfortunately computer memory, unlike your memory, is so easily wiped. Having the infrastructure in place to make sure it happens on the other hand, seems more like human memory.
Now that is a multi-million dollar question.
How derived data is handled after a copyright license is revoked is a question that's hard to answer.
I suspect that the data will be deleted from the dataset, and any new models will not contain derivatives from that image.
How legal that is, is expensive to find out. I suspect you'd need to prove that your image had been used, and that its use contradicts the license that was granted. It would take a lot of lawyer and court time to find out. (I'm not a lawyer, so there might already be case history here. I'm just a sysadmin who's looking after datasets.)
postscript: something something GDPR. There are rules about processed data, but I can't remember the specifics. There are caveats about "reasonable"
> Now that is a trulti-million dollar question.
Huh? I think you want s/(:?m[^m]*)m/tr/
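For anyone who wants to check the joke at home: `(:?` is presumably a typo for the non-capturing group `(?:`. A quick sketch with Python's re module, applying the proposed substitution to the parent's phrase:

```python
import re

# s/(?:m[^m]*)m/tr/ on "multi-million": the greedy match consumes
# "multi-m" (first "m", then "ulti-", then the next "m"), and the
# replacement "tr" leaves "tr" + "illion".
print(re.sub(r"(?:m[^m]*)m", "tr", "multi-million"))  # prints "trillion"
```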
Yeah, derivative works in this case afaik were always meant as "we can generate thumbnails etc" and not "we will train our AI with it". I am pretty sure this is illegal in many countries...
I think Meta is already assuming that there will be no liability for training with copyrighted material. I find it very unlikely that image owners will win the AI training battle.
I'd be extremely surprised if the "Mickey Mouse standing on the moon" example image was a legitimate way to "launder copyright".
The interesting question is just who will be liable for the copyright violation: The party that hosts the AI service? The party that trained it on copyrighted images? The user entering a prompt? The (possibly different) user publishing the resulting image?
Here the problem isn't that the AI was trained on Mickey, but that it generated Mickey. The generated images can still violate copyright if too similar to copyrighted artwork - if published.
I think AI companies are working hard on preventing generated images from being similar to training images unless the user very explicitly asks the result to look like some well known image/character.
You can violate copyright by intentionally drawing Mickey Mouse, the medium of drawing is not relevant (AI can be considered a medium, as much as a digital camera is a medium)
I can draw as many Disney characters as I want to and Disney has no recourse as long as I'm not publishing them somewhere.
Clearly you're still living in a pre-Neuralink™ world.
Yes, but importantly, generating them with the AI trained on Mickey is not.
Tattoo artists also make money off generating infringing content all the time. I thought the issue was not in the generation but in the subsequent usage. Outlawing generation borders on thoughtcrime.
> The interesting question is just who will be liable for the copyright violation
I don't think this is going to be hard for courts. If you borrow your friend's copy of a copyrighted text, go to Kinko's and duplicate it, then distribute the results - you are the one violating copyright, not your friend or Kinko's.
The same will hold here I think, mutatis mutandis. This is all completely separable from the training issue.
MM will be public domain in Jan.
Only the first movie, the trademark is not expiring.
the author of a novel has a copyright to the contents, but can't trademark the contents of the novel.
the same is true for artwork.
Not going to happen.
When Disney did their copyright extension last time, they had bipartisan influence.
Now Disney is in the middle of the culture war, and there is no Republican that will risk being primaried to support Disney.
Given that you de facto need 60 votes in the Senate, it is not happening.
Some early versions will.
The person getting sued there would be the user of the model, not meta, as much as I wish that wasn't how it is. If you use photoshop to infringe on copyright, you're at fault, not Adobe.
It is in big bold letters right in instagram's terms of service: "We do not claim ownership of your content, but you grant us a license to use it."
This isn't about copyright, it is about the fact that most people don't realize that by posting photos, they are licensing those photos.
A lot of the content posted there isn't owned by the people who post it, that's a big part of the problem.
Ultra shitty corporate interests win again...
I don’t agree in this case. Well, maybe I agree on the ultra shitty corporate part. But these are public photos, and if I’d looked at one it could have some influence, probably tiny, on my own drawings. Seems reasonable that the same would be true of my tools.
If they were scanning my private messages, things would be different.
No. I mean two things:
1 - human experience ends up informing human ingenuity. A sketch of Wile E. Coyote comes from someone's (Chuck Jones?) experience of dogs and seeing coyotes, plus innumerable experiences of things that are funny, constraints from experience of certain features that do or don't work well on animation cels, etc. Perhaps a stray tweak in his ears comes from a Rembrandt seen as a child, or from a glance at a sketch in progress by the person sitting at the next easel in a drawing class long ago.
In today's jargon, our experiences are all part of our training set (though today's massive RNN models are infinitesimal by comparison).
And I think of my tools the same: a ton of inputs stirred together is fine by me.
2 - a difference is that fb’s model is made from public posts: posts offered for anyone to see. In the human case even my private experiences are part of my “training set.”
It seems like this is still very much a legal gray area. If it's concretely decided in court that generative AI cannot produce copyrighted work then I assume it makes no difference what the source of the copyrighted training material was.
It's not a legal way to "launder" copyrighted images. For the things where copyright law grants exclusive rights to the authors, they need the author's permission, and having permission from someone else plus plausible deniability is not a defense against copyright violation. The only thing that can change is how damages are assessed: successfully arguing that the infringement wasn't intentional can mean they have to pay ordinary damages, not the punitive triple amount.
However, as others note, all the actions of the major IT companies indicate that their legal departments feel safe in assuming that training a ML model is not a derivative work of the training data, they are willing to defend that stance in court, and expect to win.
Like, if their lawyers weren't sure, they'd definitely advise management not to do it (explicitly, in writing, to cover their arses), and if executives wanted to take on large risks despite such legal warnings, they'd do so only after getting confirmation from the board and shareholders (explicitly, in writing, to avoid major personal liability). For publicly traded companies the shareholders are the public, so they'd all be writing about these legal risks in all caps in every report to shareholders.
> However, as others note, all the actions of the major IT companies indicate that their legal departments feel safe in assuming that training a ML model is not a derivative work of the training data, they are willing to defend that stance in court, and expect to win.
I think the move will be to argue fair use, declaring the derivative work to be transformative, and possibly to point out that only a small amount (1%-3%) of the original data is retained.
"Is training with user-generated content a way to launder copyrighted images?" Pretty much.
You are very very unlikely to stumble upon something resembling a training image closely enough for copyright to take effect, and in any event this is not the purpose of these systems. You may be running into trademarked content, but in that case you can't speak of laundering, because you can not use a trademark even if the image is AI generated.
"You are very very unlikely to stumble upon something resembling a training image closely enough for copyright to take effect" That is definitely not the case, and is completely contingent on the prompt matching closely what the training set has in it.
"I think you misunderstand copyright and perhaps conflate it with a trademark" Nope. Quite the contrary.
Training on copyrighted content isn't a copyright violation. Sarah Silverman is currently learning that the hard way.
What about all the photos of people at Disney taking pictures of themselves standing next to Mickey Mouse etc.
I don’t think there’s a question that people are allowed to upload photos like that.
> I don’t think there’s a question that people are allowed to upload photos like that.
Technically, that's a copyright violation. Disney just opts not to enforce their rights for that sort of use.
Similarly, you technically can't take and post pictures of statues, paintings, some buildings, etc., and some rightsholders do enforce their copyright when people do those things.
> Technically, that's a copyright violation.
Anything outside the scope of fair use would be within Disney's rights to restrict, but given the actual public policies and guidance on photography at Disney parks, I think there is a very strong case that noncommercial photography (for people who are present as paid guests) is permitted by implied license.
> Similarly, you technically can't take and post pictures of statues, paintings, some buildings, etc., and some rightsholders do enforce their copyright when people do those things.
Well, not buildings if they are in or visible from a public place in the US, at least under copyright law. (Photography of some, particularly government, buildings may run afoul of other law.) This may be different in other countries.
> Well, not buildings if they are in or visible from a public place in the US, at least under copyright law.
Ahhh, you're correct. This was apparently changed in 1990. I just hadn't updated my mental model in accordance with that change.
https://www.nolo.com/legal-encyclopedia/copyright-architectu...
At this point all big players assume it's okay to train on copyrighted materials.
If you can[0] crawl materials from other sites, why can't you crawl from your own site?
[0]: "can" in quotes
Because your users have agreed to terms of service that don't mention analyzing the images to train an AI model.
If their legal assumption is it's not a copyright violation to train a model on some image, then it's logical that their ToS doesn't mention it, as they need the user's permission only for the scenarios where the law says that they do.
Within Polish (European?) legislation, an agreement on the use of copyright needs to explicitly state in what areas you are allowed to copy/use the copyrighted work. So, e.g., if an agreement didn't explicitly state that a company can use the work on TV (or radio, or something), then they don't have the right to do so.
When new mediums are invented (like the internet), you need to sign an annex to the agreement extending it to that medium.
Having said that, I would still consider it fair use to train a model on given images, but using the trained model to replicate a specific style etc. would most likely be considered a new medium. (IANAL though)
It is no different from an actual live artist learning from the works of others
> Is training with user-generated content a way to launder copyrighted images?
Doubt it. If you upload child porn to Instagram and they distribute it - it's still an Instagram problem, AFAIK.
Child porn is not a copyright issue, so the DMCA safe harbor for UGC doesn't apply, and it's criminal, so the Section 230 safe harbor doesn't apply. So it's very much not an applicable example as to whether use of UGC in other contexts is a way of leveraging safe-harbor protections for content, whether for copyright or more generally.
It's still an Instagram problem if someone uploads copyrighted info and Instagram distributes it...
As long as Instagram follows the DMCA and takes it down, they're covered by Section 230, so I don't know if it's a problem per se.
It literally/legally isn't and is one of the reasons US is king for hosting services like IG. Read Section 230.